Columns: url (string, 14–2.42k chars) · text (string, 100–1.02M chars) · date (string, 19 chars) · metadata (string, 1.06k–1.1k chars)
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-common-core/chapter-2-functions-equations-and-graphs-2-2-direct-variation-practice-and-problem-solving-exercises-page-71/14
## Algebra 2 Common Core Published by Prentice Hall # Chapter 2 - Functions, Equations, and Graphs - 2-2 Direct Variation - Practice and Problem-Solving Exercises - Page 71: 14 #### Answer This is not a direct variation equation ($y$ does not vary directly with $x$). #### Work Step by Step Direct variation can be represented as $y=kx$, where $k$ is the constant of variation. This equation cannot be written in the form $y=kx$ because it has a nonzero $y$-intercept, so it is not a direct variation equation.
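As a minimal sketch (an illustration, not from the textbook): sample points satisfy a direct variation $y = kx$ exactly when the ratio $y/x$ is the same constant for every point, which fails as soon as there is a nonzero $y$-intercept.

```python
def is_direct_variation(points, tol=1e-9):
    """Return True if the (x, y) points fit y = k*x for a single constant k.

    A direct variation passes through the origin, so any nonzero
    y-intercept (e.g. y = 3x + 2) makes the ratios y/x disagree.
    """
    ratios = [y / x for x, y in points if x != 0]
    return bool(ratios) and all(abs(k - ratios[0]) < tol for k in ratios)

print(is_direct_variation([(1, 3), (2, 6), (3, 9)]))    # y = 3x     -> True
print(is_direct_variation([(1, 5), (2, 8), (3, 11)]))   # y = 3x + 2 -> False
```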
2018-10-20 16:50:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6378516554832458, "perplexity": 1324.0020164702273}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513009.81/warc/CC-MAIN-20181020163619-20181020185119-00135.warc.gz"}
https://cstheory.stackexchange.com/questions/12404/choosing-one-number-from-each-set-so-that-the-difference-between-maximum-and-min
# Choosing one number from each set so that the difference between maximum and minimum is minimized Suppose I have four sets A={0, 4, 9}, B={2, 6, 11}, C={3, 8, 13}, and D={7, 12}. I need to choose exactly one number from each of these sets, so that the difference between the largest and smallest chosen numbers is as small as possible. What type of problem is this? Is there a graph algorithm that could be used to solve this problem? • Is this question really about groups or is it actually about sets? Aug 25 '12 at 20:29 • Edited for clarity Aug 26 '12 at 1:38 • the tags seem random Aug 26 '12 at 16:11 • I hope you enjoyed seeing people do duplicate work, but that is against some vague ethics. Please refrain from it in the future. Aug 27 '12 at 15:06 In general minimum range matroid basis problems can be solved in $O(n\log n)$ time, plus $O(n)$ steps of a subroutine that finds a basis of a set with corank one: sort the elements from smallest to largest and then process the elements one by one. While you process the elements maintain an independent set $I$; when you process an element $e$, add $e$ to $I$ and, if that addition causes $I$ to become dependent, kick out the minimum weight element in the unique circuit of $I$. The minimum range basis is one of the sets $I$ that you found in this process: the one that has full rank and has as small a range between min and max as possible. For your partition matroid special case it's easy to find the circuit in $I$ when it becomes dependent: a circuit happens when $e$ belongs to the same group as an element $f$ that you previously added, and $f$ is the element you should kick out of $I$. So the whole algorithm takes $O(n\log n)$ time, or possibly faster depending on how much time it takes to do the sorting step.
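A minimal sketch of the partition-matroid special case described above (my own illustration, not code from the thread): merge all values with their set index, sort, and sweep while remembering the most recent value from each set.

```python
def min_range_choice(sets):
    """Pick one number from each set minimizing (max - min) of the picks.

    Sort all (value, set-index) pairs, then sweep: keep the most recent
    value seen from each set (the element "kicked out" is the previous one
    from the same set); whenever every set is represented, the current
    selection is a candidate answer.
    """
    items = sorted((v, i) for i, s in enumerate(sets) for v in s)
    latest = {}   # most recent value seen from each set
    best = None   # (range, chosen values sorted)
    for v, i in items:
        latest[i] = v
        if len(latest) == len(sets):
            lo, hi = min(latest.values()), max(latest.values())
            if best is None or hi - lo < best[0]:
                best = (hi - lo, sorted(latest.values()))
    return best

A, B, C, D = {0, 4, 9}, {2, 6, 11}, {3, 8, 13}, {7, 12}
print(min_range_choice([A, B, C, D]))  # (3, [6, 7, 8, 9]): pick 9, 6, 8, 7
```

This naive sweep recomputes min/max on every step; with a heap (or the matroid machinery from the answer) the whole thing matches the $O(n\log n)$ bound.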
2022-01-18 16:53:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40903595089912415, "perplexity": 178.32832431050537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300934.87/warc/CC-MAIN-20220118152809-20220118182809-00583.warc.gz"}
http://cms.math.ca/cmb/msc/57S15?fromjnl=cmb&jnl=CMB
Search results. Search: MSC category 57S15 (Compact Lie groups of differentiable transformations). Results 1 - 2 of 2. 1. CMB 2007 (vol 50 pp. 365) Godinho, Leonor: Equivariant Cohomology of $S^{1}$-Actions on $4$-Manifolds. Let $M$ be a symplectic $4$-dimensional manifold equipped with a Hamiltonian circle action with isolated fixed points. We describe a method for computing its integral equivariant cohomology in terms of fixed point data. We give some examples of these computations. Categories: 53D20, 55N91, 57S15. 2. CMB 1999 (vol 42 pp. 248) Weber, Christian: The Classification of $\Pin_4$-Bundles over a $4$-Complex. In this paper we show that the Lie group $\Pin_4$ is isomorphic to the semidirect product $(\SU_2\times \SU_2)\rtimes \Z/2$, where $\Z/2$ operates by flipping the factors. Using this structure theorem we prove a classification theorem for $\Pin_4$-bundles over a finite $4$-complex $X$. Categories: 55N25, 55R10, 57S15.
2015-07-02 12:34:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9592874646186829, "perplexity": 1067.521679815315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095557.73/warc/CC-MAIN-20150627031815-00256-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.icir.org/mallman/pubs/ABP05/
Mark Allman / ICSI @mallman_icsi Mark Allman, Ethan Blanton, Vern Paxson. An Architecture for Developing Behavioral History. Workshop on Steps to Reduce Unwanted Traffic on the Internet (SRUTI), July 2005. PS | PDF | Slides Abstract: We present an architecture for large-scale sharing of past behavioral patterns about network actors (e.g., hosts or email addresses) in an effort to inform policy decisions about how to treat future interactions. In our system, entities can submit reports of certain observed behavior (particularly attacks) to a distributed database. When deciding whether to provide services to a given actor, users can then consult the database to obtain a global history of the actor's past activity. Three key elements of our system are: (i) we do not require a hard-and-fast notion of identity, (ii) we presume that users make local decisions regarding the reputations developed by the contributors to the system as the basis of the {trust} to place in the information, (iii) we envision enabling witnesses to attest that certain activity was observed \emph{without} requiring the witness to agree as to the behavioral meaning of the activity. We sketch an architecture for such a system that we believe the community could benefit from and collectively build. BibTeX: @inproceedings{ABP05, author = "Mark Allman and Ethan Blanton and Vern Paxson", title = "{An Architecture for Developing Behavioral History}", booktitle = "Proceedings of USENIX Workshop on Steps to Reducing Unwanted Traffic on the Internet", year = 2005, month = jul, } "We are what we repeatedly do. Excellence, then, is not an act, but a habit." --Aristotle
2023-03-29 10:02:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2271987348794937, "perplexity": 4410.771733288199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00668.warc.gz"}
https://quantumcomputing.stackexchange.com/questions/15235/what-kind-of-boolean-functions-are-faster-to-compute-on-qc
# What kind of boolean functions are faster to compute on qc? The Deutsch-Jozsa algorithm can decide whether some function $$f : \{0,1\}^n \rightarrow \{0,1\}$$ is constant or balanced. This is exponentially faster than on classical computers. If we consider the set of all Boolean functions $$f : \{0,1\}^n \rightarrow \{0,1\}$$, is there a characterization of, or intuition about, the properties of Boolean functions that achieve such a speedup compared to classical computation? Consider for example the AND gate, which ANDs all $$n$$ inputs. I don't know if this is faster on a quantum computer, but if yes, what do both functions share in common, and if not, what is different here compared to the constant-testing function? Following up on @luciano's answer, I think you are envisioning a quantum computer as being fast at evaluating functions, when in actuality, quantum computers are better at evaluating global properties of functions (and not, necessarily, the functions themselves). For example, referring to the Deutsch-Jozsa problem, consider two separate bags containing Boolean functions on $$n$$ variables. • In one bag (called "constant") we put the $$2$$ functions that either evaluate to $$0$$ for all $$2^n$$ inputs, or to $$1$$ for all $$2^n$$ inputs; and • In another bag (called "balanced") we put the functions that evaluate to $$0$$ for precisely $$2^{n-1}$$ inputs (and $$1$$ otherwise). If we were to scramble the bags and choose a random function, classically we'd have to evaluate the function a couple of times (and, worst case, up to $$2^{n-1}+1$$ times) to know from which bag we grabbed our function. But following the Deutsch-Jozsa algorithm, we only need to evaluate the function once on a quantum computer. This "balanced" vs. "constant" property is a global property of the functions, closer to what a Fourier transform evaluates. There are $$2^{2^n}$$ individual Boolean functions with $$n$$ inputs and $$1$$ output.
However, of all of these, there is only $$1$$ function on $$n$$ variables that performs the $$\mathsf{AND}$$ of all inputs (namely the $$\mathsf{AND}$$ function), and only $$1$$ that performs the $$\mathsf{XOR}$$ of all inputs (namely the $$\mathsf{XOR}$$ function). The Deutsch-Jozsa algorithm is about classifying an oracle $$f$$ as constant/balanced. The complexity of executing the oracle $$f$$ itself is not directly relevant for that classification. What is relevant is how many executions of the oracle $$f$$ are needed to answer the constant/balanced question. An example: let's take an $$f$$ with $$n=2$$. You don't know it yet, but $$f$$ is defined as the classical XOR gate, and is therefore balanced. Classically, you need to execute $$f$$ 3 times ($$2^{n-1}+1$$ in the general case) with different inputs in order to have enough information to know that it is balanced. In contrast, a quantum implementation needs a single execution. Note that the fact that XOR is linear does not play a role here.
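A small classical sketch (an illustration under the answer's assumptions, not code from the thread) that counts oracle queries shows why the classical worst case is $$2^{n-1}+1$$:

```python
from itertools import product

def classify(f, n):
    """Decide constant vs. balanced classically by querying the oracle f.

    Worst case requires 2**(n-1) + 1 queries; Deutsch-Jozsa needs one.
    """
    first = None
    for queries, x in enumerate(product([0, 1], repeat=n), start=1):
        y = f(x)
        if first is None:
            first = y
        elif y != first:
            return "balanced", queries      # two different outputs seen
        if queries == 2 ** (n - 1) + 1:
            return "constant", queries      # a majority of inputs agree

print(classify(lambda x: x[0] ^ x[1], 2))  # ('balanced', 2)
print(classify(lambda x: 0, 2))            # ('constant', 3)
```

Note the adaptive classical tester can get lucky and stop early; only the worst case needs the full $$2^{n-1}+1$$ queries.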
2021-05-16 02:43:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7253422737121582, "perplexity": 448.09398725561294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991659.54/warc/CC-MAIN-20210516013713-20210516043713-00349.warc.gz"}
https://www.spanishpropertyinsight.com/discussion/reply/90943/
# Reply To: Why are illegal constructions not appearing in Spanish media #90943 Anonymous Participant @katy wrote: Similar with many British living here, they live in a bubble, appeasing the Spanish and accusing fellow British of being naive! And goodness knows we’ve had a belly-full of them posting exactly that on this forum in the past. One particularly repugnant one has thankfully been quiet recently, hopefully having crawled back into the hole he came from….in Fuengirola. (No, I don’t mean you, Fuengi!)
2017-02-27 04:52:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248513340950012, "perplexity": 14422.688506603747}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172447.23/warc/CC-MAIN-20170219104612-00064-ip-10-171-10-108.ec2.internal.warc.gz"}
https://dba.meta.stackexchange.com/questions/2981/add-an-mcve-section-to-our-help-page
Add an MCVE section to our help page? StackOverflow has a great page in their help section on how to create a minimal, complete, and verifiable example (MCVE). I think it would be very helpful if we had a similar document showing how to create a simple MCVE for SQL-related questions. Something like this: How to create a Minimal, Complete, and Verifiable Example for database-related questions Database-related questions asking for practical advice will get the most helpful answers if they provide a framework others can use to reproduce the problem. With that in mind, when asking a question please create a framework that is: • Minimal – Use as little code as possible that still produces the same problem • Complete – Provide all parts needed to reproduce the problem • Verifiable – Test the code you're about to provide to make sure it reproduces the problem Minimal Reducing the code to the bare minimum necessary to convey the problem makes the question easier to ask, and inherently easier to answer. Win-win. If you have a question about a query that has 400 columns, and not all 400 columns are required for the answer, only show the two or three columns that are pertinent to the question. Complete Include all the tables, queries, indexes, constraints, and other parts necessary to ensure the person answering your question has all the information at the outset. When including these pieces, provide the SQL scripts so others don't have to recreate them. Do not provide screenshots of tables or results. Verifiable Include test output, in text formatted as a table, to show both what you're currently getting and what your desired output should be. Search for "ascii table generator" on your favorite search engine - there are several that are extremely easy to use. An example question, including an MCVE framework I want to get the total number of ducks in each pond.
The ponds table: CREATE TABLE ponds ( PondName varchar(30) , DuckName varchar(30) ); Some sample data: INSERT INTO ponds (PondName, DuckName) VALUES ('Golden', 'Daffy') , ('Walden', 'Daisy'); My query so far: SELECT COUNT(DuckName) FROM ponds; The output I'm getting: ╔═══════╗ ║ Value ║ ╠═══════╣ ║ 2 ║ ╚═══════╝ The output I'd like to get: ╔════════╦═══════╗ ║ Pond ║ Count ║ ╠════════╬═══════╣ ║ Golden ║ 1 ║ ║ Walden ║ 1 ║ ╚════════╩═══════╝ Yes, the above sample question would be considered "too localized" for our site; this is just a quick example • Well, if we point them at the StackOverflow one and they still don't read it, why would they read one at a different domain name? I mean, I tried to get a living document going when the blog was hot. Didn't really catch on. dba.blogoverflow.com/2012/06/help-us-help-you – Aaron Bertrand May 15 '18 at 19:09 • I thought it was great! I've looked for that recently and was unable to find it. I'm going to add it into the list of canonical questions/answers, to make it easily available. – Max Vernon May 15 '18 at 19:12 • Also, to be clear, I'm not trying to sh*t on your idea. I've just lost hope that people will proactively read anything like that. – Aaron Bertrand May 15 '18 at 19:47 • The people who do read that of their own accord never actually need to. The people who need it, will never be able to find it. That's why I'm always pointing people to the Stack Overflow version. And I never thought you were trying to "sh*t on my idea"! – Max Vernon May 15 '18 at 19:50 • Hopefully, this would help some people figure out what we want when we close their question as "unclear". – Max Vernon May 15 '18 at 19:52 • Ok, so we should definitely add some links to that close reason. – Aaron Bertrand May 15 '18 at 20:00 • It would be nice if it were easier to paste query results into SE. Right now, it's kind of a pain to format. – James May 16 '18 at 15:50 • @James - I'd recommend using ozh.github.io/ascii-tables - it's super easy. 
– Max Vernon May 16 '18 at 15:50 Introducing an MCVE chapter to our Help Centre would be a good idea. However, looking at your example, it seems to take up a lot of space. After reading through the help page on StackOverflow (as proposed by Max in his comment), I have come to the conclusion that adding an MCVE page is doable and would be a valuable resource. It would let the community respond to questions that lack substance with a comment linking to the MCVE help page, and it would help newcomers who actually take the time to read through the Help Centre. • what do you mean by "take up a lot of space"? Do you mean the mcve itself? Or the proposed help page? – Max Vernon May 16 '18 at 15:43 • The proposed information / help page as you laid out in your question. Or would you limit the description / information in the help centre? – hot2use May 16 '18 at 17:11 • Darn. I intentionally tried to keep it as short and concise as possible, knowing many people like to "tl;dr" – Max Vernon May 16 '18 at 17:17 • The stackoverflow page on mcve has quite a bit more text, although it doesn't include an actual usable mini-mcve – Max Vernon May 16 '18 at 17:18 • Just had a look. You are right. – hot2use May 16 '18 at 17:27 • @MaxVernon There you go. Rephrased answer based on comments and the SO MCVE page. – hot2use May 16 '18 at 17:34 I've created a question containing the MCVE from above, here. Please feel free to modify the MCVE as you see fit, but let's try to keep it simple.
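As an illustration only (assuming SQLite; the GROUP BY query is one possible answer to the sample question and is not part of the original post), the example MCVE can be reproduced end-to-end:

```python
import sqlite3

# Rebuild the MCVE from the example question in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ponds (PondName varchar(30), DuckName varchar(30))")
conn.executemany("INSERT INTO ponds (PondName, DuckName) VALUES (?, ?)",
                 [("Golden", "Daffy"), ("Walden", "Daisy")])

# One way to get the desired per-pond counts: group by pond.
rows = conn.execute(
    "SELECT PondName, COUNT(DuckName) FROM ponds "
    "GROUP BY PondName ORDER BY PondName"
).fetchall()
print(rows)  # [('Golden', 1), ('Walden', 1)]
```

This is exactly the point of an MCVE: anyone can paste the schema and data, run the query, and compare against the desired output table.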
2019-06-17 15:06:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1908334344625473, "perplexity": 1567.4674544822922}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998509.15/warc/CC-MAIN-20190617143050-20190617165050-00162.warc.gz"}
https://www.sarthaks.com/2638055/given-specific-gravity-mercury-intensity-pressure-express-intensity-pressure-various
# Given that: Specific gravity of mercury = 13.6; intensity of pressure = 40 kPa. Express the intensity of pressure (gauge) in various units (S.I.) 1. 0.3 bar, 3.077 m of water, 0.15 m of mercury 2. 0.4 bar, 4.077 m of water, 0.299 m of mercury 3. 0.5 bar, 5.077 m of water, 0.339 m of mercury 4. None of the above Correct Answer - Option 2 : 0.4 bar, 4.077 m of water, 0.299 m of mercury Concept: Pressure = 40 kPa. 1 bar = $10^5$ Pa = 100 kPa ⇒ 40 kPa = 0.4 bar. $P = \rho g h$. For water, ρ = 1000 kg/m³, g = 9.81 m/s², P = 40 kPa $\Rightarrow h = \frac{{40 × {{10}^3}}}{{1000 × 9.81}} = 4.077\;\text{m of water}$. For mercury, ρ = 13.6 × 10³ kg/m³, P = 40 kPa $\Rightarrow h = \frac{{40 × {{10}^3}}}{{13.6 × 1000 × 9.81}} = 0.299\;\text{m of mercury}$
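The arithmetic above can be checked with a short sketch (assuming g = 9.81 m/s²):

```python
# Express a 40 kPa gauge pressure as bar and as equivalent liquid columns.
P = 40e3                        # pressure in Pa
g = 9.81                        # m/s^2
rho_water = 1000.0              # kg/m^3
rho_mercury = 13.6 * rho_water  # specific gravity of mercury = 13.6

bar = P / 1e5                       # 1 bar = 10^5 Pa
h_water = P / (rho_water * g)       # head of liquid, h = P / (rho * g)
h_mercury = P / (rho_mercury * g)

print(bar, round(h_water, 3), round(h_mercury, 3))  # 0.4 4.077 0.3
```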
2023-02-08 17:48:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5671383738517761, "perplexity": 11038.994402160397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500837.65/warc/CC-MAIN-20230208155417-20230208185417-00432.warc.gz"}
https://studysoup.com/tsg/1046823/single-variable-calculus-early-transcendentals-8-edition-chapter-11-10-problem-1
# If $f(x) = \sum_{n=0}^{\infty} b_n (x-5)^n$ for all $x$, write a formula for $b_8$ ISBN: 9781305270336 ## Solution for problem 1 Chapter 11.10 Single Variable Calculus: Early Transcendentals | 8th Edition Step-by-Step Solution: Step 1 of 3 Philosophy study guide. 1. What is philosophy? A pursuit of wisdom; critical thinking; questioning and asking about everything; creating a better understanding of your life, your situation, and yourself, and creating a fuller life. Branches of philosophy: Metaphysics: studying the characteristics of existence and reality; this can include questions of what life is really about, whether there is a god, or why we exist. Epistemology: studying knowledge; creating criteria and methodologies for what is known and why it is known or why it is truth. Ethics: studying values, morals, beliefs, and the principles by which a person guides themselves throughout their life. Aesthetics: studying art, beauty, and taste; this can include anything from modern art to the way someone decorates their house. Step 2 of 3 Step 3 of 3
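The solution itself is missing from the extract; as a sketch, the standard coefficient formula for a power series centered at $a = 5$ (a textbook fact, not taken from this page) gives:

```latex
f(x) = \sum_{n=0}^{\infty} b_n (x-5)^n
\quad\Longrightarrow\quad
b_n = \frac{f^{(n)}(5)}{n!},
\qquad\text{so}\qquad
b_8 = \frac{f^{(8)}(5)}{8!}.
```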
2021-10-20 10:22:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18799124658107758, "perplexity": 6337.296299589642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00279.warc.gz"}
https://people.ucsc.edu/~ealdrich/Teaching/Econ133/LectureNotes/risk.html
## Probabilistic Returns¶ Since we don’t know future returns, we will treat them as random variables. • We can model them as discrete random variables, taking one of a finite set of possible values in the future: $$r(s)$$, $$s = 1, \ldots, S$$. • In this case the probability of each value is $$p(s)$$, $$s=1,\ldots,S$$. • We can model them as continuous random variables, taking one of an infinite set of possible values in the future: $$r(s)$$, $$s \in \mathcal{S}$$ (e.g. $$\mathcal{S} = (-\infty, \infty)$$). • In this case the probability density of each value is $$f(s)$$, $$s \in \mathcal{S}$$. ## Expected Returns¶ Our best guess for the future return is the expected value: $\begin{split}E[r] & \equiv \mu = \sum_{s=1}^S r(s) p(s),\end{split}$ or $\begin{split}E[r] & \equiv \mu = \int_{s \in \mathcal{S}} r(s) f(s) dr(s).\end{split}$ ## Return Volatility¶ The amount of uncertainty in potential returns can be measured by the variance or standard deviation. • Volatility of returns specifically refers to the standard deviation, not the variance. $\begin{split}Std(r) & \equiv \sigma = \sqrt{\sum_{s=1}^S (r(s) - \mu)^2 p(s)},\end{split}$ or $\begin{split}Std(r) & \equiv \sigma = \sqrt{\int_{s \in \mathcal{S}} (r(s) - \mu)^2 f(s) dr(s)}.\end{split}$ ## Expectation and Variance Example¶ State Probability Return Severe Recession 0.05 -0.37 Mild Recession 0.25 -0.11 Normal Growth 0.40 0.14 Boom 0.30 0.30 What are $$\mu$$ and $$\sigma$$? $\begin{split}\mu & = 0.05*(-0.37) + 0.25*(-0.11) \\ & \qquad \qquad + 0.40*0.14 + 0.30*0.30 = 0.10\end{split}$ $\begin{split}E[r^2] & = 0.05*(-0.37)^2 + 0.25*(-0.11)^2 \\ & \qquad \qquad + 0.40*(0.14)^2 + 0.30*(0.30)^2 = 0.04471\end{split}$ $\begin{split}\sigma^2 & = E[r^2] - \mu^2 = 0.04471 - 0.10^2 = 0.03471, \\ \sigma & = \sqrt{0.03471} \approx 0.186\end{split}$ ## Assumption of Normality¶ It will often be convenient to assume asset returns are normally distributed. • In this case, we will treat returns as continuous random variables.
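The expectation/variance example can be recomputed in a few lines (values taken from the table above; this is an illustration, not part of the notes):

```python
# Four states with their probabilities and returns, as in the example table.
probs   = [0.05, 0.25, 0.40, 0.30]
returns = [-0.37, -0.11, 0.14, 0.30]

mu    = sum(p * r for p, r in zip(probs, returns))       # E[r]
er2   = sum(p * r * r for p, r in zip(probs, returns))   # E[r^2]
var   = er2 - mu ** 2                                    # sigma^2
sigma = var ** 0.5                                       # volatility

print(round(mu, 2), round(er2, 5), round(sigma, 4))  # 0.1 0.04471 0.1863
```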
• We can use the normal density function to compute probabilities of possible events. • We will not assume that the returns of different assets come from the same normal distribution; instead, each asset's returns come from its own normal distribution. ## Differing Normal Distributions¶ As an example, suppose that • Amazon stock (AMZN) has an expected monthly return of 3% and a volatility (standard deviation) of 8%. • Coca-Cola stock (KO) has an expected monthly return of 1% and a volatility (standard deviation) of 4%. What do their probability distributions look like? ## Implications of Normality¶ The assumption of normality is convenient because • If we form a portfolio of assets whose returns are normally distributed, then the distribution of the portfolio return is also normally distributed. • Recall that if $$X_i \sim \mathcal{N}(\mu_i, \sigma_i)$$, $$i = 1,\ldots,N$$, then $$W = \sum_{i=1}^N w_i X_i$$ is also normally distributed (where $$w_i$$ are constant weights; strictly, this requires the $$X_i$$ to be jointly normal). • The mean and the variance (or standard deviation) fully characterize the distribution of returns. • The variance or standard deviation alone is an appropriate measure of risk (no other measure is needed). ## Estimating Means and Volatilities¶ Typically we don’t know the true mean and standard deviation of Amazon and Coca-Cola returns. What do we do? • Use historical data to estimate them. • Collect $$N+1$$ past prices of each asset for a particular interval of time (daily, monthly, quarterly, annually). • Compute $$N$$ returns using the formula $\begin{split}r_t & = \frac{P_t - P_{t-1}}{P_{t-1}}.\end{split}$ We don’t include dividends in the return calculation above because we use adjusted closing prices, which account for dividend payments directly in the prices.
## Estimating Means and Volatilities¶ Compute the sample mean of returns $\begin{split}\hat{\mu} & = \frac{1}{N} \sum_{t=1}^N r_t.\end{split}$ Compute the sample variance of returns $\begin{split}\hat{\sigma}^2 & = \frac{1}{N-1} \sum_{t=1}^N (r_t - \hat{\mu})^2,\end{split}$ and take its square root to get the sample standard deviation $$\hat{\sigma}$$. The “hats” indicate that we have estimated $$\mu$$ and $$\sigma$$: these are not the true, unknown values. ## Estimating Means and Volatilities - Example¶ Let’s collect the $$N + 1 = 13$$ closing prices for Amazon and Coca-Cola between 3 Jan 2012 and 2 Jan 2013. • We will only keep the closing price on the first trading day of each month. • We can then compute 12 monthly returns by taking the difference between consecutive beginning-of-month prices and dividing by the previous month’s price. • This gives us 12 returns that we can use to estimate the means and standard deviations. ## Risk-Free Returns¶ We will typically assume that a risk-free asset is available for purchase. • We will denote the risk-free return as $$r_f$$. • If an asset is risk free, its return is certain and has no variability: $\begin{split}E[r_f] & = r_f \\ Var(r_f) & = 0.\end{split}$ ## T-Bills as Risk-Free Assets¶ The return on a short-term government T-bill is usually considered risk free: • Although the price changes over time, the risk of default is extremely low. • Also, the holding period return can be determined at the beginning of the holding period (unlike for other risky assets). ## Compensation for Risk¶ If you can invest in a risk-free asset, why would you purchase a risky asset instead? • Risky assets compensate for risk through higher expected return. • If risky assets didn’t offer higher expected return, everyone would sell them, leading to a price decline today and a higher expected return: $\begin{split}\uparrow E[r_t] & = \frac{E[P_t] - \downarrow P_{t-1}}{\downarrow P_{t-1}}\end{split}$ • There is no guarantee that the actual return will be higher – only its expected value.
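The estimation recipe above (returns from prices, then the sample mean and the sample variance with the $$N-1$$ denominator) can be sketched with a made-up price series; the numbers here are hypothetical, not the AMZN/KO data:

```python
# Hypothetical adjusted closing prices at the start of consecutive months.
prices = [100.0, 105.0, 102.9]

# Simple returns r_t = (P_t - P_{t-1}) / P_{t-1}.
rets = [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]

n = len(rets)
mu_hat = sum(rets) / n
# Sample variance uses N - 1 in the denominator (Bessel's correction).
var_hat = sum((r - mu_hat) ** 2 for r in rets) / (n - 1)
sigma_hat = var_hat ** 0.5

print([round(r, 4) for r in rets])  # [0.05, -0.02]
```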
## Risk Premium & Excess Returns

The amount by which the expected return of a risky asset $$A$$ exceeds the risk-free return is known as the risk premium:

$$\text{rp}_{A,t} = E[r_{A,t}] - r_{f,t}.$$

The excess return measures the difference between a previously observed holding-period return of $$A$$ and the risk-free return:

$$\text{er}_{A,t-1} = r_{A,t-1} - r_{f,t-1}.$$

## Risk Premium & Excess Returns

• Note that excess returns can only be computed from past returns.

• We estimate risk premia with the sample mean of historical excess returns.

## Sharpe Ratio

The Sharpe Ratio is a measure of how much risk premium investors require per unit of risk:

$$\text{SR}_{A,t} = \frac{\mu_{A,t} - r_{f,t}}{\sigma_{A,t}}$$

• The Sharpe Ratio is a measure of risk aversion.

• It is often referred to as the price of risk.

• The Sharpe Ratio for a broad market index of assets (like the S&P 500) is referred to as the market price of risk.

• The true Sharpe Ratio is unknown, since we don't know $$\mu_{A,t}$$ and $$\sigma_{A,t}$$, but we can estimate them with historical returns.

For example, with Amazon's expected monthly return of 3%, volatility of 8%, and a monthly risk-free return of 0.2%:

$$rp_{AMZN} = 0.03 - 0.002 = 0.028$$

$$SR_{AMZN} = \frac{rp_{AMZN}}{0.08} = 0.35$$
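The risk-premium and Sharpe Ratio arithmetic above is a one-line computation; here is a sketch (the 0.002 risk-free rate is the value used in the example formulas above, and `sharpe_ratio` is my own helper name):

```python
def sharpe_ratio(expected_return, risk_free, volatility):
    """Risk premium per unit of risk: (E[r] - r_f) / sigma."""
    return (expected_return - risk_free) / volatility

rp_amzn = 0.03 - 0.002                     # risk premium = 0.028
sr_amzn = sharpe_ratio(0.03, 0.002, 0.08)  # Sharpe Ratio = 0.35
```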
https://open.kattis.com/problems/statisticians
# Statisticians

Statisticians like to create a lot of statistics. One simple measure is the mean value: the sum of all values divided by the number of values. Another is the median: the middle value among all values when they have been sorted. If there are an even number of values, the mean of the two middle values forms the median. These kinds of measures can be used, for example, to describe the population in a country, or even some part of the population in the country.

Anne Jensen, Maria Virtanen, Jan Hansen, Erik Johansson and Jón Þórsson want to find a statistical measurement of how many statisticians there are in the Nordic countries. To be more precise, they want to find out how many statisticians there are per unit area. As the population in the Nordic countries is well spread out, they will try the new measurement MAD, Median of All Densities. First put a square grid on the map. Then draw a rectangle aligned with the grid and calculate the density of statisticians in that area, i.e. the mean number of statisticians per unit area. After that, repeat the procedure until all possible rectangles have been covered. Finally, the MAD is the median of all statistician densities.

## Input

The first line of the input consists of two space separated numbers $h$ and $w$ describing the height and width of the square grid, where $1 \leq h \leq 140$ and $1 \leq w \leq 120$. The next line contains two space separated numbers $a$ and $b$ which are the lower and upper bounds on the allowed rectangle areas, i.e. $1 \leq a \leq \text{rectangle area} \leq b \leq w \times h$. Then follow $h$ lines, each with $w$ space separated numbers $s$ describing the number of statisticians in each square of the map, $0 \leq s \leq 10\, 000$. There will always exist a rectangle with an area in $[a,b]$.

## Output

The output consists of one line with the MAD. The number should be printed in number of statisticians per square and have absolute error less than $10^{-3}$.
## Sample Input 1

    4 2
    1 8
    6 5
    2 5
    2 9
    7 13

## Sample Output 1

    5.250000000

## Sample Input 2

    2 3
    2 4
    6 1 4
    2 7 1

## Sample Output 2

    3.667000000

CPU time limit: 4 seconds. Memory limit: 1024 MB. Difficulty: 7.5 (hard).
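The MAD procedure can be computed directly with 2-D prefix sums: enumerate every grid-aligned rectangle with area in $[a,b]$, record its density, and take the median. A brute-force sketch of my own (it illustrates the definition and reproduces both samples, but the full limits $h=140$, $w=120$ would need a much faster implementation to fit the time limit):

```python
def mad(h, w, a, b, grid):
    """Median of All Densities over all axis-aligned rectangles with area in [a, b]."""
    # 2-D prefix sums: pre[i][j] = sum of grid[0..i-1][0..j-1]
    pre = [[0] * (w + 1) for _ in range(h + 1)]
    for i in range(h):
        for j in range(w):
            pre[i + 1][j + 1] = grid[i][j] + pre[i][j + 1] + pre[i + 1][j] - pre[i][j]

    densities = []
    for r1 in range(h):
        for r2 in range(r1, h):
            for c1 in range(w):
                for c2 in range(c1, w):
                    area = (r2 - r1 + 1) * (c2 - c1 + 1)
                    if a <= area <= b:
                        s = (pre[r2 + 1][c2 + 1] - pre[r1][c2 + 1]
                             - pre[r2 + 1][c1] + pre[r1][c1])
                        densities.append(s / area)

    densities.sort()
    n = len(densities)
    if n % 2:
        return densities[n // 2]
    return (densities[n // 2 - 1] + densities[n // 2]) / 2
```

On Sample Input 2, the eleven qualifying rectangles have median density 11/3 ≈ 3.667, matching the expected output.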
https://people.maths.bris.ac.uk/~matyd/GroupNames/288/S3xC2%5E2xC12.html
## G = S3×C22×C12, order 288 = 2^5·3^2

### Direct product of C22×C12 and S3

Series: Derived, Chief, Lower central, Upper central

Derived series: C1 — C3 — S3×C22×C12

Chief series: C1 — C3 — C6 — C3×C6 — S3×C6 — S3×C2×C6 — S3×C22×C6 — S3×C22×C12

Lower central: C3 — S3×C22×C12

Upper central: C1 — C22×C12

Generators and relations for S3×C22×C12:

G = < a,b,c,d,e | a^2=b^2=c^12=d^3=e^2=1, ab=ba, ac=ca, ad=da, ae=ea, bc=cb, bd=db, be=eb, cd=dc, ce=ec, ede=d^-1 >

Subgroups: 890 in 499 conjugacy classes, 290 normal (22 characteristic)
C1, C2, C2 [×6], C2 [×8], C3 [×2], C3, C4 [×4], C4 [×4], C22 [×7], C22 [×28], S3 [×8], C6 [×2], C6 [×12], C6 [×15], C2×C4 [×6], C2×C4 [×22], C23, C23 [×14], C32, Dic3 [×4], C12 [×8], C12 [×8], D6 [×28], C2×C6 [×14], C2×C6 [×35], C22×C4, C22×C4 [×13], C24, C3×S3 [×8], C3×C6, C3×C6 [×6], C4×S3 [×16], C2×Dic3 [×6], C2×C12 [×12], C2×C12 [×28], C22×S3 [×14], C22×C6 [×2], C22×C6 [×15], C23×C4, C3×Dic3 [×4], C3×C12 [×4], S3×C6 [×28], C62 [×7], S3×C2×C4 [×12], C22×Dic3, C22×C12 [×2], C22×C12 [×14], S3×C23, C23×C6, S3×C12 [×16], C6×Dic3 [×6], C6×C12 [×6], S3×C2×C6 [×14], C2×C62, S3×C22×C4, C23×C12, S3×C2×C12 [×12], Dic3×C2×C6, C2×C6×C12, S3×C22×C6, S3×C22×C12

Quotients: C1, C2 [×15], C3, C4 [×8], C22 [×35], S3, C6 [×15], C2×C4 [×28], C23 [×15], C12 [×8], D6 [×7], C2×C6 [×35], C22×C4 [×14], C24, C3×S3, C4×S3 [×4], C2×C12 [×28], C22×S3 [×7], C22×C6 [×15], C23×C4, S3×C6 [×7], S3×C2×C4 [×6], C22×C12 [×14], S3×C23, C23×C6, S3×C12 [×4], S3×C2×C6 [×7], S3×C22×C4, C23×C12, S3×C2×C12 [×6], S3×C22×C6, S3×C22×C12

Smallest permutation representation of S3×C22×C12: on 96 points. Generators in S96:
(1 31)(2 32)(3 33)(4 34)(5 35)(6 36)(7 25)(8 26)(9 27)(10 28)(11 29)(12 30)(13 46)(14 47)(15 48)(16 37)(17 38)(18 39)(19 40)(20 41)(21 42)(22 43)(23 44)(24 45)(49 82)(50 83)(51 84)(52 73)(53 74)(54 75)(55 76)(56 77)(57 78)(58 79)(59 80)(60 81)(61 91)(62 92)(63 93)(64 94)(65 95)(66 96)(67 85)(68 86)(69 87)(70 88)(71 89)(72 90) (1 22)(2 23)(3 24)(4 13)(5 14)(6 15)(7 16)(8 
17)(9 18)(10 19)(11 20)(12 21)(25 37)(26 38)(27 39)(28 40)(29 41)(30 42)(31 43)(32 44)(33 45)(34 46)(35 47)(36 48)(49 61)(50 62)(51 63)(52 64)(53 65)(54 66)(55 67)(56 68)(57 69)(58 70)(59 71)(60 72)(73 94)(74 95)(75 96)(76 85)(77 86)(78 87)(79 88)(80 89)(81 90)(82 91)(83 92)(84 93) (1 2 3 4 5 6 7 8 9 10 11 12)(13 14 15 16 17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72)(73 74 75 76 77 78 79 80 81 82 83 84)(85 86 87 88 89 90 91 92 93 94 95 96) (1 5 9)(2 6 10)(3 7 11)(4 8 12)(13 17 21)(14 18 22)(15 19 23)(16 20 24)(25 29 33)(26 30 34)(27 31 35)(28 32 36)(37 41 45)(38 42 46)(39 43 47)(40 44 48)(49 57 53)(50 58 54)(51 59 55)(52 60 56)(61 69 65)(62 70 66)(63 71 67)(64 72 68)(73 81 77)(74 82 78)(75 83 79)(76 84 80)(85 93 89)(86 94 90)(87 95 91)(88 96 92) (1 52)(2 53)(3 54)(4 55)(5 56)(6 57)(7 58)(8 59)(9 60)(10 49)(11 50)(12 51)(13 67)(14 68)(15 69)(16 70)(17 71)(18 72)(19 61)(20 62)(21 63)(22 64)(23 65)(24 66)(25 79)(26 80)(27 81)(28 82)(29 83)(30 84)(31 73)(32 74)(33 75)(34 76)(35 77)(36 78)(37 88)(38 89)(39 90)(40 91)(41 92)(42 93)(43 94)(44 95)(45 96)(46 85)(47 86)(48 87) G:=sub<Sym(96)| (1,31)(2,32)(3,33)(4,34)(5,35)(6,36)(7,25)(8,26)(9,27)(10,28)(11,29)(12,30)(13,46)(14,47)(15,48)(16,37)(17,38)(18,39)(19,40)(20,41)(21,42)(22,43)(23,44)(24,45)(49,82)(50,83)(51,84)(52,73)(53,74)(54,75)(55,76)(56,77)(57,78)(58,79)(59,80)(60,81)(61,91)(62,92)(63,93)(64,94)(65,95)(66,96)(67,85)(68,86)(69,87)(70,88)(71,89)(72,90), (1,22)(2,23)(3,24)(4,13)(5,14)(6,15)(7,16)(8,17)(9,18)(10,19)(11,20)(12,21)(25,37)(26,38)(27,39)(28,40)(29,41)(30,42)(31,43)(32,44)(33,45)(34,46)(35,47)(36,48)(49,61)(50,62)(51,63)(52,64)(53,65)(54,66)(55,67)(56,68)(57,69)(58,70)(59,71)(60,72)(73,94)(74,95)(75,96)(76,85)(77,86)(78,87)(79,88)(80,89)(81,90)(82,91)(83,92)(84,93), 
(1,2,3,4,5,6,7,8,9,10,11,12)(13,14,15,16,17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96), (1,5,9)(2,6,10)(3,7,11)(4,8,12)(13,17,21)(14,18,22)(15,19,23)(16,20,24)(25,29,33)(26,30,34)(27,31,35)(28,32,36)(37,41,45)(38,42,46)(39,43,47)(40,44,48)(49,57,53)(50,58,54)(51,59,55)(52,60,56)(61,69,65)(62,70,66)(63,71,67)(64,72,68)(73,81,77)(74,82,78)(75,83,79)(76,84,80)(85,93,89)(86,94,90)(87,95,91)(88,96,92), (1,52)(2,53)(3,54)(4,55)(5,56)(6,57)(7,58)(8,59)(9,60)(10,49)(11,50)(12,51)(13,67)(14,68)(15,69)(16,70)(17,71)(18,72)(19,61)(20,62)(21,63)(22,64)(23,65)(24,66)(25,79)(26,80)(27,81)(28,82)(29,83)(30,84)(31,73)(32,74)(33,75)(34,76)(35,77)(36,78)(37,88)(38,89)(39,90)(40,91)(41,92)(42,93)(43,94)(44,95)(45,96)(46,85)(47,86)(48,87)>; G:=Group( (1,31)(2,32)(3,33)(4,34)(5,35)(6,36)(7,25)(8,26)(9,27)(10,28)(11,29)(12,30)(13,46)(14,47)(15,48)(16,37)(17,38)(18,39)(19,40)(20,41)(21,42)(22,43)(23,44)(24,45)(49,82)(50,83)(51,84)(52,73)(53,74)(54,75)(55,76)(56,77)(57,78)(58,79)(59,80)(60,81)(61,91)(62,92)(63,93)(64,94)(65,95)(66,96)(67,85)(68,86)(69,87)(70,88)(71,89)(72,90), (1,22)(2,23)(3,24)(4,13)(5,14)(6,15)(7,16)(8,17)(9,18)(10,19)(11,20)(12,21)(25,37)(26,38)(27,39)(28,40)(29,41)(30,42)(31,43)(32,44)(33,45)(34,46)(35,47)(36,48)(49,61)(50,62)(51,63)(52,64)(53,65)(54,66)(55,67)(56,68)(57,69)(58,70)(59,71)(60,72)(73,94)(74,95)(75,96)(76,85)(77,86)(78,87)(79,88)(80,89)(81,90)(82,91)(83,92)(84,93), (1,2,3,4,5,6,7,8,9,10,11,12)(13,14,15,16,17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96), 
(1,5,9)(2,6,10)(3,7,11)(4,8,12)(13,17,21)(14,18,22)(15,19,23)(16,20,24)(25,29,33)(26,30,34)(27,31,35)(28,32,36)(37,41,45)(38,42,46)(39,43,47)(40,44,48)(49,57,53)(50,58,54)(51,59,55)(52,60,56)(61,69,65)(62,70,66)(63,71,67)(64,72,68)(73,81,77)(74,82,78)(75,83,79)(76,84,80)(85,93,89)(86,94,90)(87,95,91)(88,96,92), (1,52)(2,53)(3,54)(4,55)(5,56)(6,57)(7,58)(8,59)(9,60)(10,49)(11,50)(12,51)(13,67)(14,68)(15,69)(16,70)(17,71)(18,72)(19,61)(20,62)(21,63)(22,64)(23,65)(24,66)(25,79)(26,80)(27,81)(28,82)(29,83)(30,84)(31,73)(32,74)(33,75)(34,76)(35,77)(36,78)(37,88)(38,89)(39,90)(40,91)(41,92)(42,93)(43,94)(44,95)(45,96)(46,85)(47,86)(48,87) ); G=PermutationGroup([(1,31),(2,32),(3,33),(4,34),(5,35),(6,36),(7,25),(8,26),(9,27),(10,28),(11,29),(12,30),(13,46),(14,47),(15,48),(16,37),(17,38),(18,39),(19,40),(20,41),(21,42),(22,43),(23,44),(24,45),(49,82),(50,83),(51,84),(52,73),(53,74),(54,75),(55,76),(56,77),(57,78),(58,79),(59,80),(60,81),(61,91),(62,92),(63,93),(64,94),(65,95),(66,96),(67,85),(68,86),(69,87),(70,88),(71,89),(72,90)], [(1,22),(2,23),(3,24),(4,13),(5,14),(6,15),(7,16),(8,17),(9,18),(10,19),(11,20),(12,21),(25,37),(26,38),(27,39),(28,40),(29,41),(30,42),(31,43),(32,44),(33,45),(34,46),(35,47),(36,48),(49,61),(50,62),(51,63),(52,64),(53,65),(54,66),(55,67),(56,68),(57,69),(58,70),(59,71),(60,72),(73,94),(74,95),(75,96),(76,85),(77,86),(78,87),(79,88),(80,89),(81,90),(82,91),(83,92),(84,93)], [(1,2,3,4,5,6,7,8,9,10,11,12),(13,14,15,16,17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70,71,72),(73,74,75,76,77,78,79,80,81,82,83,84),(85,86,87,88,89,90,91,92,93,94,95,96)], 
[(1,5,9),(2,6,10),(3,7,11),(4,8,12),(13,17,21),(14,18,22),(15,19,23),(16,20,24),(25,29,33),(26,30,34),(27,31,35),(28,32,36),(37,41,45),(38,42,46),(39,43,47),(40,44,48),(49,57,53),(50,58,54),(51,59,55),(52,60,56),(61,69,65),(62,70,66),(63,71,67),(64,72,68),(73,81,77),(74,82,78),(75,83,79),(76,84,80),(85,93,89),(86,94,90),(87,95,91),(88,96,92)], [(1,52),(2,53),(3,54),(4,55),(5,56),(6,57),(7,58),(8,59),(9,60),(10,49),(11,50),(12,51),(13,67),(14,68),(15,69),(16,70),(17,71),(18,72),(19,61),(20,62),(21,63),(22,64),(23,65),(24,66),(25,79),(26,80),(27,81),(28,82),(29,83),(30,84),(31,73),(32,74),(33,75),(34,76),(35,77),(36,78),(37,88),(38,89),(39,90),(40,91),(41,92),(42,93),(43,94),(44,95),(45,96),(46,85),(47,86),(48,87)])

144 conjugacy classes

| classes | order | size |
| --- | --- | --- |
| 1 | 1 | 1 |
| 2A ··· 2G | 2 | 1 |
| 2H ··· 2O | 2 | 3 |
| 3A, 3B | 3 | 1 |
| 3C, 3D, 3E | 3 | 2 |
| 4A ··· 4H | 4 | 1 |
| 4I ··· 4P | 4 | 3 |
| 6A ··· 6N | 6 | 1 |
| 6O ··· 6AI | 6 | 2 |
| 6AJ ··· 6AY | 6 | 3 |
| 12A ··· 12P | 12 | 1 |
| 12Q ··· 12AN | 12 | 2 |
| 12AO ··· 12BD | 12 | 3 |

144 irreducible representations

| dim | type | image | kernel | # reps |
| --- | --- | --- | --- | --- |
| 1 | + | C1 | S3×C22×C12 | 1 |
| 1 | + | C2 | S3×C2×C12 | 12 |
| 1 | + | C2 | Dic3×C2×C6 | 1 |
| 1 | + | C2 | C2×C6×C12 | 1 |
| 1 | + | C2 | S3×C22×C6 | 1 |
| 1 |  | C3 | S3×C22×C4 | 2 |
| 1 |  | C4 | S3×C2×C6 | 16 |
| 1 |  | C6 | S3×C2×C4 | 24 |
| 1 |  | C6 | C22×Dic3 | 2 |
| 1 |  | C6 | C22×C12 | 2 |
| 1 |  | C6 | S3×C23 | 2 |
| 1 |  | C12 | C22×S3 | 32 |
| 2 | + | S3 | C22×C12 | 1 |
| 2 | + | D6 | C2×C12 | 6 |
| 2 | + | D6 | C22×C6 | 1 |
| 2 |  | C3×S3 | C22×C4 | 2 |
| 2 |  | C4×S3 | C2×C6 | 8 |
| 2 |  | S3×C6 | C2×C4 | 12 |
| 2 |  | S3×C6 | C23 | 2 |
| 2 |  | S3×C12 | C22 | 16 |

Matrix representation of S3×C22×C12 in GL4(𝔽13), generated by:

     1  0  0  0
     0 12  0  0
     0  0  1  0
     0  0  0  1

    12  0  0  0
     0  1  0  0
     0  0 12  0
     0  0  0 12

    11  0  0  0
     0  4  0  0
     0  0 10  0
     0  0  0 10

     1  0  0  0
     0  1  0  0
     0  0  3  0
     0  0  1  9

     1  0  0  0
     0  1  0  0
     0  0 10  8
     0  0 12  3

G:=sub<GL(4,GF(13))| [1,0,0,0,0,12,0,0,0,0,1,0,0,0,0,1],[12,0,0,0,0,1,0,0,0,0,12,0,0,0,0,12],[11,0,0,0,0,4,0,0,0,0,10,0,0,0,0,10],[1,0,0,0,0,1,0,0,0,0,3,1,0,0,0,9],[1,0,0,0,0,1,0,0,0,0,10,12,0,0,8,3] >;

S3×C22×C12 in GAP, Magma, Sage, TeX

S_3\times C_2^2\times C_{12} % in TeX
G:=Group("S3xC2^2xC12"); // GroupNames label

G:=SmallGroup(288,989); // by ID

G=gap.SmallGroup(288,989); # by ID

G:=PCGroup([7,-2,-2,-2,-2,-3,-2,-3,192,9414]); // Polycyclic

G:=Group<a,b,c,d,e|a^2=b^2=c^12=d^3=e^2=1,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,b*c=c*b,b*d=d*b,b*e=e*b,c*d=d*c,c*e=e*c,e*d*e=d^-1>; // generators/relations
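As a quick sanity check on the header data (order 288, 144 conjugacy classes), both numbers follow from the direct-product structure S3 × C2 × C2 × C12: orders multiply, and conjugacy classes of a direct product are products of classes of the factors. A standard-library Python sketch of my own, not part of the GroupNames tooling:

```python
from itertools import permutations

# S3 as the six permutations of {0, 1, 2}
s3 = list(permutations(range(3)))

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Conjugacy classes of S3: {h g h^-1 : h in S3} for each g, deduplicated
s3_classes = {frozenset(compose(compose(h, g), inverse(h)) for h in s3) for g in s3}

order = len(s3) * 2 * 2 * 12                # |S3 x C2^2 x C12| = 6 * 2 * 2 * 12
num_classes = len(s3_classes) * 2 * 2 * 12  # abelian factors contribute one class per element
```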
https://huggingface.co/datasets/davidwisdom/reddit-randomness
# Reddit Randomness Dataset

A dataset I created because I was curious about how "random" r/random really is. This data was collected by sending GET requests to https://www.reddit.com/r/random for a few hours on September 19th, 2021. I scraped a bit of metadata about the subreddits as well. randomness_12k_clean.csv reports the random subreddits as they happened and summary.csv lists some metadata about each subreddit.

# The Data

## randomness_12k_clean.csv

This file serves as a record of the 12,055 successful results I got from r/random. Each row represents one result.

### Fields

• subreddit: The name of the subreddit that the scraper received from r/random (string)

• response_code: HTTP response code the scraper received when it sent a GET request to /r/random (int, always 302)

## summary.csv

As the name suggests, this file summarizes randomness_12k_clean.csv into the information that I cared about when I analyzed this data. Each row represents one of the 3,679 unique subreddits and includes some stats about the subreddit as well as the number of times it appears in the results.
### Fields • subreddit: The name of the subreddit (string, unique) • subscribers: How many subscribers the subreddit had (int, max of 99_886) • current_users: How many users accessed the subreddit in the past 15 minutes (int, max of 999) • creation_date: Date that the subreddit was created (YYYY-MM-DD or Error:PrivateSub or Error:Banned) • date_accessed: Date that I collected the values in subscribers and current_users (YYYY-MM-DD) • time_accessed_UTC: Time that I collected the values in subscribers and current_users, reported in UTC+0 (HH:MM:SS) • appearances: How many times the subreddit shows up in randomness_12k_clean.csv (int, max of 9) # Missing Values and Quirks In the summary.csv file, there are three missing values. After I collected the number of subscribers and the number of current users, I went back about a week later to collect the creation date of each subreddit. In that week, three subreddits had been banned or taken private. I filled in the values with a descriptive string. • SomethingWasWrong (Error:PrivateSub) • HannahowoOnlyfans (Error:Banned) • JanetGuzman (Error:Banned) I think there are a few NSFW subreddits in the results, even though I only queried r/random and not r/randnsfw. As a simple example, searching the data for "nsfw" shows that I got the subreddit r/nsfwanimegifs twice.
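The summary.csv schema above is easy to work with using the standard csv module. A sketch over a two-row in-memory excerpt in the described layout — the numeric values and dates below are invented for illustration; only the column names and the `Error:` sentinel convention come from the description above:

```python
import csv
import io

# Hypothetical excerpt in the summary.csv column layout
sample = """\
subreddit,subscribers,current_users,creation_date,date_accessed,time_accessed_UTC,appearances
nsfwanimegifs,43210,12,2015-03-01,2021-09-26,14:03:22,2
SomethingWasWrong,100,1,Error:PrivateSub,2021-09-26,14:03:25,1
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# creation_date can be a date or an error sentinel, so filter before parsing
valid_dates = [r["creation_date"] for r in rows
               if not r["creation_date"].startswith("Error:")]
total_appearances = sum(int(r["appearances"]) for r in rows)
```

Filtering on the `Error:` prefix handles all three sentinel rows (`Error:PrivateSub`, `Error:Banned`) without special-casing each value.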
http://physics.stackexchange.com/questions/34243/is-there-an-observable-of-time/34246
# Is there an observable of time? [duplicate]

In Quantum Mechanics, position is an observable, but time may not be. I think that time is simply a classical parameter associated with the act of measurement, but is there an observable of time? And if such an observable exists, what is the operator of time?

Comments:

• possible dup physics.stackexchange.com/questions/12287/… –  Yrogirg Aug 15 '12 at 18:24

• Possible duplicate: physics.stackexchange.com/q/6584/2451 –  Qmechanic Aug 15 '12 at 18:38

• In the first chapter of Srednicki's book on QFT he states that one route to QFT is to promote time to an operator on an equal footing with position. He says this is viable but complicated, so in general we do QFT by demoting position to a label on an equal footing with time. I don't know more about this but hope it may be of interest. –  Mistake Ink Aug 15 '12 at 18:53

• The first link above is a related but different question. The second link is more or less the same question, but the answers there are quite different from the answers below. –  Arnold Neumaier Aug 15 '12 at 19:02

## marked as duplicate by Qmechanic♦ Nov 7 '13 at 6:28

The problem of extending Hamiltonian mechanics to include a time operator, and to interpret a time-energy uncertainty relation, first posited (without clear formal discussion) in the early days of quantum mechanics, has a large associated literature; the survey article

P. Busch. The time-energy uncertainty relation, in Time in quantum mechanics (J. Muga et al., eds.), Lecture Notes in Physics vol. 734. Springer, Berlin, 2007. pp 73-105. doi:10.1007/978-3-540-73473-4_3, arXiv:quant-ph/0105049.

carefully reviews the literature up to the year 2000. (The book in which Busch's survey appears discusses related topics.) There is no natural operator solution in a Hilbert space setting, as Pauli showed in 1958,

W. Pauli. Die allgemeinen Prinzipien der Wellenmechanik, in Handbuch der Physik, Vol V/1, p. 60. Springer, Berlin, 1958. Engl.
translation: The general principles of quantum mechanics, p. 63. Springer, Berlin 1980.

by a simple argument that a self-adjoint time operator densely defined in a Hilbert space cannot satisfy a CCR with the Hamiltonian, as the CCR would imply that $H$ has as spectrum the whole real line, which is unphysical. Time measurements do not need a time operator, but are captured well by a positive operator-valued measure (POVM) for the time observable modeling properties of the measuring clock.

---

In QM, the temporal variable $t$ is not an observable in the technical sense (i.e., in the same sense that position and momentum are). In order for it to be one, there would have to exist a linear self-adjoint operator $\hat T$ whose eigenvalues $t$ were the outcomes of measurements. But then (at least in the most naive approach, and according to the Schrödinger equation) the Hamiltonian and the time operator would be non-compatible observables with canonical commutation relations, like position and momentum. And this is not possible, because in a quantum theory the Hamiltonian must be bounded from below, and this would imply that its conjugate (the time operator) were not self-adjoint.
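The Pauli argument referenced in the answers can be sketched explicitly (this is a standard textbook form of the reasoning, not quoted from the posts above). Suppose a self-adjoint $\hat T$ satisfied $[\hat T, \hat H] = i\hbar$. Then for every real $\epsilon$, the unitary $U_\epsilon = e^{i\epsilon \hat T/\hbar}$ shifts the Hamiltonian:

```latex
% Conjugating H by U_eps = exp(i eps T / hbar); higher BCH terms vanish
% because [T, H] = i*hbar is a c-number.
U_\epsilon^{\dagger}\, \hat{H}\, U_\epsilon
  = \hat{H} - \frac{i\epsilon}{\hbar}\,[\hat{T}, \hat{H}] + \dots
  = \hat{H} + \epsilon .
```

Hence $\hat H \,(U_\epsilon|E\rangle) = (E+\epsilon)\,U_\epsilon|E\rangle$ for any eigenstate $|E\rangle$: the spectrum of $\hat H$ would be invariant under arbitrary real shifts, i.e. all of $\mathbb{R}$, contradicting the requirement that the Hamiltonian be bounded from below.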
https://zbmath.org/software/7890
## Triangle swMATH ID: 7890 Software Authors: Jonathan Richard Shewchuk Description: Triangle: A Two-Dimensional Quality Mesh Generator and Delaunay Triangulator. Triangle generates exact Delaunay triangulations, constrained Delaunay triangulations, conforming Delaunay triangulations, Voronoi diagrams, and high-quality triangular meshes. The latter can be generated with no small or large angles, and are thus suitable for finite element analysis. Triangle (version 1.6, with Show Me version 1.6) is available as a .zip file (159K) or as a .shar file (829K) (extract with sh) from Netlib in the voronoi directory. Please note that although Triangle is freely available, it is copyrighted by the author and may not be sold or included in commercial products without a license. Homepage: http://www.cs.cmu.edu/~quake/triangle.html Related Software: TetGen; CGAL; Gmsh; Matlab; DistMesh; Netgen; PETSc; PARDISO; Eigen; Qhull; CUDA; Voronoi; FreeFem++; Voro++; FEniCS; PolyMesher; ParaView; METIS; dfnWorks; ALBERTA Referenced in: 360 Publications all top 5 ### Referenced by 783 Authors 8 Araya, Rodolfo A. 7 Linke, Alexander 7 Manzini, Gianmarco 6 Chrisochoides, Nikos P. 6 Soghrati, Soheil 5 Fuhrmann, Jürgen 5 Ju, Lili 5 Nagarajan, Anand 4 Chernikov, Andrey N. 4 Edwards, Michael G. 4 Fumagalli, Alessio 4 Ganesan, Sashikumaar 4 Gunzburger, Max D. 4 Hormann, Kai 4 Nochetto, Ricardo Horacio 4 Shewchuk, Jonathan Richard 4 Vavasis, Stephen A. 3 Adams, Nikolaus A. 3 Ahmed, Raheel 3 Berrone, Stefano 3 Fu, Lin 3 Huisman, Bastiaan A. H. 3 Hyvönen, Nuutti 3 Keilegavlen, Eirik 3 Lamine, Sadok 3 Langmach, Hartmut 3 Linardakis, Leonidas 3 López-Fernández, María 3 Pal, Mayur 3 Rivara, Maria-Cecilia 3 Rodríguez, Rodolfo 3 Schädle, Achim 3 Scialò, Stefano 3 Sellountos, Euripides J. 3 Tang, Qian 3 Tobiska, Lutz 3 Üngör, Alper 3 Walker, Shawn W. 3 Yvinec, Mariette 3 Zhang, GuiYong 3 Zhong, Zhihua 2 Ahamadi, Malidi 2 Ahmadian, Hossein 2 Anisimov, Dmitry 2 Bakr, Shaaban A. 
2 Bänsch, Eberhard 2 Barrenechea, Gabriel R. 2 Behrens, Edwin M. 2 Benedetti, Ivano 2 Bertolazzi, Enrico 2 Bhat, Sourabh P. 2 Bonfiglioli, Aldo 2 Brezzi, Franco 2 Burstedde, Carsten 2 Candiani, Valentina 2 Cangiani, Andrea 2 Chaumont-Frelet, Théophile 2 Dabrowski, Marcin 2 Dassi, Franco 2 Dell’Accio, Francesco 2 Devillers, Olivier 2 Di Tommaso, Filomena 2 Doǧan, Günay 2 Engvall, Luke 2 Evans, John A. 2 Ganguly, Pritam 2 Gärtner, Klaus 2 Gee, James C. 2 Genova, Kyle 2 Ghattas, Omar N. 2 Gustafsson, Tom 2 Haber, Robert Bruce 2 Harlen, Oliver G. 2 He, Zhicheng 2 Held, Martin 2 Hendrickson, Bruce A. 2 Hitschfeld, Nancy 2 Holke, Johannes 2 Horne, Roland N. 2 Hu, Xiangyu 2 Hysing, Shuren 2 Jeon, Kiwan 2 Ji, Zhe 2 Jiao, Xiangmin 2 Kamenski, Lennard 2 Kröker, Ilja 2 Lew, Adrian J. 2 Liang, Bowen 2 Liu, Gui-Rong 2 Lubich, Christian 2 Mandal, Jadav Chandra 2 Mannseth, Trond 2 Miller, Gary Lee 2 Mishev, Ilya D. 2 Morin, Pedro 2 Müller, Fabian Lukas 2 Munson, Todd S. 2 Nakshatrala, K. B. 2 Nouisser, Otheman 2 Ogita, Takeshi ...and 683 more Authors all top 5 ### Referenced in 104 Serials 36 Computer Methods in Applied Mechanics and Engineering 31 Journal of Computational Physics 20 SIAM Journal on Scientific Computing 16 Computational Geometry 13 Journal of Computational and Applied Mathematics 13 Computational Mechanics 11 International Journal for Numerical Methods in Engineering 11 Computational Geosciences 10 International Journal for Numerical Methods in Fluids 10 Computer Aided Geometric Design 8 Computers & Mathematics with Applications 7 Applied Numerical Mathematics 5 SIAM Journal on Numerical Analysis 5 Journal of Scientific Computing 5 Numerical Linear Algebra with Applications 4 Computers and Fluids 4 ACM Transactions on Mathematical Software 4 Applied Mathematics and Computation 4 BIT 4 Mathematics and Computers in Simulation 3 Computer Physics Communications 3 International Journal of Solids and Structures 3 Numerische Mathematik 3 International Journal of 
### Referenced in Serials

Cited 3 times: Computational Geometry & Applications; M$^3$AS. Mathematical Models & Methods in Applied Sciences.

Cited 2 times: Pattern Recognition; IMA Journal of Numerical Analysis; Journal of Fluid Mechanics; ACM Transactions on Graphics; Discrete & Computational Geometry; Numerical Methods for Partial Differential Equations; Mathematical and Computer Modelling; Japan Journal of Industrial and Applied Mathematics; Journal of Global Optimization; Numerical Algorithms; Computational Mathematics and Mathematical Physics; Mathematical Programming. Series A. Series B; Advances in Engineering Software; Communications in Numerical Methods in Engineering; Physics of Fluids; International Journal of Computer Vision; Engineering Analysis with Boundary Elements; Mathematical Problems in Engineering; European Journal of Mechanics. A. Solids; European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis.

Cited once: Communications in Computational Physics; Archive for Rational Mechanics and Analysis; Inverse Problems; Information Processing Letters; Journal of Mathematical Biology; Journal of the Mechanics and Physics of Solids; Physics Letters. A; Zhurnal Vychislitel'noĭ Matematiki i Matematicheskoĭ Fiziki; Mathematics of Computation; Automatica; Calcolo; Computing; Finite Elements in Analysis and Design; Algorithmica; Information and Computation; CAD. Computer-Aided Design; Discrete Event Dynamic Systems; European Journal of Operational Research; Journal of Non-Newtonian Fluid Mechanics; SIAM Journal on Applied Mathematics; SIAM Review; Computational Statistics and Data Analysis; Experimental Mathematics; Journal of Computer and Systems Sciences International; Electronic Journal of Differential Equations (EJDE); Computational and Applied Mathematics; Fractals; Advances in Computational Mathematics; Journal of Geodesy; INFORMS Journal on Computing; Computing and Visualization in Science; International Journal of Computational Fluid Dynamics; European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations; Abstract and Applied Analysis; Data Mining and Knowledge Discovery; Philosophical Transactions of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences; Flow, Turbulence and Combustion; CMES. Computer Modeling in Engineering & Sciences; Mathematical Modelling and Analysis; Milan Journal of Mathematics; Granular Matter; International Journal of Numerical Analysis and Modeling; International Journal of Fracture; Algorithms and Computation in Mathematics; Studies in Fuzziness and Soft Computing; Texts in Computational Science and Engineering; Computational & Mathematical Methods in Medicine; Inverse Problems and Imaging; SIAM Journal on Imaging Sciences; Foundations and Trends in Computer Graphics and Vision; Mathematical Geosciences; Mathematical Programming Computation; GEM - International Journal on Geomathematics; Journal of Computational and Graphical Statistics; ...and 4 more serials.

### Referenced in 31 Fields

- 222 Numerical analysis (65-XX)
- 92 Fluid mechanics (76-XX)
- 64 Partial differential equations (35-XX)
- 55 Mechanics of deformable solids (74-XX)
- 51 Computer science (68-XX)
- 18 Geophysics (86-XX)
- 14 Operations research, mathematical programming (90-XX)
- 13 Biology and other natural sciences (92-XX)
- 12 Optics, electromagnetic theory (78-XX)
- 11 Calculus of variations and optimal control; optimization (49-XX)
- 10 Combinatorics (05-XX)
- 10 Statistical mechanics, structure of matter (82-XX)
- 7 Convex and discrete geometry (52-XX)
- 7 Classical thermodynamics, heat transfer (80-XX)
- 5 Global analysis, analysis on manifolds (58-XX)
- 5 Information and communication theory, circuits (94-XX)
- 4 Statistics (62-XX)
- 3 Ordinary differential equations (34-XX)
- 3 Approximations and expansions (41-XX)
- 3 Differential geometry (53-XX)
- 3 Systems theory; control (93-XX)
- 2 Integral transforms, operational calculus (44-XX)
- 2 Integral equations (45-XX)
- 2 Geometry (51-XX)
- 2 Probability theory and stochastic processes (60-XX)
- 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
- 1 General and overarching topics; collections (00-XX)
- 1 Number theory (11-XX)
- 1 Group theory and generalizations (20-XX)
- 1 Measure and integration (28-XX)
- 1 Operator theory (47-XX)
http://docserver.carma.newcastle.edu.au/76/
# A Norm Convergence Result on Random Products of Relaxed Projections in Hilbert Space

Bauschke, Heinz H. (1994) A Norm Convergence Result on Random Products of Relaxed Projections in Hilbert Space. [Preprint]

Suppose $X$ is a Hilbert space and $C_1,\ldots,C_N$ are closed convex intersecting subsets with projections $P_1,\ldots,P_N$. Suppose further that $r$ is a mapping from $\mathbb{N}$ onto $\{1,\ldots,N\}$ that assumes every value infinitely often. We prove (a more general version of) the following result:

> If the $N$-tuple $(C_1,\ldots,C_N)$ is "innately boundedly regular", then the sequence $(x_n)$, defined by
> $$x_0 \in X \text{ arbitrary}, \qquad x_{n+1} := P_{r(n)}x_n \quad \text{for all } n \geq 0,$$
> converges in norm to some point in $\bigcap_{i=1}^{N} C_i$.

Examples without the usual assumptions on compactness are given. Methods of this type have been used in areas like computerized tomography and signal processing.
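To get a feel for the statement, here is a small Python simulation — my own toy example in the unrelaxed special case, not taken from the preprint. Two lines in the plane meet only at the origin, and a random product of their (exact) projections, with each projection chosen infinitely often almost surely, converges in norm to the intersection:

```python
import math
import random

# Two closed convex sets in R^2 whose intersection is the origin:
# C1 = the x-axis, C2 = the line y = x.  (Toy choice, not from the paper.)
def P1(p):                      # metric projection onto C1
    return (p[0], 0.0)

def P2(p):                      # metric projection onto C2
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def random_product(x0, n_iter, seed=0):
    """Apply a random product of the two projections; with probability one
    each index is chosen infinitely often, as the theorem requires."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_iter):
        x = P1(x) if rng.random() < 0.5 else P2(x)
    return x

x = random_product((1.0, 0.5), 500)
assert math.hypot(*x) < 1e-6    # converged (in norm) to the intersection {0}
```

Each switch between the two projections shrinks the distance to the origin by the cosine of the angle between the lines, which is where the geometric convergence comes from in this toy case.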
https://cartesianproduct.wordpress.com/2012/10/13/maths-a-level-dumbing-down-the-proof/
Maths 'A' level dumbing down: the proof

The proof, that is, that I have dumbed down... My mother recently gave me a book of old 'A' level exam papers. Here's a 4 mark (i.e. low mark) question from the June 1982 University of London "Syllabus B" paper that I think I'd really struggle with - I'll try it later:

Given that $f(x) \equiv 3 - 7x + 5x^2 - x^3$, show that $3 - x$ is a factor of $f(x)$. Factorise $f(x)$ completely and hence state the set of values for which $f(x) \leqslant 0$.

This one (also 4 marks) seems a bit easier though:

The functions $f$ and $g$, each with domain $D$, where $D = \{ x : x \in \mathbb{R} \text{ and } 0 \leqslant x \leqslant \pi \}$, are defined by $f: x \rightarrow \cos x$ and $g: x \rightarrow x - \frac{1}{2}\pi$. Write down and simplify an expression for $f(g(x))$, giving its domain of definition. Sketch the graph of $y = f(g(x))$.

1. Mary Wimbury says: ah now – I could do the first one but had to google trig functions to do the second one. Mind you, better than my finals papers, which I discovered at some point and couldn't understand the questions!
• Actually, the first one wasn't as horrific as I thought, so maybe I'd be on my way to an E grade after all.

2. Hugh says: So I worked through the problem my way (seldom the right way) and made a few interesting-to-me discoveries. At least the way I did it seems a bit tough for students in a test environment. "(3-x) is a factor of P(x)" is the same as "P(3) is 0". Easy to show that by calculation. It's even easy to calculate P(3) by hand. You might notice that it is easier to calculate P(3)/3 and it gets you the same information (that the result is zero), but explaining that is more work. Once you divide the polynomial by the given factor, you have a quadratic which can be solved by the usual formula. In fact, you should be able to solve this particular quadratic by inspection. After I did that, I realized that the constant term (3) suggests which values to probe as roots.
The product of the roots is 3, and one is 3, so the others ought to be 1 or -1 (all the coefficients are integers and the leading coefficient is -1). Once you know all the roots, you know all the zero crossings and can figure out where the function is negative. Oh, I just remembered: it is negative exactly where an odd number of the factors is negative and the others are non-zero. Since one root is doubled, that is exactly where the other root's factor is negative and the doubled root's factor is not zero: when x is greater than 3. f is zero when any of the factors is zero: when x is 3 or 1. So the answer is: when x is 1 or greater than or equal to 3. So all that is straightforward if you are comfortable with polynomials. Probably not if you only learned by rote (unless this looks like your rote questions). So this looks like a good question. It may even work well for partial marks. But it would surprise me if a majority of students, even undergrads, could do this under time pressure. I find it interesting that the two problems define functions using different notations. The second question is a little trickier because one has to understand and describe that the composition has a reduced domain ($[\pi/2, \pi]$). All the more confusing because each function's defining expression is well defined for all of $\mathbb{R}$. I don't think students are very good with domains.
• Hugh, interesting reply and a few insights that passed me by. But it's surely not that difficult a question? I just thought it must be the case that $f(x) = (3 - x)(a + bx + cx^2)$ and it turned out to be not so difficult to work out what a, b and c were. When I wrote the question out this afternoon I thought it would be difficult, but it wasn't really. The second one I could work out in my head even as I typed it out; I thought it was really quite trivial. But that's just me, I guess.
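For what it's worth, the factor-theorem reasoning in the comments checks out mechanically; here is a throwaway Python verification of the first question (obviously not something the 1982 candidates had):

```python
# Quick numeric check of the 1982 question: f(x) = 3 - 7x + 5x^2 - x^3.
def f(x):
    return 3 - 7*x + 5*x**2 - x**3

# Factor theorem: (3 - x) divides f iff f(3) == 0.
assert f(3) == 0

# Dividing out (3 - x) leaves -(x - 1)^2, i.e. f(x) = (3 - x)(x - 1)^2;
# spot-check the factorisation at several points.
for x in [-2, -1, 0, 0.5, 1, 2, 2.5, 3, 4, 5]:
    assert abs(f(x) - (3 - x)*(x - 1)**2) < 1e-9

# Hence f(x) <= 0 exactly when x == 1 or x >= 3.
assert f(1) == 0 and f(4) < 0 and f(2) > 0
```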
https://tex.stackexchange.com/questions/483193/beamer-how-to-change-color-of-each-equation-in-align-environment
# Beamer - how to change color of each equation in align environment?

I want to use beamer overlays to change the color of equations in an align environment. On each slide, I want all equations to be black, except one, which should be red. On the first slide the first equation should be red, on the second slide the second equation, and so on.

\documentclass{beamer}
\usepackage{amsmath, bm}
\begin{document}
\begin{frame}
\begin{align*}
\bm{f_t} &= \sigma(\bm{W_f} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_f}) \\
\bm{i_t} &= \sigma(\bm{W_i} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_i}) \\
\bm{\tilde{C}_t} &= \tanh(\bm{W_{\tilde{C}}} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_{\tilde{C}}}) \\
\bm{C_t} &= \bm{f_t} \odot \bm{C_{t-1}} + \bm{i_t} \odot\bm{\tilde{C}_t} \\
\bm{o_t} &= \sigma(\bm{W_o} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_o}) \\
\bm{h_t} &= \bm{o_t} \odot \tanh(\bm{C_t})
\end{align*}
\end{frame}
\end{document}

If anybody can help me, I'd be incredibly thankful. I've been fiddling with \onslide for hours!

• Do the equations have to appear after each other or is it enough that the color changes? Apr 4, 2019 at 16:09
• Preferably the equations would appear one after each other. Sorry, I should have mentioned that in the question. Apr 4, 2019 at 16:45

You can use \alert, split at the alignment point so the overlay specification does not straddle the &, to highlight each line after another:

\documentclass{beamer}
\usepackage{amsmath, bm}
\begin{document}
\begin{frame}
\begin{align*}
\alert<1>{\bm{f_t}} &\alert<1>{= \sigma(\bm{W_f} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_f})} \\
\alert<2>{\bm{i_t}} &\alert<2>{= \sigma(\bm{W_i} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_i})} \\
\alert<3>{\bm{\tilde{C}_t}} &\alert<3>{= \tanh(\bm{W_{\tilde{C}}} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_{\tilde{C}}})} \\
\alert<4>{\bm{C_t}} &\alert<4>{= \bm{f_t} \odot \bm{C_{t-1}} + \bm{i_t} \odot \bm{\tilde{C}_t}} \\
\alert<5>{\bm{o_t}} &\alert<5>{= \sigma(\bm{W_o} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_o})} \\
\alert<6>{\bm{h_t}} &\alert<6>{= \bm{o_t} \odot \tanh(\bm{C_t})}
\end{align*}
\end{frame}
\end{document}

• Thanks so much sam! Although is it possible for the equations to appear one after each other as well? Apr 4, 2019 at 16:42
• @Henry Sure it is possible, although in your question you explicitly asked for all equations to be black, only one red. Try \uncover<+->{\alert<.>{\bm{i_t}}} &\uncover<.->{\alert<.>{= \sigma(\bm{W_i} \cdot [\bm{h_{t-1}}, \bm{x_t}] + \bm{b_i})}} \\ Apr 4, 2019 at 16:48
https://mathoverflow.net/questions/271936/closed-form-for-int-0t-e-x-fraci-n-alpha-xxdx
# Closed form for $\int_0^T e^{-x}\frac{I_n(\alpha x)}{x}dx$

EDIT: Some additional details and corrections; I would appreciate any information about the highlighted expression.

I am trying to evaluate $\int_0^T e^{-x}\frac{I_n(\alpha x)}{x}\,dx$, where $I_n(x)$ is the modified Bessel function of the first kind and $0<\alpha<1$. My first approach was to turn this integral into an infinite sum that fits a hypergeometric series:

• Using the infinite series representation of the Bessel function, I got incomplete gamma functions in the sum, which does not sound promising.
• The multiplication theorem yields an infinite series of integrals in which the $\alpha$ is removed: $$\int_0^T e^{-x}\frac{I_n(\alpha x)}{x}dx=\alpha^n\sum_{m=0}^{\infty}\frac {\big(\frac {\alpha ^{2}-1}{2}\big)^m}{m!}\int_0^T e^{-x}x^{m-1}I_{n+m}(x)dx$$

According to a table of integrals, the new integrals are: \begin{align} \int_0^T e^{-x}x^{m-1}I_{n+m}(x)dx&=\frac{T^{2m+n}}{2^{m+n}}\frac{\Gamma(2m+n)}{\Gamma(m+n+1)\Gamma(2m+n+1)}\\ &\times{}_2F_2[\{m+n+\frac{1}{2},2m+n\};\{2m+2n+1,2m+n+1\};-2T] \end{align}

Expanding ${}_2F_2$ (let us call $k$ its summation index), we get a double infinite series which might fit the definition of a hypergeometric function of two variables. However, I get several Pochhammer symbols with coupled summation indices: $$\sum_{m,k=0}^{\infty}\frac{(n+\frac{1}{2})_{m+k}(n)_{2m+k}}{(n+1)_{2m+k}(2n+1)_{2m+k}}\frac{X^mY^k}{m!\,k!}$$ which, apparently, does not fit any hypergeometric function definition (at least, it is not an Appell function).

Another approach could be to take inspiration from the limit $T\rightarrow \infty$, which is (up to a constant) the Laplace transform of $\frac{I_n(x)}{x}$ and has a closed form (according to a table): $$\int_0^\infty e^{-x}\frac{I_n(\alpha x)}{x}dx=\frac{\big(\frac{\alpha}{1+\sqrt{1-\alpha^2}}\big)^n}{n}$$ However, I cannot find any reference for how this is computed.
EDIT: This comes from the recurrence identity $I_{n-1}(x)-I_{n+1}(x)=\frac{2n}{x}I_n(x)$, and the calculation of the Laplace transform of $I_n(x)$ is well documented.

Do you have any information or suggestions about the above formulae?

• I don't understand the downvotes. On the other hand, I don't understand why the OP expects a closed form, either... – Igor Rivin Jun 11 '17 at 13:23
• You are right, it may have no closed form. I asked because this last integral seems to be an unexploited approach. Maybe I should look at the asymptotic behaviour and be satisfied with it. – Alexandre Jun 11 '17 at 13:32
• for $\alpha=1$ there is a closed form expression in terms of $I_0(t)$ and $I_1(t)$ – Carlo Beenakker Jun 11 '17 at 14:29
• and for small $\alpha$ it's an incomplete gamma function, are these asymptotics of interest? – Carlo Beenakker Jun 11 '17 at 14:56
• Great! I found the incomplete gamma for small $\alpha$ from the first infinite series proposed, and I can also get $\alpha=1$ from the second infinite series, but it is expressed with the hypergeometric function ${}_2F_2$. Do you have more detail about this closed expression in terms of $I_0$ and $I_1$? – Alexandre Jun 11 '17 at 15:53

$$\int_0^T e^{-x}I_n(x)\frac{1}{x}\,dx=\frac{1}{n}+\frac{1}{n T^{n-1}}e^{-T}\left[a_n(T)I_0(T)+b_n(T)I_1(T)\right]$$

The functions $a_n$ and $b_n$ are polynomials of degree $n-1$; I do not have a closed-form expression. The first few are:

$$a_1(T)=-1,\;\;a_2(T)=-2T,\;\;a_3(T)=-3 T^2+4 T,$$ $$a_4(T)=-4 T^3+8 T^2-24 T,\;\;a_5(T)=-5 T^4+20 T^3-48 T^2+192 T$$ $$b_1(T)=-1,\;\;b_2(T)=-2T+2,\;\;b_3(T)=-3T^2+4T-8,$$ $$b_4(T)=-4 T^3+12 T^2-16 T+48,\;\;b_5(T)=-5 T^4+20 T^3-88 T^2+96 T-384$$

• Thanks! I would appreciate any reference or indication on how you got this. The constant coefficients $b_{n,0}$ seem to be $-(-2)^{n-1}(n-1)!$.
– Alexandre Jun 11 '17 at 19:33
• Mathematica evaluates the integral for arbitrary real $n$ in terms of a hypergeometric function, which reduces to these explicit expressions for integer $n$. – Carlo Beenakker Jun 11 '17 at 20:14
• Your expression seems to come from the formula relating ${}_2F_2$ and ${}_0F_1$ and other similar identities. But the expressions for your polynomials are still not clear to me. – Alexandre Jun 13 '17 at 10:26
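Not an answer to the "how is it computed" question, but the tabulated Laplace-transform formula is easy to check numerically. The plain-Python sketch below (power series for $I_n$ plus a trapezoidal rule, with truncation and step sizes chosen ad hoc) agrees with the closed form to the tested tolerance:

```python
import math

def bessel_In(n, x, terms=140):
    """Modified Bessel function I_n(x), summed from its power series
    (adequate for the moderate arguments used here)."""
    t = (x / 2.0) ** n / math.factorial(n)    # k = 0 term
    s = t
    for k in range(1, terms):
        t *= (x / 2.0) ** 2 / (k * (n + k))   # ratio of consecutive terms
        s += t
    return s

def lhs(n, alpha, upper=60.0, steps=12000):
    """Trapezoidal value of int_0^inf exp(-x) I_n(alpha*x)/x dx,
    truncated at `upper`, where the integrand is negligible."""
    def g(x):
        if x == 0.0:                           # limit as x -> 0+
            return alpha / 2.0 if n == 1 else 0.0
        return math.exp(-x) * bessel_In(n, alpha * x) / x
    h = upper / steps
    return h * (0.5 * (g(0.0) + g(upper)) + sum(g(i * h) for i in range(1, steps)))

def rhs(n, alpha):
    return (alpha / (1.0 + math.sqrt(1.0 - alpha ** 2))) ** n / n

for n, alpha in [(1, 0.3), (2, 0.5), (4, 0.8)]:
    assert abs(lhs(n, alpha) - rhs(n, alpha)) < 1e-4
```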
https://nrich.maths.org/2021/index?nomenu=1
Is it possible to rearrange the numbers 1, 2, …, 12 around a clock face in such a way that every two numbers in adjacent positions differ by any of 3, 4 or 5 hours? How many solutions can you find? Can you convince us that you have all of them?
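A brute-force backtracking search is one way to settle the counting question. The sketch below (an illustration, not part of the original problem) tries both natural readings of "differ by 3, 4 or 5 hours": the hour difference around the dial, and the plain difference of the two numbers. Only the first reading admits solutions — for instance 1, 4, 7, 10, 2, 11, 3, 8, 12, 5, 9, 6 works around the dial.

```python
def count_cycles(diff_ok):
    """Count arrangements of 1..12 in a cycle (12 pinned at the top to
    quotient out rotations) whose adjacent pairs all satisfy diff_ok."""
    count = 0
    def extend(seq, remaining):
        nonlocal count
        if not remaining:
            count += diff_ok(seq[-1], seq[0])   # close the cycle
            return
        for v in sorted(remaining):             # sorted() copies, safe to mutate
            if diff_ok(seq[-1], v):
                remaining.remove(v); seq.append(v)
                extend(seq, remaining)
                seq.pop(); remaining.add(v)
    extend([12], set(range(1, 12)))
    return count

# Reading 1: hour difference on the dial (so 11 and 2 differ by 3 hours).
clock = count_cycles(lambda a, b: min(abs(a - b), 12 - abs(a - b)) in (3, 4, 5))
assert clock > 0        # e.g. 1,4,7,10,2,11,3,8,12,5,9,6

# Reading 2: plain difference of the two numbers.
plain = count_cycles(lambda a, b: abs(a - b) in (3, 4, 5))
assert plain == 0       # 4's only small-number neighbour would be 1, twice
```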
https://physics.stackexchange.com/questions/627897/is-the-fact-that-were-moving-with-a-certain-speed-with-respect-to-the-cmb-speci
# Is the fact that we're moving with a certain speed with respect to the CMB special-relativity consistent?

As a side note to an exercise about the aberration of the CMB at the dipole level, whose aim was to find our peculiar velocity with respect to the cosmic background (assuming the Doppler effect derived from it is the only source of dipole aberration), my professor left the following question:

Is the fact that we're moving with a certain speed with respect to the CMB consistent with special relativity?

My calculations were as follows. Apparently, the temperature difference of the CMB due to the dipole is $\Delta T_{l=1}=3.372\cdot 10^{-3}\,\mathrm{K}$, and choosing our axes so that the spherical harmonic with $m=0$ is oriented along the dipole, it is easy to see that, at dipole order, the temperature is $$T(\theta)=T_0\Big(1+\frac{v}{c}\cos\theta\Big)$$ So I just took $\frac{\Delta T_{l=1}}{T} = \frac{v}{c}\cos\theta$, and this leads to $v\approx 371\,\frac{\mathrm{km}}{\mathrm{s}}$. I don't see how such a small speed could fail to be consistent with special relativity, but I'm suspicious that answering "Yes, it is consistent" would be too easy to be true... Is my reasoning correct? Thanks!

The actual speed with which we are moving relative to the CMB is unimportant; we could be moving at $0.99c$ and special relativity would still apply, since that theory only requires a velocity less than $c$. It's like the Newtonian formula for kinetic energy - it works regardless of whether your speed is $371\,\mathrm{km/s}$ (an extremely fast speed by terrestrial standards) or $0.001\,\mathrm{m/s}$.
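For the record, the quoted speed is easy to reproduce, taking $\cos\theta = 1$ at the dipole maximum and assuming the standard CMB monopole temperature $T_0 \approx 2.7255\,\mathrm{K}$:

```python
# Back-of-envelope check of the dipole velocity quoted above.
c = 299_792.458     # speed of light, km/s
T0 = 2.7255         # CMB monopole temperature, K (assumed standard value)
dT = 3.372e-3       # dipole amplitude, K
v = c * dT / T0     # v/c = dT/T0 at the dipole maximum (cos(theta) = 1)
assert abs(v - 371) < 1   # km/s, matching the value in the question
```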
http://tex.stackexchange.com/questions/2958/why-is-newpage-ignored-sometimes/2959
# Why is \newpage ignored sometimes?

I have the following towards the end of an article. The bibliography is short (4 entries). What's happening is that on the very last page of the document I get the chart and immediately afterwards the References section, despite the \newpage directive. While I personally prefer everything on one page, I have a requirement to put the references on a different page. Is LaTeX ignoring \newpage because it finds plenty of space to use on that page? If so, I'm confused why it does so even when told explicitly to start a new page. I cannot post the entire article, so hopefully the excerpt below will be helpful.

bla bla bla bla bla bla bla bla bla

\begin{figure}[htp]
\centering
\includegraphics{my-image}
\caption{caption here}\label{my-label}
\end{figure}

\newpage

\begin{thebibliography}{99}

Floating figures and tables can move past \newpage, so what is happening is that the \newpage does start a new page, then inserts the figure, then starts the references section. You want \clearpage, which has the same effect as \newpage but also flushes pending floats. If there are pending floats when \clearpage hits, a float page is created, only after which the content will continue.

• neither \newpage nor \clearpage nor \cleardoublepage work for me. Any ideas? I have lots of graphics and not much text in my document but latex float placement at the moment is really far off. – wirrbel May 13 '13 at 7:55
https://runzhuoli.me/2018/08/13/largest-k-elements.html
# Find the Largest K Elements

Finding the largest K elements in an array is a common challenge in coding interviews. You may see this problem in several different formats, such as "K Closest Points to the Origin". Sorting the whole array solves the problem in $O(n \log n)$ time and $O(n)$ space. However, there is a more space-efficient solution - a huge improvement when K is small - built on an important data structure: the heap.

### Heap

A heap is a concrete implementation of the priority-queue abstract data type. It is stored in an array (with indices starting at 1) and visualized as a complete binary tree. From this definition we get the following important properties of a heap:

• For a given element at index n, its children are at index 2n (left child) and 2n+1 (right child).
• The maximum height of a heap with N elements is $\log N$.

A heap is usually used as a max heap or a min heap. If every node is larger than its children, the heap is a max heap, which guarantees that the first element (the root) is the largest in the heap. I am going to discuss the basic operations of a max heap and their complexities in this post; the same applies to a min heap.

#### Heapify:

This is the most basic operation on a max heap. Given a heap and the index of a node, "heapify" converts the subtree rooted at that node (array[i]) into a max heap. An important assumption of "heapify" is that both subtrees of node i are already max heaps.

public static <T extends Comparable<T>> void heapify(T[] array, int i) {
    int l = i * 2;        // left child (1-based indexing; array[0] is unused)
    int r = l + 1;        // right child
    int largest = i;
    if (l <= array.length - 1 && array[l].compareTo(array[i]) > 0) {
        largest = l;
    }
    if (r <= array.length - 1 && array[r].compareTo(array[largest]) > 0) {
        largest = r;
    }
    if (largest != i) {   // swap with the larger child and continue down
        T temp = array[i];
        array[i] = array[largest];
        array[largest] = temp;
        heapify(array, largest);
    }
}

As we can see, the time complexity of "heapify" is proportional to the height of node i in the heap.
If the subtree of the given node has n elements, the time complexity is $O(\log n)$.

#### Convert an unsorted array to a max heap:

Using the "heapify" method defined above, it is easy to build a max heap from an unsorted array.

static <T extends Comparable<T>> void buildMaxHeap(T[] array) {
    int n = array.length;
    // heapify every non-leaf node, bottom-up
    for (int i = n / 2; i > 0; i--) {
        heapify(array, i);
    }
}

The "buildMaxHeap" method calls "heapify" on the nodes from the second-lowest level up to the root. A quick bound gives a time complexity of $O(n \log n)$. However, a more careful analysis shows that the actual time complexity is $O(n)$.

For a node at height $i$, the cost of "heapify" is $O(i)$, and the number of nodes at height $i$ is at most $\lceil \frac{n}{2^{i+1}} \rceil$. So the total time complexity of "buildMaxHeap" is:

$$\sum_{i=0}^{\lceil \log n \rceil} \Big\lceil \frac{n}{2^{i+1}} \Big\rceil O(i) = O\Big(n \sum_{i=1}^{\lceil \log n \rceil} \frac{i}{2^{i+1}}\Big) \tag{1}$$

Let's analyze whether $\sum_{i=1}^{\lceil \log n \rceil} \frac{i}{2^{i+1}}$ converges to a constant. This sum can be written as $S_i = 1/2^2 + 2/2^3 + \dots + i/2^{i+1}$, and $S_i/2 = 1/2^3 + 2/2^4 + \dots + (i-1)/2^{i+1} + i/2^{i+2}$. Then, by computing $S_i - S_i/2$, we get:

$$\frac{S_i}{2} = \frac{1}{2^2} + \frac{1}{2^3} + \dots + \frac{1}{2^{i+1}} - \frac{i}{2^{i+2}} < \frac{1}{2}$$

So $S_i < 1$ for every $i$, and the time complexity of "buildMaxHeap" is $O(n)$, since the sum in (1) is always smaller than 1.

#### Heap sort:

Based on the above operations, we can sort an unsorted array with the following algorithm:

1. Use "buildMaxHeap" to convert the unsorted array into a max heap - $O(n)$
2. Read the largest element a[1] - $O(1)$
3. Swap element a[n] with a[1] - $O(1)$
4. Discard element a[n] from the heap - $O(1)$
5. Call "heapify(array, 1)" to restore the max heap - $O(\log n)$ - and go to step 2 until only one element remains in the heap.

So, as we can see, the time complexity of heap sort is $O(n \log n)$.

### Find the K largest elements

A min heap can be used to find the largest K elements in an unsorted array of size n:

1. Create a min heap from the first K elements, array[1] to array[K] - $O(K)$
2. Loop through the remaining elements, array[K+1] to array[n].
3. For each element, if it is larger than the root of the min heap, replace the root with it and "heapify" the new root - $O((n - K) \log K)$ overall.

The time complexity of this algorithm is $O(n \log K)$ and the space complexity is $O(K)$. The min-heap approach needs only one pass over the given array, which makes it effective when n is very large. However, a better time complexity is possible with partition-based selection (quickselect), whose average time complexity is $O(n)$.

What if even K is too large to keep a size-K heap in memory? We can take the K' largest elements (where K' is the maximum heap size that fits in memory), discard them from the array, and then take the next K' largest, repeating until we have all K elements we need. This keeps the space complexity at $O(K')$, while the time complexity becomes $\lceil \frac{K}{K'} \rceil\, O(n \log K')$.
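As a side note, the same size-K min-heap scheme is only a few lines in Python with the standard heapq module; this is just an alternative illustration, not a replacement for the Java versions above:

```python
import heapq

def largest_k(nums, k):
    """Largest k elements via a size-k min-heap: O(n log k) time, O(k) space.
    Assumes k <= len(nums)."""
    heap = nums[:k]
    heapq.heapify(heap)                  # O(k)
    for x in nums[k:]:
        if x > heap[0]:                  # root is the smallest of the current top-k
            heapq.heapreplace(heap, x)   # pop root, push x: O(log k)
    return sorted(heap, reverse=True)

assert largest_k([5, 1, 9, 3, 14, 7, 2], 3) == [14, 9, 7]
```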
https://electronics.stackexchange.com/questions/388607/using-generic-tools-to-program-atf150x-series-cplds-from-a-jedec-file-understan
# Using generic tools to program ATF150X series CPLDs from a JEDEC file (understanding JTAG details) I have a number of ATF1504 44 pin PLCC CPLD devices. I can design for them without a problem to get a JEDEC file. I want to program them via the JTAG ISP interface which has the same pinout as the Atmel/Microchip AVR JTAG. I have been previously using ATF750s (no JTAG) and programming them directly with a GALEP-5 universal programmer - but it doesn't support (parallel) programming of the larger devices. I also possess the (older) Atmel AVR ICE (JTAGICE3), and Atmel SAM ICE JTAG programmers. I have also been using avrdude from the command line with various USB device programers for microcontrollers, and also using Atmel Studio which supports the above programmers for AVR/ARM. Supposedly the GALEP-5 supports some native downloading of precompiled JTAG command files for which I have an adapter. I'm not sure how to make these files. What I don't understand is whether anything I already have (plus free software) can be leveraged to flash the ATF150X devices from the JEDEC file. I don't understand enough about JTAG to know if there needs to be some special software that sends the right device specific commands to the CPLD device to program it with the JEDEC file or if some generic software can send the data via the 10-pin JTAG ISP to the chip via one of the hardware USB devices that I already own. Having spent \$100s on programmers I'd rather not buy another without necessity. (I'm also familiar with writing code to directly communicate with USB devices using python without drivers using libusb.) It seems that I ought to be able to use the AVRICE3 USB programmer in JTAG mode to send the data, but I am not sure what software to use. I'd rather use it from a linux command line which makes it easy to automate via make files. 
Avrdude seems to require a specific device name (which are all microcontrollers) to "know" how to send the given data even though it supports JTAG programers. Is there something like avrdude that can send out JTAG data to flash these CPLDs, or more generally, FPGAs?
2019-06-16 17:15:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3132723569869995, "perplexity": 3844.171774620752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998288.34/warc/CC-MAIN-20190616162745-20190616184745-00002.warc.gz"}
https://papers-gamma.link/all/page/4
### Comments:

I'm in love with this paper!

### Non-worst-case analysis:

In practice, "we are not interested in all problem instances, but only in those which can actually occur in reality." "The notion of stability [...] is a concrete way to formalize the notion that the only instances of interest are those for which small perturbation in the data (which may reflect e.g. some measurement errors) do not change the optimal partition of the graph." This stability analysis is different from "Smoothed Analysis", where "one shows that the hard instances form a discrete and isolated subset of the input space".

### Open problems:

**Conjecture:** There exists some constant $\gamma^{*}$ such that $\gamma^{*}$-stable instances can be solved in polynomial time.

**Question:** It is shown that $\gamma$-stable instances, with $\gamma>\sqrt{\Delta n}$, can be solved in polynomial time. Can this be improved (without further assumptions such as a lower bound on the minimum degree)? As $\sqrt{\Delta n}$ is usually large, this may not be useful in practice.

**Question:** How does the algorithm "FindMaxCut" (page 6) perform in practice on real-world instances?

**Question:** How about the greedy heuristic: start from a random cut, then do passes over the nodes, moving a node to the other side of the cut whenever that increases the size of the cut, until convergence. Does it have some guarantee on $\gamma$-stable instances?

### Extended spectral clustering:

"Let D be a diagonal matrix. Think of W + D as the weighted adjacency matrix of a graph, with loops added. Such loops do not change the weight of any cut, so that regardless of what D we choose, a cut is maximal in W iff it is maximal in W + D. Furthermore, it is not hard to see that W is $\gamma$-stable, iff W + D is. Our approach is to first find a “good” D, and then take the spectral partitioning of W + D as the maximal cut.
These observations suggest the following question: Is it true that for every $\gamma$-stable instance W with $\gamma$ large enough there exists a diagonal D for which extended spectral partitioning solves Max-Cut? If so, can such a D be found efficiently? Below we present certain sufficient conditions for these statements."

I did not fully understand what is presented below that paragraph. Let G be a $\gamma$-stable graph: how do I get $D$?

### Goemans-Williamson algorithm:

The approximation guarantee of the Goemans-Williamson algorithm is better on $\gamma$-stable instances than in general.

### Random model:

With high probability, the extended spectral clustering leads to the optimal cut on $\gamma$-stable instances generated from a certain random model, for $\gamma\geq 1+\Omega(\sqrt{\frac{\log(n)}{n}})$.

### Typos:

- page 3, Proposition 2.1: " A graph G graph"
- page 4: "this follows from Definition 2.1", should be "Proposition 2.1".
- page 5, Definition 2.2: should be "E" instead of "e" in the equation.
- page 5: "which must to be on the" and "of the optional cut" -> "optimal".
- page 8: "we multiply it be a PSD matrix"

Read the paper, add your comments…

### Comments:

Nice paper building on top of [the WebGraph framework](https://papers-gamma.link/paper/31) and [Chierichetti et al.](https://papers-gamma.link/paper/126) to compress graphs.

### Approximation guarantee

I read: "our algorithm is inspired by a theoretical approach with provable guarantees on the final quality, and it is designed to directly optimize the resulting compression ratio." I misunderstood initially: the proposed algorithm actually does not have any provable approximation guarantee other than the $\log(n)$ one (which is also obtained by a random ordering of the nodes). Designing an algorithm with (a better) approximation guarantee for minimizing "MLogA", "MLogGapA" or "BiMLogA" seems to be a nice open problem.
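For concreteness, the objectives mentioned above can be computed directly for a given node ordering. The sketch below is my reading of the definitions (MLogA sums the log of the ordering distance over all arcs; MLogGapA sums the log of the gaps between consecutive neighbours once they are sorted by position), so treat the exact formulas as an assumption rather than a reference implementation:

```python
import math

def mlog_a(adj, pi):
    """Assumed definition of MLogA: sum over arcs (u, v) of
    log2 |pi[u] - pi[v]|. adj maps each node to its neighbour list;
    pi maps each node to its position (no self-loops assumed)."""
    return sum(math.log2(abs(pi[u] - pi[v]))
               for u, nbrs in adj.items() for v in nbrs)

def mlog_gap_a(adj, pi):
    """Assumed definition of MLogGapA: for each node, sort the neighbour
    positions and sum log2 of the first offset (from the node itself)
    plus log2 of each consecutive gap."""
    total = 0.0
    for u, nbrs in adj.items():
        pos = sorted(pi[v] for v in nbrs)
        total += math.log2(abs(pi[u] - pos[0]))          # first gap, from u
        total += sum(math.log2(pos[i] - pos[i - 1])      # remaining gaps
                     for i in range(1, len(pos)))
    return total
```

Minimizing either quantity over permutations `pi` is then the ordering problem discussed above.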
### Objectives

Is there any better objective than "MLogA", "MLogGapA" or "BiMLogA" to serve as a proxy for the compression obtained by the BV framework? Is it possible to directly look for an ordering that minimizes the size of the output of the BV compression algorithm?

Read the paper, add your comments…

### Comments:

Read the paper, add your comments…

### Comments:

Hello, the key point of your algorithm is not really clear to me:

> • We start with the root such that all edges are possibly
> here or not. The upper bound is the sum of the $n \choose 2$
> heaviest edges, while the associated solution is the empty
> subgraph, the lower bound is thus 0.
>
> • Then at each iteration we create two children for the node
> with maximum lower bound (i.e. density of the associated
> solution). Suppose the node is at depth $i$ in the tree, we
> keep the decisions made on the first $i-1$ edges and create
> two children, one where the $i^\text{th}$ edge is included and one
> where it is not.

I need to understand this part in a clear way, if that is possible please.

> I need to understand this part in a clear way, if that is possible please.

I see. I agree it is not perfectly clear, sorry about that. Can you try to understand it with the branch and bound [wikipedia page](https://en.wikipedia.org/wiki/Branch_and_bound), [the slides](https://drive.google.com/file/d/0B6cGK503Ibt0Qlg3bUVKRnFBTG8/view) and [the code](https://github.com/maxdan94/HkS)? If it is still not clear after that, please come back and I'll try to phrase a better explanation ASAP.
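To make the quoted branch-and-bound description concrete: each search node fixes the decisions on the first `depth` edges (heaviest first); its lower bound is the weight of the edges already kept (itself a feasible solution), and its upper bound optimistically adds the weight of every still-undecided edge; the node with the largest lower bound is expanded next. This is a simplified sketch of that scheme for the toy problem "maximise the weight of a set of edges spanning at most k vertices", not the authors' exact implementation (see the linked HkS code for that); positive weights are assumed:

```python
import heapq
from itertools import count

def heaviest_k_subgraph(edges, k):
    """Branch-and-bound sketch: maximise the total weight of a set of
    edges whose endpoints span at most k vertices.
    edges: list of (u, v, w) tuples with w > 0."""
    edges = sorted(edges, key=lambda e: -e[2])      # heaviest edge first
    m = len(edges)
    suffix = [0.0] * (m + 1)                        # weight of edges i..m-1
    for i in range(m - 1, -1, -1):
        suffix[i] = suffix[i + 1] + edges[i][2]
    best = 0.0                                      # incumbent: empty subgraph
    tie = count()                                   # heap tie-breaker
    # Max-heap on the lower bound, via negation:
    # entries are (-lower_bound, tie, depth, vertices used so far).
    heap = [(0.0, next(tie), 0, frozenset())]
    while heap:
        neg_lb, _, depth, verts = heapq.heappop(heap)
        lb = -neg_lb
        if depth == m or lb + suffix[depth] <= best:
            continue                                # cannot beat the incumbent
        u, v, w = edges[depth]
        grown = verts | {u, v}
        if len(grown) <= k:                         # child 1: keep edge `depth`
            best = max(best, lb + w)                # its lb is a feasible value
            heapq.heappush(heap, (-(lb + w), next(tie), depth + 1, grown))
        heapq.heappush(heap, (neg_lb, next(tie), depth + 1, verts))  # child 2: drop it
    return best
```

Pruning a node is safe because no descendant can exceed its upper bound `lb + suffix[depth]`.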
Read the paper, add your comments…

### Comments:

##To take away:##

- This paper is about a slight improvement of the $k$-clique algorithm of Chiba and Nishizeki
- The performance in practice on sparse graphs is impressive
- The parallelization is non-trivial and the speedup is nearly optimal up to 40 threads
- Authors generate a stream of $k$-cliques to compute "compact" subgraphs
- A parallel C code is available here: https://github.com/maxdan94/kClist

##Suggestions to extend this work:##

- Can we find a node ordering better than the core ordering?
- Generate a stream of $k$-cliques to compute other quantities?
- Generalize the algorithm to $k$-motifs?
- Parallelization on higher-order $k$-cliques if more threads are available?

Slides of the talk: https://drive.google.com/file/d/15MVJ2TzkdsHcyF6tE4VeYQqH8bU0kzDE/view

> Another extension: can we guarantee a given order on the output stream? That it is uniformly random, for instance?

I think that this is a very interesting and open question! I have tried to generate a stream of k-cliques such that the order is random by modifying the kClist algorithm, but I was not able to do so. I wanted to do that in order to optimize a function depending on the k-cliques using stochastic gradient descent: I found that using a random ordering led to faster convergence than using the order in which the k-cliques are output by the kClist algorithm. Here is what I've tried:

- If you have enough RAM, then you can of course store all k-cliques and do a [random permutation](https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle). But, since you mention "stream", I do not think that this is the case for you.
- You can use another node ordering (different from the core ordering) to form the DAG, for instance a random node ordering. You may lose the theoretical upper bound on the running time, but you will see that, in practice, the algorithm is still very fast (say twice as slow as with the core ordering, though this depends on the input graph and k; you may also find settings where it is actually faster than with the core ordering). The order in which the k-cliques are streamed will then change, but it will not be uniform at random.
- Once you have formed the DAG using the node ordering (core ordering or any other ordering), you do not need to process the nodes in that same order. You can use another random ordering for that. It will add some randomness to the stream, but the order will still not be uniform at random.

Please let me know if you have any better ideas.

> Another extension: can we guarantee a given order on the output stream? That it is uniformly random, for instance?

One possible way to do this is using a buffer, although the result remains non-uniform. A buffer of size n/100 can be filled using the first n/100 outputs. Afterwards, each time a new k-clique arrives, one k-clique is randomly selected from the buffer to be output and replaced with the new one. The larger the buffer, the closer the output will be to a uniformly random order.

Read the paper, add your comments…
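The buffer idea from the comment above can be sketched in a few lines (the function name and the flush-at-the-end behaviour are my own choices; as noted, the resulting order is only approximately shuffled, not uniform):

```python
import random

def buffered_shuffle(stream, buf_size):
    """Approximately shuffle an iterable using O(buf_size) extra memory.
    Yields every element exactly once, in a perturbed order."""
    buf = []
    for item in stream:
        if len(buf) < buf_size:
            buf.append(item)                 # fill the buffer first
        else:
            j = random.randrange(buf_size)   # pick a random slot to emit
            yield buf[j]
            buf[j] = item                    # replace it with the newcomer
    random.shuffle(buf)                      # flush whatever remains
    yield from buf
```

With `buf_size` at least the stream length this degenerates into a full Fisher–Yates shuffle of the whole stream; with `buf_size = 1` it leaves the order unchanged.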
2019-05-23 20:23:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7679709792137146, "perplexity": 616.6846454105388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257361.12/warc/CC-MAIN-20190523184048-20190523210048-00474.warc.gz"}
https://email.esm.psu.edu/pipermail/macosx-tex/2008-February/034179.html
[OS X TeX] weird indentation

Ross Moore ross at ics.mq.edu.au
Sun Feb 24 13:57:05 EST 2008

On 25/02/2008, at 5:08 AM, Herbert Schulz wrote:

> Hate to burst your bubble but I see the difference in indentation
> and I'm running under 10.5.2 (which shouldn't have anything to what
> is going on anyway).
>
> I notice that if I change the first argument of the
> \setdefaultleftmargin to 7mm or larger everything seems to be
> alright. I haven't looked at its definition but I suspect that is
> where the indentation is coming from.

This is defined in the paralist.sty package.
What version do you have? Mine has:

\ProvidesPackage{paralist}%
   [2002/03/18 v2.3b Extended list environments (BS)]

Is there an earlier or later version, perhaps containing an error?
(An extra space token, that affects the first entry only, is a possibility for this kind of effect.)

> Good Luck,
>
> Herb Schulz
> (herbs at wideopenwest.com)

Hope this helps,

Ross

------------------------------------------------------------------------
Ross Moore                              ross at maths.mq.edu.au
Mathematics Department                  office: E7A-419
Macquarie University                    tel: +61 +2 9850 8955
Sydney, Australia 2109                  fax: +61 +2 9850 8114
------------------------------------------------------------------------
2023-02-06 12:01:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9499195218086243, "perplexity": 11493.201721169455}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00731.warc.gz"}
https://chemistry.stackexchange.com/questions/135834/ortho-para-selectivity-in-the-reimer-tiemann-reaction
# ortho/para-Selectivity in the Reimer-Tiemann Reaction [duplicate]

I can't figure out why the dichlorocarbene molecule produced in the Reimer-Tiemann reaction gets attached to the ortho position of phenol. The ortho position is sterically slightly more hindered due to the oxygen, so why doesn't the carbene attach to the para position?

• Does this answer your question? In Reimer-Tiemann reaction why does phenol attack the carbene from ortho position? – Aniruddha Deb Jun 27 '20 at 14:23
• No, because this answer was a bit more simple and was straightforward – user95393 Jun 27 '20 at 14:53
• @EVO Answers can be simple and straightforward. If they answer your question, then yours is a duplicate of that. – Nilay Ghosh Jun 29 '20 at 9:02
• I don't think this is a duplicate; the linked duplicate asks about C2 attack vs O attack, not C2 vs C4. – orthocresol Jun 29 '20 at 9:37

The Reimer–Tiemann reaction will give a mixture of products unless the formation of one product is highly disfavoured. Kürti and Czakó note in their introductory paragraph on the Reimer–Tiemann:[1]

1. the regioselectivity is not high, but ortho-formyl products tend to predominate;

The statement already hints at a product mixture being obtained. To further show that both ortho and para attacks are possible, the 1960 review published by Wynberg contains the following scheme as its very first scheme in the introduction:[2]

So while both products are obtained, a back-of-the-envelope calculation already shows that the distribution is not a perfect $$2:1$$ ratio. This can apparently be explained by the effect of positive counterions, as Hine and van der Veen report:[3]

The ratio of ortho to para product was found to be 2.21 [under high base concentrations], showing that the tendency towards o-substitution is indeed increased under conditions where ion-pair formation is encouraged.
One factor that would certainly be expected to be present and that would tend to favor o-substitution is an electrostatic effect. When a dichloromethylene molecule attacks the o-position of a sodium phenoxide ion-pair to yield the probable initial product, there is less separation of unlike charges than when the analogous para product is formed.

So while the steric hindrance of a single oxygen atom is not much more than that of a hydrogen atom and thus does not play a huge role, the presence of positive counterions – although increasing steric congestion – may serve to enhance the formation of the ortho-product through favourable electronic effects.

References:

[1]: L. Kürti and B. Czakó: Strategic Applications of Named Reactions in Organic Synthesis. Background and Detailed Mechanisms, Elsevier Academic Press, Burlington, MA, USA, 2005, page 378.
[2]: H. Wynberg, Chem. Rev. 1960, 60, 169–184. DOI: 10.1021/cr60204a003.
[3]: J. Hine, J. M. van der Veen, J. Am. Chem. Soc. 1959, 81, 6446–6449. DOI: 10.1021/ja01533a028.

In the Reimer-Tiemann reaction, a mixture of ortho and para isomers is obtained, in which the ortho isomer predominates (it is not the sole product). If one of the ortho positions is occupied, the para isomer is the major product. The two isomers can be separated by fractional distillation, in which the unreacted phenol and the ortho isomer distil over, leaving behind the para isomer.

The ortho product is the major one mainly for 2 reasons:
1. Probability factor (there are 2 ortho positions available vs. only 1 para position).
2. H-bonding in the final salicylaldehyde (a 6-membered chelate ring forms, which increases the stability of this product).

• Is the Reimer-Tiemann reaction thermodynamically controlled, such that the product stability determines the outcome? To me, it seems more likely that it's kinetically controlled, where transition state stability is the key.
– orthocresol Jun 29 '20 at 8:02 • Although it is run at refluxing chloroform conditions, I want to agree with orthocresol on this one and think that thermodynamic control is slightly unlikely. – Jan Jun 29 '20 at 8:55
2021-06-25 12:07:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4977037012577057, "perplexity": 3117.6702888900604}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630175.17/warc/CC-MAIN-20210625115905-20210625145905-00475.warc.gz"}
https://gitter.im/FreeCodeCamp/HelpFrontEnd/archives/2018/02/06
6th Feb 2018 Arezohayeman @Arezohayeman Feb 06 2018 00:40 UTC Hay Ray Liriano @mrlirianojr Feb 06 2018 01:06 UTC Hi AbrisM @AbrisM Feb 06 2018 01:40 UTC @mrlirianojr Hello Manan Shah @mananshah51 Feb 06 2018 02:03 UTC @mrlirianojr Hello Ray Liriano @mrlirianojr Feb 06 2018 02:05 UTC @AbrisM @mananshah51 Hi Guys! I'm new :smile: Gulsvi @gulsvi Feb 06 2018 02:06 UTC Welcome! :sparkles: Manan Shah @mananshah51 Feb 06 2018 02:09 UTC @mrlirianojr I also started with FCC last week. Let me know if I can help you in any ways. AbrisM @AbrisM Feb 06 2018 02:38 UTC @mrlirianojr Hi :) German Gamboa Gonzalez @germangamboa95 Feb 06 2018 03:24 UTC Hello FFC community! :wave: @germangamboa95 AbrisM @AbrisM Feb 06 2018 04:02 UTC Hello German Anyone use netbeans or intellij? Also, can anyone tell me why my code isn't running? https://repl.it/repls/AcidicFarawayBittern why is it yelling on line 35 when it should be compiled and ready to run? This is a short script btw :) AbrisM @AbrisM Feb 06 2018 04:08 UTC "no suitable method found" and "not applicable" German Gamboa Gonzalez @germangamboa95 Feb 06 2018 04:16 UTC @moT01 HI! what is everyone working on tonight? I'm trying to make a calendar with HTML and SCSS with plans to add javascript to make it interactive DMsalati @DMsalati Feb 06 2018 04:20 UTC hey guys can someone tell me why my icon is not working? also you will need to open it in codepen for it to work properly Nick @rhozeta Feb 06 2018 04:22 UTC @DMsalati where is the path to the icon? 
DMsalati @DMsalati Feb 06 2018 04:25 UTC I am using the same api for the icons as the rest of the stuff, heres the api path https://fcc-weather-api.glitch.me/ also heres a direct link to the icon im using rn based on my location https://cdn.glitch.com/6e8889e5-7a72-48f0-a061-863548450de5%2F50n.png?1499366021876 Claudio Restifo @Marmiz Feb 06 2018 04:32 UTC @DMsalati I'm seeing a cloudy/sunny icon right now DMsalati @DMsalati Feb 06 2018 04:35 UTC @Marmiz ok i just tried it on edge and it worked fine, but for some reason its not working in chrome, what browser are you using ? @Marmiz nvm i just reloaded and it worked there too mb sorry guys. thank you for pointing it out tho CamperBot @camperbot Feb 06 2018 04:37 UTC dmsalati sends brownie points to @marmiz :sparkles: :thumbsup: :sparkles: :star2: 1144 | @marmiz |http://www.freecodecamp.org/marmiz Claudio Restifo @Marmiz Feb 06 2018 04:37 UTC :+1: Jay Vora @jayvora92 Feb 06 2018 05:03 UTC i want return the value true if anyone of the defined varibale is true ''' if({{var1}}=='true'||{{var2}}=='true') { return 'true'; } ''' CamperBot @camperbot Feb 06 2018 05:03 UTC :bulb: to format code use backticks! more info Jay Vora @jayvora92 Feb 06 2018 05:03 UTC i want return the value true if anyone of the defined varibale is true if({{var1}}=='true'||{{var2}}=='true') { return 'true'; } can anyone tell what is wrong Sweet Coding :) @SweetCodingInc Feb 06 2018 05:08 UTC what this {{ syntax? It should be just return var1 || var2; or if( var1 == true || var2 == true){ return true; } Nishanth-S @Nishanth-S Feb 06 2018 07:38 UTC My Angular app is behaving very weird, it is not printing navbar(using bootstrap classes), however, the webpack compiles successfully and other bootstrap components like buttons are printed correctly. How do I go about this? Navbar code is correct and seems to run perfectly when put into static html aRtoo @artoodeeto Feb 06 2018 07:51 UTC hey guys who is using react? 
im trying react for the first time and i followed every instruction but i still get this error. import React from 'react'; | ReactDOM.render( > 513 | <div>REACT REACT REACT NOW!!</div>, document.getElementById('root') | ^ 514 | ); ); this is the simple code. import React from 'react'; import ReactDOM from 'react-dom'; ReactDOM.render( <div>REACT REACT REACT NOW!!</div>, document.getElementById('root') ); thats everything Sorin Ruse @sorinr Feb 06 2018 07:52 UTC @artoodeeto have u tried double quoting the html part? aRtoo @artoodeeto Feb 06 2018 07:53 UTC @sorinr just right now. yea it worked. i tried this " ' ' " didnt worked .i thougt theres something wrong with my install. thanks bri CamperBot @camperbot Feb 06 2018 07:53 UTC artoodeeto sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles: Amit Patel @AmitP88 Feb 06 2018 07:53 UTC hey guys, I'm just about to start on the Random Quote Machine project on fcc and I was wondering, is it still worth learning Grunt or Gulp to use as a build system? or should I stick to Webpack? (I sort of have a basic understanding of webpack) CamperBot @camperbot Feb 06 2018 07:53 UTC :star2: 1398 | @sorinr |http://www.freecodecamp.org/sorinr Sorin Ruse @sorinr Feb 06 2018 07:54 UTC @artoodeeto great :) aRtoo @artoodeeto Feb 06 2018 07:55 UTC @sorinr hey bro can i ask one more? Sorin Ruse @sorinr Feb 06 2018 07:56 UTC @artoodeeto not good at react but go ask. if i can help i help aRtoo @artoodeeto Feb 06 2018 07:56 UTC @sorinr the <div> is being included in the rendeer bro. Sorin Ruse @sorinr Feb 06 2018 07:57 UTC @artoodeeto then u need to read this:https://reactjs.org/docs/rendering-elements.html Randy @RandyGoldsmith Feb 06 2018 07:58 UTC @artoodeeto did you include reactdom at the top? import react-dom from ReactDOM? aRtoo @artoodeeto Feb 06 2018 07:58 UTC @sorinr im confused why is it on the tutorial he didnt use any quotation mark. 
yes sir i did import React from 'react'; import ReactDOM from 'react-dom'; import React from 'react'; import ReactDOM from 'react-dom'; Sorin Ruse @sorinr Feb 06 2018 08:00 UTC @artoodeeto because of jsx part of the react that translates it in html. i said to add double quotes just to see its working and rendering that string aRtoo @artoodeeto Feb 06 2018 08:01 UTC @sorinr yea it worked but did it like this sir. " <div> ... </div> it included the tag Randy @RandyGoldsmith Feb 06 2018 08:02 UTC @artoodeeto which excercise is this? aRtoo @artoodeeto Feb 06 2018 08:03 UTC @RandyGoldsmith ohh this is in udemy . this is an es6 lesson but hes just showing the uses of es6 in react. heres the link https://www.udemy.com/es6-in-depth/learn/v4/t/lecture/6557930?start=0 Sorin Ruse @sorinr Feb 06 2018 08:05 UTC @artoodeeto yes i know. it was just rendering the string "div .... /div" (sorry too lazy now). it also could be render('Hello world!", getelement...bla bla bla)so thats why i sent you the link above. just to see how to render jsx aRtoo @artoodeeto Feb 06 2018 08:07 UTC @sorinr yea. the link you gave me they didnt use any quote but ill ask someone else bro. thank you for you help. go to sleep bro its 3:07am now. lols CamperBot @camperbot Feb 06 2018 08:07 UTC artoodeeto sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles: api offline Sorin Ruse @sorinr Feb 06 2018 08:08 UTC @artoodeeto sorry. plz read "lazy"="too tired". havent got some sleep in the last 30 hours aRtoo @artoodeeto Feb 06 2018 08:09 UTC @sorinr haha. go to sleep bro. Sorin Ruse @sorinr Feb 06 2018 08:11 UTC @artoodeeto i hear my bed its calling me :) Claudio Restifo @Marmiz Feb 06 2018 08:15 UTC How are you converting JSX @artoodeeto ? aRtoo @artoodeeto Feb 06 2018 08:15 UTC @Marmiz what do you mean sir? @Marmiz im using webpack? is that what you mean? 
Claudio Restifo @Marmiz Feb 06 2018 08:16 UTC using html inside JS is JSX, and need to be converted :) (usually with babel) try: ReactDOM.render( React.createElement('div', null, 'Hello World'), document.getElementById('root') ); and see if it works aRtoo @artoodeeto Feb 06 2018 08:18 UTC @Marmiz i have babel to sir @Marmiz npm install babel-preset-react react react-dom react-bootstrap --save-dev this are the presets the i installed sir. i think this is the problem. Claudio Restifo @Marmiz Feb 06 2018 08:22 UTC @artoodeeto if you just want to prototype just use create-react-app... otherwise you need to set webpack as well as babel. And that can be confusing especially if you just want to try it out aRtoo @artoodeeto Feb 06 2018 08:22 UTC @Marmiz maybe i just need sleep too. haha. thank you sir. appreciate the help CamperBot @camperbot Feb 06 2018 08:22 UTC artoodeeto sends brownie points to @marmiz :sparkles: :thumbsup: :sparkles: :star2: 1145 | @marmiz |http://www.freecodecamp.org/marmiz Claudio Restifo @Marmiz Feb 06 2018 08:23 UTC Or there are online editors that lets you prototype with React. i like https://codesandbox.io/ aRtoo @artoodeeto Feb 06 2018 08:27 UTC @Marmiz thanks sir CamperBot @camperbot Feb 06 2018 08:27 UTC artoodeeto sends brownie points to @marmiz :sparkles: :thumbsup: :sparkles: api offline Karthik @karthik-ir Feb 06 2018 08:54 UTC hello , I'm new to es6 I'm trying to understand the below code import React from 'react' const ListingPage = ({onListingPageLoad}) => { return( <div> "Some text value" </div> ) } How does this create a listingPage component? I tried converting this to es5 using babel transpiler online and it knows its a create component. I'm just wondering how. Also how can i add componentDidMount() in this code? DMsalati @DMsalati Feb 06 2018 09:29 UTC can someone help me? not really sure how to make $.toggle work with ajax its not showing.
Also when im displaying the temp i want there to be a space between the unit and number but its not doing it can someone explain to me why its not working please? its been frustrating me for the past hour https://codepen.io/Muradmsalati/pen/gvMyYb?editors=1010 abyshukla @abyshukla Feb 06 2018 09:34 UTC Hey guys, what is the problem with this jquery? https://codepen.io/aby_shukla/pen/aqZgNw dinesh @1532j0004kg Feb 06 2018 09:36 UTC test is id DMsalati @DMsalati Feb 06 2018 09:36 UTC you need to add a # in $("#test") dinesh @1532j0004kg Feb 06 2018 09:36 UTC give the # before add bootstrap and jquery in settings ! @abyshukla Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 09:39 UTC can anyone advise me how to improve this please? https://codepen.io/MuhammedK/full/KQzRXe/ dinesh @1532j0004kg Feb 06 2018 09:40 UTC why u didn't css tab ,?! abyshukla @abyshukla Feb 06 2018 09:42 UTC @1532j0004kg I added the jquery. do not need bootstrap DMsalati @DMsalati Feb 06 2018 09:42 UTC i would add some color to it, it looks very bland dinesh @1532j0004kg Feb 06 2018 09:42 UTC yup @abyshukla Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 09:42 UTC @DMsalati background colour? dinesh @1532j0004kg Feb 06 2018 09:43 UTC then add #test dont just put test in js abyshukla @abyshukla Feb 06 2018 09:43 UTC doesn't work @DMsalati :( DMsalati @DMsalati Feb 06 2018 09:44 UTC @MuhammedKarim possibly and maybe pictures of the recipes too? Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 09:45 UTC ok, good idea, thanks :) @DMsalati CamperBot @camperbot Feb 06 2018 09:45 UTC muhammedkarim sends brownie points to @dmsalati :sparkles: :thumbsup: :sparkles: :cookie: 167 | @dmsalati |http://www.freecodecamp.org/dmsalati DMsalati @DMsalati Feb 06 2018 09:48 UTC @abyshukla try using getjson instead of ajax its basically the same thing abyshukla @abyshukla Feb 06 2018 09:49 UTC I was using getJSON in my other pen. Same thing.
Like @Masd925 said, it is a crossorigin issue Edit: crossorigin.me is dead @ezioda004 Feb 06 2018 10:01 UTC @abyshukla Remove crossorigin.me from url and try adding &origin=* abyshukla @abyshukla Feb 06 2018 10:02 UTC ok @ezioda004 Still the same @ezioda004 https://codepen.io/aby_shukla/pen/aqZgNw @ezioda004 Feb 06 2018 10:06 UTC @abyshukla Its working now, do console.log(wikiJson) inside the success function, and you'll see an object dinesh @1532j0004kg Feb 06 2018 10:07 UTC

$("button").click(function(){
  $.ajax({
    type: "GET",
    url: "https://en.wikipedia.org/w/api.php?action=opensearch&search=api&limit=10&namespace=0&format=jsonfm&origin=*",
    //dataType : jsonp,
    success: function(wiki) {
      $("#test").html(wiki);
      //console.log(wiki)
    },
    error: function(err) {
      $("#test").text(err);
    }
  });
});

try this its working ! @abyshukla @ezioda004 Feb 06 2018 10:07 UTC Also I'd suggest removing async: false, its deprecated hensn5250 @hensn5250 Feb 06 2018 10:11 UTC @ezioda004 did you complete the Frontend Cert? abyshukla @abyshukla Feb 06 2018 10:12 UTC It did not work in the pen. I can see the data though @ezioda004 Feb 06 2018 10:13 UTC @hensn5250 Making the last project, portfolio and then I'm done ^^ hensn5250 @hensn5250 Feb 06 2018 10:14 UTC the last project is a portfolio? Abraham Anak Agung @AbrahamAnakAgung Feb 06 2018 10:15 UTC @abyshukla it work, the problem is you can't display object with .html try console.log(wikiJson.query.pages); and it show you all the query you ask try this and you will see it $('#test').html(wikiJson.query.pages[11557106].title); hensn5250 @hensn5250 Feb 06 2018 10:17 UTC I'm on the twitch app. I saw your codepen page. Good stuff Aditya @ezioda004 Feb 06 2018 10:17 UTC @hensn5250 Nah last is simon, but I didnt do portfolio one before cause I wanted to make a nice portfolio.
hensn5250 @hensn5250 Feb 06 2018 10:17 UTC Oh ok Aditya @ezioda004 Feb 06 2018 10:17 UTC Thanks, not that good at css but I've tried hensn5250 @hensn5250 Feb 06 2018 10:18 UTC well if ever you want to collab in the future I'm up for it. I'm 2 projects behind at the moment but just putting it out there. Later. Aditya @ezioda004 Feb 06 2018 10:19 UTC @hensn5250 I will keep that in mind :) abyshukla @abyshukla Feb 06 2018 10:20 UTC thanks guys @ezioda004 @1532j0004kg @padunk CamperBot @camperbot Feb 06 2018 10:20 UTC abyshukla sends brownie points to @ezioda004 and @1532j0004kg and @padunk :sparkles: :thumbsup: :sparkles: :cookie: 274 | @1532j0004kg |http://www.freecodecamp.org/1532j0004kg :cookie: 434 | @ezioda004 |http://www.freecodecamp.org/ezioda004 :cookie: 425 | @padunk |http://www.freecodecamp.org/padunk Ayush Bahuguna @relentless-coder Feb 06 2018 10:23 UTC export const Client = (props)=>{ return ( <div className="client"> <Sidebar props={props}/> <Switch> <Route exact path="/clients" component={AllClients}/> <Route exact path="/clients/:clientId" component={SingleClient}/> <Route exact path="/clients/:clientId/branches" component={Branches}/> </Switch> </div> ); }; do i need to place the exact attribute in every route? Abraham Anak Agung @AbrahamAnakAgung Feb 06 2018 10:24 UTC @relentless-coder in your case yes, cause the start of the path have the same name /clients Ayush Bahuguna @relentless-coder Feb 06 2018 10:27 UTC @padunk so this is normal? Abraham Anak Agung @AbrahamAnakAgung Feb 06 2018 10:28 UTC @relentless-coder yes except you have different path name like / /home /product etc than you need to place exact only on / Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 10:30 UTC how can i set these apart? 
Aditya @ezioda004 Feb 06 2018 10:32 UTC @MuhammedKarim If you're using flex then can do justify-content: space-between; Ayush Bahuguna @relentless-coder Feb 06 2018 10:34 UTC @padunk sure, thank you CamperBot @camperbot Feb 06 2018 10:34 UTC relentless-coder sends brownie points to @padunk :sparkles: :thumbsup: :sparkles: :cookie: 429 | @padunk |http://www.freecodecamp.org/padunk Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 10:44 UTC @ezioda004 would it work with display:inline-block? Aditya @ezioda004 Feb 06 2018 10:47 UTC @MuhammedKarim Dont think so, if you're not using flex then I guess you can use margin-right: Xpx; on the left div to give space b/w them. Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 11:02 UTC with display:flex this happens @ezioda004 Aditya @ezioda004 Feb 06 2018 11:10 UTC @MuhammedKarim Do you have codepen link? Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 11:12 UTC @ezioda004 Aditya @ezioda004 Feb 06 2018 11:16 UTC Oh its a table You can wrap them up in a div and then use flex I removed display: flex; from the table and img since its not needed Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 11:18 UTC then there's too much space between them! @ezioda004 Aditya @ezioda004 Feb 06 2018 11:18 UTC Ok then you can do justify-content: space-around; I updated it You can play with the space with flex property of the items. Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 11:21 UTC it's not working on my one for some reason i need to use inline styles coz its a project for Khan Academy and they don't have seperate CSS tab @ezioda004 Aditya @ezioda004 Feb 06 2018 11:23 UTC You need to wrap the <table> and <img> inside a <div> Like this <div> <table> </table> <img> </div> then remove the display:flex from your inline table styling Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 11:24 UTC that's precisely what i did, can you check mine again please? 
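A minimal sketch of the wrapper-div approach described above, using inline styles since the project has no separate CSS tab (the table contents and image attributes here are placeholders, not from the thread):

```html
<!-- One flex container wraps both items; the table and img
     themselves no longer need display: flex -->
<div style="display: flex; justify-content: space-around;">
  <table>
    <tr><td>Name</td><td>Example</td></tr>
  </table>
  <img src="photo.png" alt="portrait" width="100">
</div>
```

justify-content: space-around distributes the leftover horizontal space around both items, while space-between would push them to the container's edges instead.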
Aditya @ezioda004 Feb 06 2018 11:24 UTC And add display: flex; justify-content: space-around; to the above div Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 11:24 UTC i did! :( @ezioda004 oh got it now, i just merged the inline styles! thanks a lot :) @ezioda004 CamperBot @camperbot Feb 06 2018 11:25 UTC muhammedkarim sends brownie points to @ezioda004 :sparkles: :thumbsup: :sparkles: :cookie: 435 | @ezioda004 |http://www.freecodecamp.org/ezioda004 Aditya @ezioda004 Feb 06 2018 11:26 UTC @MuhammedKarim Awesome :thumbsup: Puyan Wei @puyanwei Feb 06 2018 11:33 UTC Hi, I'm trying to write some tests for my single page app which uses logic. As a start I want to press and button and have a number appear on the page. I have gotten zombie.js to press the button, but the outcome doesn't change. Do any of you guys know if the zombie js headless browser works with JQuery/DOM manipulation? I have other passing tests working that asserts elements on the page. My test; const Browser = require("zombie"); var chai = require("chai"), expect = chai.expect, should = chai.should(); var should = require("chai").should(); var url = "http://localhost:8080/"; describe("User visits page", function() { const browser = new Browser(); before(function() { return browser.visit(url); }); describe("submits form", function() { it("should be successful", function(done) { return browser.pressButton("1", function() { browser.text("#first").should.equal("1"); done(); }); }); }); }); Sorry not sure if this should go front end or back end. Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 11:45 UTC how can i change the contents to make it look better? does the page look better? https://codepen.io/MuhammedK/full/KQzRXe/ abyshukla @abyshukla Feb 06 2018 11:51 UTC Wiki Search done. Please review https://codepen.io/aby_shukla/full/jZrNEX/ CamperBot @camperbot Feb 06 2018 11:51 UTC #### freeCodeCamp Wiki: :point_right: The freeCodeCamp wiki can be found on our forum. 
Please follow the link and search there. Marianissimus @Marianissimus Feb 06 2018 12:33 UTC @abyshukla the search results display in the console, but not on page abyshukla @abyshukla Feb 06 2018 12:44 UTC Sorry. You can try now. I was fiddling with it... Was trying to generate search by pressing enter @Marianissimus ahmed-issa-mohd @ahmed-issa-mohd Feb 06 2018 12:49 UTC I don't understand perspective-origin in css can you help me ? Lean Junio @leanjunio Feb 06 2018 12:50 UTC Hey guys, how does your team get their users' banking information? Just for the front end part? LydaTech @lydatech Feb 06 2018 13:42 UTC @leanjunio banking info? Stephen Chow @stevchow Feb 06 2018 13:59 UTC Hello, I am just started beta version of freecodecamp, and stuck in "Create a Set of Radio Buttons" section. The webpage show I complete the challenge but I can't go to the next challenge. It say "something went wrong try again later". I have try logging out and clicking the map in nav bar, but it doesn't work. Any solution? Can someone help me. I don't know how to prevent opening other recipes when I click on just one of them.
Here is my code:

import React from 'react';
import { Collapse, Button, CardBody, Card } from 'reactstrap';

export class ShowRecipes extends React.Component {
  constructor(props) {
    super(props);
    this.state = { collapse: false }
    this.toggle2 = this.toggle2.bind(this);
  }
  toggle2() {
    this.setState({ collapse: !this.state.collapse });
  }
  render() {
    var recipeList = this.props.recipes.map(function(recipeInfo, index, e) {
      return (
        <div className="recipe-list">
          <h1 id={recipeInfo.title} onClick={(e) => {this.toggle2(e); e.nativeEvent.stopImmediatePropagation()}}>{recipeInfo.title}</h1>
          <Collapse isOpen={this.state.collapse}>
            <Card>
              <CardBody>
                {recipeInfo.instructions}
              </CardBody>
            </Card>
          </Collapse>
        </div>
      );
    }, this);
    return recipeList;
  }
}

Here is also a github link: https://github.com/Teo03/recipe_box Stephen Chow @stevchow Feb 06 2018 14:06 UTC just solved it by manually type the next challenge link shivendrarox @shivendrarox Feb 06 2018 14:12 UTC Stephen James @sjames1958gm Feb 06 2018 14:22 UTC @Teo03 You collapse state should identify which recipe is collapsed/open You could store the index of the open recipe in your state rather than just a boolean @Teo03 Your property isOpen could be (if you stored the open index in the state) isOpen={ this.state.open === index} dinesh @1532j0004kg Feb 06 2018 14:23 UTC @sjames1958gm hi Stephen James @sjames1958gm Feb 06 2018 14:23 UTC @1532j0004kg :wave: dinesh @1532j0004kg Feb 06 2018 14:24 UTC can u please help me to learn loginsystem @sjames1958gm with authentication Nate Mallison @NJM8 Feb 06 2018 14:25 UTC @sjames1958gm After that can you help me with my problem on HelpBackEnd?? :smile: Stephen James @sjames1958gm Feb 06 2018 14:25 UTC @1532j0004kg I am at work, so I cannot devote any time right now. I can answer one off questions - that's about it Nate Mallison @NJM8 Feb 06 2018 14:26 UTC @1532j0004kg What are you using for database? Nick @rhozeta Feb 06 2018 14:26 UTC anyone have any experience with passport.js?
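A plain-JS sketch of the "store the open index in state" idea suggested above (the names createCollapseState, toggle, and isOpen are illustrative, not React or reactstrap APIs) — with a single index instead of one boolean, opening one recipe implicitly closes the rest:

```javascript
// Sketch: only one recipe open at a time, tracked by its index.
function createCollapseState() {
  var state = { open: null }; // index of the open recipe, or null

  return {
    // Clicking a title toggles it: close it if already open, else open it
    toggle: function (index) {
      state.open = state.open === index ? null : index;
    },
    // What each <Collapse isOpen={...}> would receive
    isOpen: function (index) {
      return state.open === index;
    }
  };
}
```

In the component itself this maps to this.setState({open: index}) in the click handler and isOpen={this.state.open === index} on each Collapse.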
dinesh @1532j0004kg Feb 06 2018 14:26 UTC @sjames1958gm ok np carryon @NJM8 mongo Nate Mallison @NJM8 Feb 06 2018 14:27 UTC @1532j0004kg I've been using bcrypt for now, check out this tutorial. https://www.rithmschool.com/courses/intermediate-node-express dinesh @1532j0004kg Feb 06 2018 14:27 UTC ohh ok in authentication i want to know onething . Nate Mallison @NJM8 Feb 06 2018 14:28 UTC @rhozeta I don't yet but there is some basics on that Rithm School tutorial above, maybe that will help? Nick @rhozeta Feb 06 2018 14:28 UTC @NJM8 ah thatas looks like a good resource, thanks CamperBot @camperbot Feb 06 2018 14:28 UTC rhozeta sends brownie points to @njm8 :sparkles: :thumbsup: :sparkles: :cookie: 297 | @njm8 |http://www.freecodecamp.org/njm8 Nate Mallison @NJM8 Feb 06 2018 14:29 UTC Sure thing, I've found a bunch of their stuff to be great dinesh @1532j0004kg Feb 06 2018 14:30 UTC if i post anything with my profile logged in . i want to store in my dashboard only so how to store the data in database ?\ @NJM8 Nate Mallison @NJM8 Feb 06 2018 14:32 UTC @1532j0004kg You I think are looking for a one to many relationship. so a user database and a posts database. the user has an array of ids linking to posts. each post only has one linking id relating to it's owner dinesh @1532j0004kg Feb 06 2018 14:32 UTC i mean , everyone have their unique dashboard , must be only visible to them only Nate Mallison @NJM8 Feb 06 2018 14:32 UTC yup That's basically what I'm making now for a basic shopping list app with admin functionality as well. going on Heroku! They will show you all that. dinesh @1532j0004kg Feb 06 2018 14:35 UTC thankyou i will go through this link and give u the status , @NJM8 thanks and congrats ! for ur project :sparkles: CamperBot @camperbot Feb 06 2018 14:35 UTC 1532j0004kg sends brownie points to @njm8 :sparkles: :thumbsup: :sparkles: :cookie: 298 | @njm8 |http://www.freecodecamp.org/njm8 Nate Mallison @NJM8 Feb 06 2018 14:35 UTC Sure thing, good luck! 
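A plain-object sketch of the one-to-many shape described above (illustrative field names and sample data, not a Mongoose schema): each user holds an array of post ids, each post holds its owner's id, so a user's dashboard is just "posts whose owner is this user":

```javascript
// One-to-many: one user -> many posts (sample data for illustration)
var users = [
  { _id: "u1", name: "dinesh", posts: ["p1", "p2"] },
  { _id: "u2", name: "nate",   posts: ["p3"] }
];

var posts = [
  { _id: "p1", owner: "u1", text: "first post" },
  { _id: "p2", owner: "u1", text: "second post" },
  { _id: "p3", owner: "u2", text: "hello" }
];

// A dashboard only ever shows the logged-in user's own posts
function dashboardFor(userId) {
  return posts.filter(function (p) { return p.owner === userId; });
}
```

With MongoDB the same lookup would be a query on the posts collection filtered by the owner id of the authenticated user.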
dinesh @1532j0004kg Feb 06 2018 14:36 UTC so where i can start with fundamentals or intermediate ? Nate Mallison @NJM8 Feb 06 2018 14:37 UTC do fundamentals first It'll teach you the database and routing stuff, then intermediate adds authentication dinesh @1532j0004kg Feb 06 2018 14:38 UTC ok :+1: is the video is enough to learn or i want to look through the words ? @NJM8 Nate Mallison @NJM8 Feb 06 2018 14:46 UTC @1532j0004kg I would read it too, it'll help a lot And check out all the links dinesh @1532j0004kg Feb 06 2018 14:47 UTC :+1: done thanks a lot @NJM8 CamperBot @camperbot Feb 06 2018 14:47 UTC 1532j0004kg sends brownie points to @njm8 :sparkles: :thumbsup: :sparkles: api offline Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 15:38 UTC how can i make this contents look good...? Sweet Coding :) @SweetCodingInc Feb 06 2018 15:40 UTC @MuhammedKarim add bootstrap Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 15:42 UTC which style? is there a bootstrap contents style? Sweet Coding :) @SweetCodingInc Feb 06 2018 15:49 UTC @MuhammedKarim checkout their list groups Matej Bošnjak @mbosnjak01 Feb 06 2018 15:49 UTC Just style it with css. Few tips. Remove link underline, add some font, replace numbers with unordered list ... get creative XD maybe even some hover color changing effect with 0.5 seconds of transition Singh Harpal @harry9656 Feb 06 2018 15:59 UTC Hi can someone give me some feedback on this: https://codepen.io/harry9656/full/RQRLvy/ ?? Sweet Coding :) @SweetCodingInc Feb 06 2018 16:01 UTC @harry9656 I didn't understand a word in it, except your name.. But the content organization and placement is very neat Sweet font as well Singh Harpal @harry9656 Feb 06 2018 16:01 UTC it was made with bootstrap 4 and not bootstrap 3 as they teach you on fcc. 
My initial idea was to make a quick template as possible but, I ended up using most of my time learning bootstrap 4 Sweet Coding :) @SweetCodingInc Feb 06 2018 16:01 UTC @harry9656 : good work :+1: Singh Harpal @harry9656 Feb 06 2018 16:01 UTC @SweetCodingInc yeah I didn't bother to translate from italian to english thanks @SweetCodingInc thanks CamperBot @camperbot Feb 06 2018 16:01 UTC harry9656 sends brownie points to @sweetcodinginc :sparkles: :thumbsup: :sparkles: :cookie: 243 | @sweetcodinginc |http://www.freecodecamp.org/sweetcodinginc Singh Harpal @harry9656 Feb 06 2018 16:02 UTC can you tell me if the button are working? Sweet Coding :) @SweetCodingInc Feb 06 2018 16:02 UTC @harry9656 Also, I'd add some smooth scroll effect when you click on the links in navbar Singh Harpal @harry9656 Feb 06 2018 16:02 UTC i tried to make popovers for the first time Sweet Coding :) @SweetCodingInc Feb 06 2018 16:02 UTC you mean tooltips? yeah, those are working Sorry I don't have any account :P Singh Harpal @harry9656 Feb 06 2018 16:03 UTC @SweetCodingInc yeah whatever are they called :satisfied: @SweetCodingInc I could add smooth scroll but it is not my main concern, right now i want to go deep into js..... thanks again for the feedback Sweet Coding :) @SweetCodingInc Feb 06 2018 16:04 UTC @harry9656 :+1: AbrisM @AbrisM Feb 06 2018 16:32 UTC Hi all, could someone tell me why i'm having a println error for this? https://onlinegdb.com/H1Hfu8DIf aRtoo @artoodeeto Feb 06 2018 16:40 UTC @AbrisM you forgot to close the comment bro on line one hey anyone using react its my first time how i still get an error using JSX but everything on my json.package is installed. 
@Marauder @Kai "keywords": [], "author": "", "license": "ISC", "devDependencies": { "babel-cli": "^6.26.0", "babel-core": "^6.26.0", "babel-loader": "^7.1.2", "babel-polyfill": "^6.26.0", "babel-preset-env": "^1.6.1", "babel-preset-es2016": "^6.24.1", "babel-preset-es2017": "^6.24.1", "babel-preset-react": "^6.24.1", "react": "^16.2.0", "react-bootstrap": "^0.32.1", "react-dom": "^16.2.0", "webpack": "^3.10.0", "webpack-dev-server": "^2.11.1" }, "presets": [ "env", "react" ], "dependencies": { "create-react-app": "^1.5.1", "start": "^5.1.0" } } thats inside my package.json bros. Vlad Fernandes @Vlad-Fernandes Feb 06 2018 16:45 UTC @artoodeeto what's the error ? aRtoo @artoodeeto Feb 06 2018 16:46 UTC @vieira83 this one bro. <div>REACT REACT REACT NOW!!</div>, document.getElementById('root'); @vieira83 the tag bro Vlad Fernandes @Vlad-Fernandes Feb 06 2018 16:48 UTC without seeing the code is hard to tell are you using webpack to transpile? and run the webpack to compile Razvan Jackson @RazvanJackson Feb 06 2018 17:20 UTC Hey guys! html,body{ margin: 0px; padding: 0px; } body{ height: 100%; background: #929292; background-size: 100% 100%; overflow: visible; } why do i have a little gap down page like html is not going all the way down Eric Weiss @eweiss17 Feb 06 2018 17:29 UTC what is the question? Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 17:41 UTC @mbosnjak01 thanks for the creative ideas lol imma try to do them CamperBot @camperbot Feb 06 2018 17:41 UTC muhammedkarim sends brownie points to @mbosnjak01 :sparkles: :thumbsup: :sparkles: :cookie: 227 | @mbosnjak01 |http://www.freecodecamp.org/mbosnjak01 dinesh @1532j0004kg Feb 06 2018 17:58 UTC guys what is the purpose of origin=* Stephen James @sjames1958gm Feb 06 2018 18:01 UTC @1532j0004kg For the Wikimedia API it signals the far end to send the correct headers in the response so as to avoid CORS problems. dinesh @1532j0004kg Feb 06 2018 18:02 UTC sry , cant get . can u please explain with another word ? 
Sweet Coding :) @SweetCodingInc Feb 06 2018 18:09 UTC @1532j0004kg When you request data from ajax, by default, it is a strict requirement that the client and server MUST exist on same host. Now, typically that is not the case. Let's say your code runs on codepen.io and you request data from wikipedia api wikipedia.org - then by security standards, the browser will block this request. Because you're sending ajax request from codepen.io to wikipedia.org This is considered as a security threat. To address this problem, your server must send particular headers in the response. That header is Access-Control-Allow-Origin. The value of this header tells the browser if the server is a legitimate server. If the value of this header matches with your domain (which is codepen.io), then the browser will allow this request. origin=* will cause the wikipedia API to set a header Access-Control-Allow-Origin to * which means, this server allows ajax (XMLHttpRequest) from any domain or origin dinesh @1532j0004kg Feb 06 2018 18:12 UTC server is wikipedia ? @SweetCodingInc Sweet Coding :) @SweetCodingInc Feb 06 2018 18:15 UTC @1532j0004kg it's wikipedia.org - the hostname it's just an example but it applies to all the servers out there.. dinesh @1532j0004kg Feb 06 2018 18:15 UTC in this example wiki right ? Sweet Coding :) @SweetCodingInc Feb 06 2018 18:15 UTC more specifically, a server that is hosted at wikipedia.org yes devlyn @devlohnes13 Feb 06 2018 18:15 UTC anyone look at the personal portfolio project code and just get absolutely lost?
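A small sketch tying the explanation above back to the request URL from earlier in the thread (the helper name wikiSearchUrl is illustrative): appending origin=* is what asks the Wikipedia API to respond with Access-Control-Allow-Origin: *, so the browser accepts the cross-domain response:

```javascript
// Build the Wikipedia opensearch URL with origin=* so the API
// sends Access-Control-Allow-Origin: * in its response headers
function wikiSearchUrl(term) {
  return "https://en.wikipedia.org/w/api.php" +
    "?action=opensearch" +
    "&search=" + encodeURIComponent(term) +
    "&limit=10&namespace=0&format=json" +
    "&origin=*";
}
```

Without the origin parameter (or a proxy setting the header for you), the browser blocks the response even though the request reaches the server.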
dinesh @1532j0004kg Feb 06 2018 18:16 UTC @SweetCodingInc thanks a lot :fire: CamperBot @camperbot Feb 06 2018 18:16 UTC 1532j0004kg sends brownie points to @sweetcodinginc :sparkles: :thumbsup: :sparkles: :cookie: 247 | @sweetcodinginc |http://www.freecodecamp.org/sweetcodinginc dinesh @1532j0004kg Feb 06 2018 18:17 UTC • means what @SweetCodingInc * in orgin = * Sweet Coding :) @SweetCodingInc Feb 06 2018 18:19 UTC I can't see your question clearly it just shows 2 dots... coderNewby @coderNewby Feb 06 2018 18:19 UTC @DarrenfJ thanks, I added GitHub what is next? CamperBot @camperbot Feb 06 2018 18:19 UTC codernewby sends brownie points to @darrenfj :sparkles: :thumbsup: :sparkles: :star2: 2379 | @darrenfj |http://www.freecodecamp.org/darrenfj Darren @DarrenfJ Feb 06 2018 18:20 UTC @coderNewby thanks test test thanks @coderNewby CamperBot @camperbot Feb 06 2018 18:20 UTC darrenfj sends brownie points to @codernewby :sparkles: :thumbsup: :sparkles: :cookie: 5 | @codernewby |http://www.freecodecamp.org/codernewby Darren @DarrenfJ Feb 06 2018 18:20 UTC ok, you are good to go. they're linked now dinesh @1532j0004kg Feb 06 2018 18:20 UTC in origin=* * refers what ? @SweetCodingInc coderNewby @coderNewby Feb 06 2018 18:21 UTC thanks @Darren CamperBot @camperbot Feb 06 2018 18:21 UTC codernewby sends brownie points to @darren :sparkles: :thumbsup: :sparkles: :cookie: 70 | @darren |http://www.freecodecamp.org/darren Sweet Coding :) @SweetCodingInc Feb 06 2018 18:21 UTC @1532j0004kg I see Gulsvi @gulsvi Feb 06 2018 18:21 UTC @devlohnes13 There is a mix of basic and intermediate code in that project. 
Many people add the more difficult code later after completing other projects Sweet Coding :) @SweetCodingInc Feb 06 2018 18:21 UTC * means anything Gulsvi @gulsvi Feb 06 2018 18:21 UTC If you have a question about any of the code, feel free to ask @devlohnes13 dinesh @1532j0004kg Feb 06 2018 18:21 UTC @SweetCodingInc :+1: Sweet Coding :) @SweetCodingInc Feb 06 2018 18:21 UTC requests from any and all origin(s) are allowed dinesh @1532j0004kg Feb 06 2018 18:22 UTC thanks a lot :+1: Sweet Coding :) @SweetCodingInc Feb 06 2018 18:24 UTC @1532j0004kg :+1: dinesh @1532j0004kg Feb 06 2018 18:24 UTC we can give origin as codepen.io here Sweet Coding :) @SweetCodingInc Feb 06 2018 18:24 UTC you can... dinesh @1532j0004kg Feb 06 2018 18:24 UTC ? :+1: Sweet Coding :) @SweetCodingInc Feb 06 2018 18:25 UTC Read this if you want detailed technical specs - https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS CORS- Cross Origin Resource Sharing dinesh @1532j0004kg Feb 06 2018 18:25 UTC so its help the wikipedia to know from where the request is coming from right ? devlyn @devlohnes13 Feb 06 2018 18:25 UTC thank you @gulsvi ! CamperBot @camperbot Feb 06 2018 18:25 UTC devlohnes13 sends brownie points to @gulsvi :sparkles: :thumbsup: :sparkles: :star2: 2586 | @gulsvi |http://www.freecodecamp.org/gulsvi Sweet Coding :) @SweetCodingInc Feb 06 2018 18:29 UTC @1532j0004kg wikipedia doesn't care where the request comes from. Your browser cares what responses to accept Singh Harpal @harry9656 Feb 06 2018 18:30 UTC Is up anyone?? need confirmation for a bug... 
dinesh @1532j0004kg Feb 06 2018 18:30 UTC then y we sending our domain in origin Sweet Coding :) @SweetCodingInc Feb 06 2018 18:30 UTC to let your browser know that it should accept responses from wikipedia, the wikipedia response must contain a header Access-Control-Allow-Origin that matches with your domain Singh Harpal @harry9656 Feb 06 2018 18:30 UTC Sweet Coding :) @SweetCodingInc Feb 06 2018 18:30 UTC so it can be either codepen.io or some random wildcard * dinesh @1532j0004kg Feb 06 2018 18:31 UTC :+1: so i must to learn the purpose of headers in request :smile: Sweet Coding :) @SweetCodingInc Feb 06 2018 18:32 UTC @1532j0004kg Yes Eric Weiss @eweiss17 Feb 06 2018 18:32 UTC @harry9656 what is your problem Singh Harpal @harry9656 Feb 06 2018 18:33 UTC test cases doesn't match with instructions Eric Weiss @eweiss17 Feb 06 2018 18:33 UTC did you do what is said, Initialize the three variables a, b, and c with 5, 10, and "I am a" Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:33 UTC Hello people!!! Eric Weiss @eweiss17 Feb 06 2018 18:34 UTC test cases is what the values equal after they perform the code in the 'do not change' section Singh Harpal @harry9656 Feb 06 2018 18:34 UTC ok it was my fault thanks I was doing something else Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 18:35 UTC Hi! @Boris-the-Llama Singh Harpal @harry9656 Feb 06 2018 18:35 UTC @eweiss17 thanks CamperBot @camperbot Feb 06 2018 18:35 UTC harry9656 sends brownie points to @eweiss17 :sparkles: :thumbsup: :sparkles: :cookie: 605 | @eweiss17 |http://www.freecodecamp.org/eweiss17 Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:35 UTC hi @MuhammedKarim ! does anyone know if in js when making the background a random colour you can exclude certain colours, like white? if you understand me Ghost @ghost~5a4a80acd73408ce4f859755 Feb 06 2018 18:37 UTC Sorry, I'm terrible at JS :( @Boris-the-Llama Gulsvi @gulsvi Feb 06 2018 18:38 UTC @Boris-the-Llama Yes, it's possible to do that in JS. 
How to do it depends on the code you are using ;) Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:39 UTC im using this; var hex = Math.floor(Math.random() * 0xFFFFFF); return "#" + ("000000" + hex.toString(16)).substr(-6); 95% of the time it is fine, though sometimes the background is white and you cant see the text Gulsvi @gulsvi Feb 06 2018 18:40 UTC Many people use an array of good background + text color combinations and pick a random one from the array In your example, you can check if hex = #ffffff if it does, generate another value Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:43 UTC there are a range of colours, like dirty whites and very light pinks that also make it difficult to see, is there a way of excluding all of them? like making sure the white value is above a certain level? sounds a long shot i know Gulsvi @gulsvi Feb 06 2018 18:44 UTC It's definitely possible, but gets more complicated. Stephen James @sjames1958gm Feb 06 2018 18:44 UTC @Boris-the-Llama You could do random 0-255 for each piece. But excluding certain colors, difficult? Gulsvi @gulsvi Feb 06 2018 18:44 UTC You could convert the hex to a HSL color - I think lots of people do that to ensure the right amount of contrast between two colors It's much, much easier to define an array of 20 or so color combinations though and just pick those combos at random Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:46 UTC yeah I think i am going to just do an array, thanks @gulsvi CamperBot @camperbot Feb 06 2018 18:46 UTC boris-the-llama sends brownie points to @gulsvi :sparkles: :thumbsup: :sparkles: :star2: 2588 | @gulsvi |http://www.freecodecamp.org/gulsvi Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:46 UTC @sjames1958gm are you a chelsea fan? you have the chelsea badge as your avatar Stephen James @sjames1958gm Feb 06 2018 18:47 UTC @Boris-the-Llama I am a very sad chelsea fan :( Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:47 UTC did you know they lost last night? 
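A sketch of the "check and regenerate" idea from the color discussion above. Instead of only rejecting the exact value #ffffff, this rejects any color whose three channels are all near the top of the range, which also catches the dirty whites and very light pinks mentioned; the 0xCC cutoff is an arbitrary choice for illustration:

```javascript
// True when all three RGB channels are close to white
function isNearWhite(hex) {
  var r = (hex >> 16) & 0xFF,
      g = (hex >> 8) & 0xFF,
      b = hex & 0xFF;
  return r > 0xCC && g > 0xCC && b > 0xCC;
}

// Random hex color string that re-rolls near-white results
function randomHex() {
  var hex;
  do {
    hex = Math.floor(Math.random() * 0x1000000); // 0x000000-0xFFFFFF
  } while (isNearWhite(hex));
  return "#" + ("000000" + hex.toString(16)).substr(-6);
}
```

For guaranteed contrast against white text, converting to HSL and capping the lightness (or using a fixed palette, as suggested above) is more reliable than per-channel cutoffs.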
they didn't play well @sjames1958gm Stephen James @sjames1958gm Feb 06 2018 18:48 UTC @Boris-the-Llama Yes, I was at home, had to turn it off @Boris-the-Llama Thanks for the reminder :( lol CamperBot @camperbot Feb 06 2018 18:48 UTC sjames1958gm sends brownie points to @boris-the-llama :sparkles: :thumbsup: :sparkles: :cookie: 263 | @boris-the-llama |http://www.freecodecamp.org/boris-the-llama Sweet Coding :) @SweetCodingInc Feb 06 2018 18:48 UTC @Boris-the-Llama Specify range for random numbers. Exclude those that are close to white var hex = Math.floor(Math.random() * (0xEEEEEE - 0x000000)); return "#" + ("000000" + hex.toString(16)).substr(-6); the - 0x000000 part is unnecessary here, but you could replace that with other color offset Gulsvi @gulsvi Feb 06 2018 18:50 UTC That still gives cream colors, light pinks, etc Dany Din @danydin Feb 06 2018 18:50 UTC hey can i put bootstrap icon as rel link? Gulsvi @gulsvi Feb 06 2018 18:51 UTC @danydin Yes, there are instructions for that on the main page of the bootstrap docs <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous"> Dany Din @danydin Feb 06 2018 18:52 UTC @gulsvi i'm talking about rel="icon" mate Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:52 UTC @SweetCodingInc what is this bit doing? i dont really get what it was doing in the first place, i copied from something else i saw. is the EEEEEE bit excluding the white? Gulsvi @gulsvi Feb 06 2018 18:53 UTC @danydin sorry misunderstood - I think rel="icon" is just for favicons. You could use a bootstrap icon, or any icon you want for that. Dany Din @danydin Feb 06 2018 18:54 UTC so i need to download the icon first and then href it? 
Gulsvi @gulsvi Feb 06 2018 18:54 UTC or link to it, yes Look at how they do it on getbootstrap.com: <link rel="apple-touch-icon" href="/assets/img/favicons/apple-touch-icon.png" sizes="180x180"> <link rel="icon" href="/assets/img/favicons/favicon-32x32.png" sizes="32x32" type="image/png"> <link rel="icon" href="/assets/img/favicons/favicon-16x16.png" sizes="16x16" type="image/png"> <link rel="manifest" href="/assets/img/favicons/manifest.json"> <link rel="mask-icon" href="/assets/img/favicons/safari-pinned-tab.svg" color="#563d7c"> <link rel="icon" href="/favicon.ico"> Sweet Coding :) @SweetCodingInc Feb 06 2018 18:55 UTC @Boris-the-Llama color black has hexcode 000000 and white has ffffff Dany Din @danydin Feb 06 2018 18:55 UTC @gulsvi thanks!! how can i find the 'badge' one? CamperBot @camperbot Feb 06 2018 18:55 UTC danydin sends brownie points to @gulsvi :sparkles: :thumbsup: :sparkles: :star2: 2589 | @gulsvi |http://www.freecodecamp.org/gulsvi Sweet Coding :) @SweetCodingInc Feb 06 2018 18:55 UTC to represent hexadecimal number in js you add 0x before the number is if your text color is light, you want dark backgrounds Dany Din @danydin Feb 06 2018 18:56 UTC Sweet Coding :) @SweetCodingInc Feb 06 2018 18:56 UTC here is a nice chart that represents colors with their hexadecimal number Look up this part RGB color codes chart at https://www.rapidtables.com/web/color/RGB_Color.html Gulsvi @gulsvi Feb 06 2018 18:57 UTC @danydin I thought you meant the actual bootstrap icon - the "B" :) Eric Weiss @eweiss17 Feb 06 2018 18:57 UTC why not just loop through a predetermined array of approved colors if what color being displayed is a big deal Dany Din @danydin Feb 06 2018 18:57 UTC check for the one called badge but even in the source it just says that it uses from the class ah lol :DD Gulsvi @gulsvi Feb 06 2018 18:57 UTC That's someone else's custom icons, don't know off the top of my head Dany Din @danydin Feb 06 2018 18:57 UTC i found them if it's interesting you: 
http://glyphicons.com/ Eric Weiss @eweiss17 Feb 06 2018 18:58 UTC google also has icons that are nice https://material.io/icons/ Gulsvi @gulsvi Feb 06 2018 18:58 UTC @danydin You'll need a .png file or a .ico file to do it Those icons are more like fonts than actual image files Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:59 UTC @SweetCodingInc thanks. what does the EEEEE - 00000 do? CamperBot @camperbot Feb 06 2018 18:59 UTC boris-the-llama sends brownie points to @sweetcodinginc :sparkles: :thumbsup: :sparkles: :cookie: 248 | @sweetcodinginc |http://www.freecodecamp.org/sweetcodinginc Dany Din @danydin Feb 06 2018 18:59 UTC @eweiss17 thanks! yes they let you download the free pckage Gulsvi @gulsvi Feb 06 2018 18:59 UTC take a screenshot, crop it in your favorite photo editor and make it into an image. Then host that image somewhere and link to it. Boris-the-Llama @Boris-the-Llama Feb 06 2018 18:59 UTC @eweiss17 what if people get upset there is only 20 colors, and not thousands?! Sweet Coding :) @SweetCodingInc Feb 06 2018 19:00 UTC @Boris-the-Llama it's meaningless in this context. It subtracts 0 from EEE (which in decimal is 3822) Eric Weiss @eweiss17 Feb 06 2018 19:01 UTC I'd rather have 20 colors that look good than thousands that may not Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:02 UTC @SweetCodingInc ok cool Sweet Coding :) @SweetCodingInc Feb 06 2018 19:02 UTC but say you want to generate random color between green and orange you'd do 0xFF3333 - 0x009900 color codes picked up from the chart I shared the link for Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:02 UTC @eweiss17 what 20 colors would you have that look good with white text? 
Eric Weiss @eweiss17 Feb 06 2018 19:04 UTC I don't know, you can find color helpers online for sure Gulsvi @gulsvi Feb 06 2018 19:04 UTC @Boris-the-Llama One of my favorite color combo generators: https://palettable.io Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:04 UTC @eweiss17 i was just being lazy and see if you could give me some hex or rgbs of colors Sweet Coding :) @SweetCodingInc Feb 06 2018 19:05 UTC @gulsvi they're using the random color generator as well :laughing: Gulsvi @gulsvi Feb 06 2018 19:05 UTC I thought they were using an API Sweet Coding :) @SweetCodingInc Feb 06 2018 19:06 UTC @gulsvi API or not, it's still random hexadecimal number generation... Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:06 UTC @gulsvi thanks, ill have a butchers CamperBot @camperbot Feb 06 2018 19:06 UTC boris-the-llama sends brownie points to @gulsvi :sparkles: :thumbsup: :sparkles: api offline Gulsvi @gulsvi Feb 06 2018 19:07 UTC @SweetCodingInc They choose complementary colors though, at least they use to. A little more complicated than choosing a random color. Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:07 UTC how does one emoji? Gulsvi @gulsvi Feb 06 2018 19:08 UTC semi colon: : Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:08 UTC :shipit: is there a dancing emoji? Sweet Coding :) @SweetCodingInc Feb 06 2018 19:08 UTC @gulsvi yes... in range based on what you like $\int_{a}^{b} x^2 dx$ Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:09 UTC well i do like dancing, so i have one? :dancers: oh yes look! Eric Weiss @eweiss17 Feb 06 2018 19:11 UTC nice llol Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:13 UTC does anyone know where i can get more emojis? i would like one of a crocodile wearing a suit, though just a tie would do? 
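A sketch of the predefined-array approach discussed above; these particular hex values are illustrative dark shades (not a palette anyone in the thread supplied) chosen so white text stays readable:

```javascript
// Hand-picked dark backgrounds that contrast well with white text
var palette = [
  "#2c3e50", "#8e44ad", "#c0392b", "#16a085", "#d35400",
  "#34495e", "#27ae60", "#2980b9", "#7f8c8d", "#f39c12"
];

// Pick one at random instead of generating an arbitrary color
function randomPaletteColor() {
  return palette[Math.floor(Math.random() * palette.length)];
}
```

Trading thousands of possible colors for a short curated list guarantees every result looks good, which is usually the better deal for a quote machine background.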
can only find a plain crocodile Gulsvi @gulsvi Feb 06 2018 19:13 UTC not sure what an indefinite integral has to do with color palettes, but I'm curious to know more I believe this is the API they're using: http://www.colourlovers.com/api Sweet Coding :) @SweetCodingInc Feb 06 2018 19:14 UTC @gulsvi I meant to paste that in the other room Eric Weiss @eweiss17 Feb 06 2018 19:20 UTC helping someone with calc homework lol Gulsvi @gulsvi Feb 06 2018 19:22 UTC I failed my last semester of physics because of integrals lol algebra, geometry, trig - no problem. calculus...ugh Eric Weiss @eweiss17 Feb 06 2018 19:24 UTC well yeah, calculus is the most advanced of those Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:29 UTC so for putting good colors in an array, what values do i need? can i have an array of color names, or do i need hexs/rgbs? Gulsvi @gulsvi Feb 06 2018 19:31 UTC @Boris-the-Llama You can use names if you like the color - for specific shades of those colors, you'll need to use hex/rgb Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:32 UTC @gulsvi cool thanks, ill just use some color names CamperBot @camperbot Feb 06 2018 19:32 UTC boris-the-llama sends brownie points to @gulsvi :sparkles: :thumbsup: :sparkles: api offline Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:32 UTC why is the api offline? does that mean no cookies for people? :cookie: Gulsvi @gulsvi Feb 06 2018 19:33 UTC cbot status CamperBot @camperbot Feb 06 2018 19:33 UTC All bot systems are go! botVersion: 0.0.12 env: prod botname: camperbot Gulsvi @gulsvi Feb 06 2018 19:33 UTC thanks @gulsvi! CamperBot @camperbot Feb 06 2018 19:33 UTC sorry gulsvi, you can't send brownie points to yourself! 
:sparkles: :sparkles:

Gulsvi @gulsvi Feb 06 2018 19:33 UTC
allyourbase

CamperBot @camperbot Feb 06 2018 19:33 UTC

Gulsvi @gulsvi Feb 06 2018 19:34 UTC
Seems to be working, not sure what's up
Oh, I know, it's because you already gave me cookies - there's a waiting period between cookie giving

Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:35 UTC
ok @gulsvi . did you know you have followers?

Gulsvi @gulsvi Feb 06 2018 19:35 UTC
On Github? I have 0 repositories, don't know what people are following :p
If you need a follower though, I'm happy to oblige with #teamfollowback

Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:37 UTC
but what do followers do? i dont know if i could live up to their expectations to do stuff

Gulsvi @gulsvi Feb 06 2018 19:37 UTC
I'm not sure haha
I don't follow anyone apparently, but thanks for pointing that out. Had never looked

Eric Weiss @eweiss17 Feb 06 2018 19:39 UTC
it just tells you when they make a commit when you follow someone
or create a new repo, pretty much anything they do

Markus Kiili @Masd925 Feb 06 2018 19:40 UTC
@Boris-the-Llama Don't worry. I have 49 and don't know what they are or do.

Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:40 UTC
lets hope they dont start stalking, i heard about an obsessed stalker on the news

Gulsvi @gulsvi Feb 06 2018 19:41 UTC
wow, what are the odds. 49 followers 0 following just like me

Markus Kiili @Masd925 Feb 06 2018 19:41 UTC
@gulsvi :sparkles:

Gulsvi @gulsvi Feb 06 2018 19:41 UTC
at least your followers have something to look at @Masd925 :laughing:

Fernando @lestairon Feb 06 2018 19:42 UTC
Is codepen bugged?

Gulsvi @gulsvi Feb 06 2018 19:42 UTC
not for me

Eric Weiss @eweiss17 Feb 06 2018 19:42 UTC
yep the first step to stalking is following on github

Fernando @lestairon Feb 06 2018 19:42 UTC
My pen looks so different
I didn't even changed anything
wtf

Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:42 UTC
@lestairon bugged? like someone listening in on a microphone?

Eric Weiss @eweiss17 Feb 06 2018 19:43 UTC
could have been an update or something

Gulsvi @gulsvi Feb 06 2018 19:43 UTC
Or bugged, like you copy/pasted everything from your desktop into codepen and it doesn't look the same?

Fernando @lestairon Feb 06 2018 19:44 UTC
No I did everything in codepen
Left for 9 days
And now it looks so bad

Eric Weiss @eweiss17 Feb 06 2018 19:45 UTC
link it

Fernando @lestairon Feb 06 2018 19:46 UTC

Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:49 UTC
what has changed fernando?

cjlynch12 @cjlynch12 Feb 06 2018 19:51 UTC
looks like just some CSS issues @lestairon, api is working correctly

Fernando @lestairon Feb 06 2018 19:51 UTC
The button doesn't look the same and the temperature is off
Yeah

Gulsvi @gulsvi Feb 06 2018 19:52 UTC
@lestairon Add bootstrap to your CSS settings
Remove the weather API from your JS Settings
Then everything looks fine - not sure why those settings got changed though. Very strange.

Eric Weiss @eweiss17 Feb 06 2018 19:54 UTC
cool purple background

Fernando @lestairon Feb 06 2018 19:56 UTC
I didn't do it, but yeah, looks pretty cool ^^
@gulsvi Thanks, that fixed it

CamperBot @camperbot Feb 06 2018 19:56 UTC
lestairon sends brownie points to @gulsvi :sparkles: :thumbsup: :sparkles:
:star2: 2590 | @gulsvi | http://www.freecodecamp.org/gulsvi

Gulsvi @gulsvi Feb 06 2018 19:56 UTC
Cool, happy to help

Boris-the-Llama @Boris-the-Llama Feb 06 2018 19:57 UTC
@lestairon who did do it? is that a pokemon in your avatar?

Gulsvi @gulsvi Feb 06 2018 19:57 UTC
I guess the background is from there ^

Fernando @lestairon Feb 06 2018 19:58 UTC

Gulsvi @gulsvi Feb 06 2018 19:59 UTC
Very cool, another one! :)

Fernando @lestairon Feb 06 2018 19:59 UTC
@Boris-the-Llama Yeah, it is a Pokemon

Eric Weiss @eweiss17 Feb 06 2018 20:01 UTC
all my project backgrounds are gray, maybe i'll use those gradients instead

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:01 UTC
who would like to have a butchers at my random quote machine? nobody?
:cry:

Fernando @lestairon Feb 06 2018 20:04 UTC
Let me see

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:05 UTC
here you go!!!

Gulsvi @gulsvi Feb 06 2018 20:06 UTC
Random comma gets added to tweets: We must become the change we want to see. , ~ Mahatma Gandhi
The twitter icon color is hard to see - I suggest using the official twitter blue color

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:09 UTC
i thought i was using the official twitter color?

Gulsvi @gulsvi Feb 06 2018 20:09 UTC
No, you're using a really, really light blue
I can barely see it on my monitor, maybe easier to see on other monitors

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:10 UTC
oh, i thought that was the proper color, ill change it now

Eric Weiss @eweiss17 Feb 06 2018 20:10 UTC
hit those buttons with some cursor: pointer, you know what i mean

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:10 UTC
and i got rid of the quote on the tweet
@eweiss17 ok thanks, will do

CamperBot @camperbot Feb 06 2018 20:11 UTC
boris-the-llama sends brownie points to @eweiss17 :sparkles: :thumbsup: :sparkles:
:cookie: 606 | @eweiss17 | http://www.freecodecamp.org/eweiss17

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:11 UTC
@gulsvi can i give u cookie yet?

Matej Bošnjak @mbosnjak01 Feb 06 2018 20:11 UTC
@Boris-the-Llama :+1:

Eric Weiss @eweiss17 Feb 06 2018 20:11 UTC
is your color transitioning between each or just directly changing

Gulsvi @gulsvi Feb 06 2018 20:11 UTC
@Boris-the-Llama Also, this JS is confusing:
if (!author == "") { $(".author").html("~ " + author); } else { $(".author").html("~ " + "Unknown"); }
Do this instead:
if (author !== "") { $(".author").html("~ " + author); } else { $(".author").html("~ " + "Unknown"); }

Fernando @lestairon Feb 06 2018 20:12 UTC
@Boris-the-Llama There's a problem with your code
Try to share a quote with ";" and see what happens

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:12 UTC
:crying_cat_face:
@lestairon what is it?

Eric Weiss @eweiss17 Feb 06 2018 20:13 UTC
You could just omit that code completely if no author, up to you i guess

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:13 UTC
oh no! why it do that?

Fernando @lestairon Feb 06 2018 20:13 UTC
Change
window.open("https://twitter.com/intent/tweet?text="+ quotes + ", ~ " + author
for
window.open("https://twitter.com/intent/tweet?text="+ encodeURIComponent(quotes) + ", ~ " + author

Gulsvi @gulsvi Feb 06 2018 20:14 UTC
I would just do this:
author = data.quoteAuthor || "Unknown";
$(".author").html("~ " + author)

Fernando @lestairon Feb 06 2018 20:16 UTC
I had the same problem with semicolons

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:18 UTC
ok thanks ppl, @lestairon @gulsvi @eweiss17

CamperBot @camperbot Feb 06 2018 20:18 UTC
boris-the-llama sends brownie points to @lestairon and @gulsvi and @eweiss17 :sparkles: :thumbsup: :sparkles:
api offline
:star2: 2591 | @gulsvi | http://www.freecodecamp.org/gulsvi
:cookie: 267 | @lestairon | http://www.freecodecamp.org/lestairon

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:18 UTC
what about colors? was it a visual delight?

Gulsvi @gulsvi Feb 06 2018 20:22 UTC
Delightful for sure

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:23 UTC
@gulsvi im glad it was pleasing on the eye

Onome Sotu @onomesotu Feb 06 2018 20:28 UTC
RegExp is so fun and come to think I was so freaking scared of it all these time :)

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:31 UTC
how would one make sure that the background changes each time, ie that the same background is not picked by the random method
im guessing an if statement, but when i do if colors[randomCol] == colors[randomCol] it breaks the code?
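The two fixes suggested above (wrapping the quote in encodeURIComponent, and falling back to "Unknown" when the author is empty) can be sketched together in plain JavaScript. The function and variable names here are made up for illustration; the real pen builds the URL inside a click handler:

```javascript
// Build a tweet intent URL that survives ";", "&", "#" and spaces.
// quoteText / quoteAuthor stand in for whatever the quote API returns.
function buildTweetUrl(quoteText, quoteAuthor) {
  // Empty author? Fall back to "Unknown", as suggested above.
  var author = quoteAuthor || "Unknown";
  // encodeURIComponent percent-escapes characters that would otherwise
  // cut the quote short inside the query string.
  return (
    "https://twitter.com/intent/tweet?text=" +
    encodeURIComponent(quoteText + " ~ " + author)
  );
}

console.log(buildTweetUrl("To be; or not to be", ""));
```

Passing the result to window.open then shares the whole quote, semicolons included, and dropping the literal ", ~ " separator also gets rid of the stray comma mentioned earlier.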
Fernando @lestairon Feb 06 2018 20:42 UTC
Hm
Maybe having a variable that stores the previous color, if the new random color is the same one, run the random method again
Idk, that's what i'd do

Boris-the-Llama @Boris-the-Llama Feb 06 2018 20:45 UTC
how would i go about storing the previous color in a variable?

Feb 06 2018 21:05 UTC
@Boris-the-Llama - without seeing the code, I would say that you declare a variable somewhere that will be accessible to your code, but only changed by the code that is selecting a new color. You can call it prevColor and store the index of the color that was last used with randomCol to set the color. Then you would need a loop that would generate a random number for the new randomCol and compare it to the prevColor and keep looping until you selected something different. Perhaps put a safety net counter in there to prevent infinite loops? But that would be the idea that I would try.

Gulsvi @gulsvi Feb 06 2018 21:23 UTC
@Boris-the-Llama Use RGB colors instead of color names and do:
if ($("body").css("backgroundColor") == colors[randomCol]) { getRandomColor(); } else { return colors[randomCol]; }
Ask jQuery what the current background color is - you'll get an RGB value back. If it's the same one, call your function again and get a new random color to compare with again, else return the new color.
@onomesotu I have never heard anyone say that Regex is fun :laughing:

Fernando @lestairon Feb 06 2018 22:08 UTC
Same

disjfa @disjfa Feb 06 2018 22:29 UTC
Bbbbbbbut regex is awesome

Fernando @lestairon Feb 06 2018 22:38 UTC
I always have a bad time with regex haha
Maybe it's because i'm learning

Fernando @lestairon Feb 06 2018 22:46 UTC
How can i use icons from an api?
I'm trying to use these icons on my page, but i don't know how exactly http://erikflowers.github.io/weather-icons/
I tried using <ul> <li> </li> </ul>
But i think i'm doing something wrong

Lee @LeeConnelly12 Feb 06 2018 22:52 UTC
@lestairon The page tells you to use the icons like this <i class="wi wi-night-sleet"></i>

Feb 06 2018 23:05 UTC
@lestairon - it's a great idea to use some of the sites that they recommend in the lessons, such as regex101.com or regexr.com or regexone.com - I don't know if one is better than another, but they walk you through regular expression examples and have a sandbox environment that you can use with the lessons or when you are developing your own expressions. Neat stuff!

RobertGlick @RobertGlick Feb 06 2018 23:17 UTC
@lestairon it needs to be <ul> not <u1> l is a letter not 1 the number
@lestairon for un listed

Gulsvi @gulsvi Feb 06 2018 23:21 UTC
@RobertGlick ul stands for Unordered List :)
<ol> = ordered list (1,2,3)

RobertGlick @RobertGlick Feb 06 2018 23:22 UTC
@gulsvi thanks :smile:

CamperBot @camperbot Feb 06 2018 23:22 UTC
robertglick sends brownie points to @gulsvi :sparkles: :thumbsup: :sparkles:
:star2: 2592 | @gulsvi | http://www.freecodecamp.org/gulsvi

Gulsvi @gulsvi Feb 06 2018 23:22 UTC
But, that's just the font, it is a ul the way they have it above, not u1

RobertGlick @RobertGlick Feb 06 2018 23:23 UTC
ah

Gulsvi @gulsvi Feb 06 2018 23:23 UTC
I had to copy/paste to be sure lol
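Circling back to the earlier question about making sure the random background color never repeats: the "store the previous pick and re-roll" idea suggested in the chat can be written without jQuery, using a loop instead of a recursive call (same idea, different shape). The color list is only an example:

```javascript
// Keep re-rolling until the pick differs from the previous one.
var colors = ["rgb(255, 99, 71)", "rgb(70, 130, 180)", "rgb(60, 179, 113)"];
var prevColor = null; // remembers the last color we used

function nextColor() {
  var pick;
  do {
    pick = colors[Math.floor(Math.random() * colors.length)];
  } while (colors.length > 1 && pick === prevColor);
  prevColor = pick;
  return pick;
}

// e.g. $("body").css("background-color", nextColor());
```

The colors.length > 1 guard is the safety net mentioned above: with only one color in the array, re-rolling forever would never terminate.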
http://mathoverflow.net/revisions/5383/list
My favorite example is regular polytopes. The number of regular polytopes is almost monotone decreasing in the dimension: countably many in $\mathbb{R}^2$, five in $\mathbb{R}^3$, and three in $\mathbb{R}^n$ for every $n>4$. But in $\mathbb{R}^4$ we get six (the 5-cell, the tesseract, the 16-cell, the 24-cell, the 120-cell, and the 600-cell), which is kind of weird.
https://canonijnetworktool.cc/canon-ij-network-tool-update/
# Canon IJ Network Tool Update

## Canon IJ Network Tool Update

The Canon IJ Network Tool is a utility that lets you display and change the network settings of a Canon IJ printer connected over a LAN. It is installed when the machine is set up, and with it you can print from (and scan to) a Canon IJ Network printer that is connected through a network.

The package is IJ Network Driver Ver. 2.5.7 / Network Tool Ver. 2.5.7 for Windows, suitable for Windows 10 / 8.1 / 8 / 7 / Vista / XP / 2000 (32- and 64-bit). This file is the LAN driver for Canon IJ Network; once it is set up, you can print from a Canon IJ Network printer connected through a network.

Using the tool requires first connecting your printer with a USB cable and then locating it with the tool, after which you can install, view or configure the printer's network settings.

To scan over the network, download and run the IJ Scan Utility on a Windows computer: go to the Canon Support page, download the utility, then set up the network environment from IJ Scan Utility and adjust the network scan settings.

Canon also offers a selection of optional PIXMA Printer Software to enhance the printing experience; details of each software item and links to download it are provided on the Canon site.
http://mathematica.stackexchange.com/questions/56543/how-to-draw-a-normalized-tangent-arrow
# How to draw a normalized tangent arrow [closed]

I want to draw a normalized tangent arrow, so I use the Normalize command as follows:

tangent = Table[{{t, Sin[t]}, {t, Sin[t]} + Normalize@{1, Cos[t]}}, {t, -π, π, π/2}];
Plot[Sin[x], {x, -2 π, 2 π}, PlotRange -> 2, Epilog -> {Red, Arrowheads[0.02], Arrow /@ tangent}]

and I get this plot:

Seems good, but if you take a close look at the lengths of the arrows, you'll see that they are not normalized at all. I've tried the Show and Graphics commands instead of Epilog, but got the same plot. Can someone tell me what I missed here?

(Closed as off-topic by m_goldberg, Jens, ubpdqn, Yves Klett and RunnyKine on Aug 4 '14 as arising from a simple mistake; locked by Mr.Wizard on Jul 25 '15.)

Your plot is distorted by the default aspect ratio of 1/GoldenRatio. Add AspectRatio -> Automatic to your plot options – m_goldberg Aug 4 '14 at 3:56

The issue here is aspect ratio. Using the following slight adaptation of your code:

tangent = Table[{{t, Sin[t]}, {t, Sin[t]} + Normalize@{1, Cos[t]}}, {t, -\[Pi], \[Pi], \[Pi]/2}];
Plot[Sin[x], {x, -2 \[Pi], 2 \[Pi]}, PlotRange -> 2,
 Epilog -> {{Red, Arrowheads[0.02], Arrow /@ tangent},
   Circle[{#, Sin[#]}] & /@ Range[-\[Pi], \[Pi], \[Pi]/2]}]

the unit circles around the anchor points come out visibly squashed. However, specifying the aspect ratio:

tangent = Table[{{t, Sin[t]}, {t, Sin[t]} + Normalize@{1, Cos[t]}}, {t, -\[Pi], \[Pi], \[Pi]/2}];
Plot[Sin[x], {x, -2 \[Pi], 2 \[Pi]}, PlotRange -> 2,
 Epilog -> {{Red, Arrowheads[0.02], Arrow /@ tangent},
   Circle[{#, Sin[#]}] & /@ Range[-\[Pi], \[Pi], \[Pi]/2]},
 AspectRatio -> Automatic]

resolves matters.

Thanks a lot. I have another small question. Since many of the default values of options in Mathematica are Automatic, why did WRI choose the default value of AspectRatio to be 1/GoldenRatio? I mean, why not choose Automatic, and then in the algorithms (or in the front end or something like that) have Automatic use 1/GoldenRatio as the default value? That seems more coherent to me. – luyuwuli Aug 4 '14 at 4:30

@luyuwuli I am afraid I cannot answer that... as humans we make so many arbitrary decisions and adopt arbitrary conventions: August having 31 days, Julian v Gregorian calendar, tau v pi, as well as the countless debates in everyday life... I am sure there were reasons, like the appeal of the golden ratio to human aesthetics etc – ubpdqn Aug 4 '14 at 4:52

Hahaha... Yes, I agree. I ask this because I've suffered a lot from the AspectRatio issue. – luyuwuli Aug 4 '14 at 5:02
https://math.stackexchange.com/questions/473190/unbiased-estimator-for-geometric-distribution
# Unbiased estimator for geometric distribution

Let $X_1,\ldots,X_n$ be a sample from a geometric distribution with parameter $p$. Find the MLE. Is it unbiased?

The pmf of each observation is $p(1-p)^{x_i-1}$, so the likelihood function is $$L(p)=\displaystyle\prod_{i=1}^np(1-p)^{X_i-1}.$$ Taking logs of both sides gives $$l(p)=\ln(L(p))=n\ln(p)+\sum_{i=1}^n(X_i-1)\cdot \ln(1-p).$$ Differentiating and setting $l'(p)=\frac{n}{p}-\frac{\sum_{i=1}^n(X_i-1)}{1-p}=0$, I found the maximum at $p_m=\dfrac{n}{n+\sum_{i=1}^n(X_i-1)}=\dfrac{n}{\sum_{i=1}^n X_i}$. Now I need to calculate $E[p_m]$: $$E[p_m]=nE\left[\frac{1}{\sum X_i}\right]$$ How can I proceed?

• First note that your expression simplifies, which you can see if you put parentheses where needed: $\sum_i (X_i - 1)$. This is an exponential family, and you will find that the MLE is the same as the method of moments estimator $\hat{p} = 1/\bar{X}$ where $\bar{X} = \frac1n \sum_i X_i$. – passerby51 Aug 22 '13 at 0:02
• Now, consider the case $n=1$. Is it true that $E[\frac{1}{X}] = p$? – passerby51 Aug 22 '13 at 0:04
• Why would $E[\frac 1 X]=p$? – user65985 Aug 22 '13 at 0:06
• For $n=1$, the estimator is $\hat{p} = 1/X_1$. Being unbiased means $E[\hat{p}] = p$. I should have said that as: "Is $E[\frac{1}{X_1}] = p$ true, in which case the estimator is unbiased?" – passerby51 Aug 22 '13 at 0:10
• But as far as I know, when we talk about a discrete variable ($X$ can take values $x_1,\ldots,x_k$), $E[X]=\sum_{i=1}^k x_i\,P(X=x_i)$, but here we have various $x$s. I still don't understand how we can answer the question you asked. – user65985 Aug 22 '13 at 0:17

Here is one way to answer this. Consider the case $n = 1$. The estimator in this case is $\hat{p} = 1/X_{1}$. Let us try to see what its expectation is: $$E[\hat{p}] = E\Big[ \frac1{X_1}\Big] = \sum_{k=1}^\infty \frac{1}{k} P(X_1 = k) = \sum_{k=1}^\infty \frac1k p(1-p)^{k-1}$$ Hint: Note that for $\alpha \in (-1,1)$, we have $\sum_{k=1}^\infty \frac{\alpha^k}{k} = - \log(1-\alpha)$.
You can obtain the exact expression, or use the following simple bound: $$E(\hat{p}) = p + \sum_{k=2}^{\infty} \frac{1}{k} p (1-p)^{k-1} > p$$ for $p \in (0,1)$, since the sum above is strictly positive.

• And then $E[\hat{p}]=p\cdot\sum\frac{1}{k}(1-p)^{k-1}< p\cdot\sum(1-p)^{k-1}=\frac{p}{1-(1-p)}=p$ proves the estimator is unbiased? – user65985 Aug 22 '13 at 0:51
• Sorry, I didn't check your bound. Your bound does not seem correct. In general bounding is easier, but in this case the bound you have is $p \sum (1-p)^{k-1} = 1$, which is not enough. – passerby51 Aug 22 '13 at 1:15
• Fine, another try: $E[\hat{p}]=\frac p {1-p} \sum\frac{(1-p)^k}{k}=-\frac{p\cdot \log(p)}{1-p}$, which is less than $p$ (since the log for small values of $p$ tends to $-\infty$). Is this fine? – user65985 Aug 22 '13 at 1:21
• The expression $- p \log p / (1-p)$ seems correct to me. It is enough to say that this is not equal to $p$ for all $p \in (0,1)$, which is obvious. However, it seems that in fact $-p \log(p) /(1-p)$ is $> p$ over $(0,1)$. Here is a link to a plot of $E[\hat{p}] / p$ suggesting this: wolfr.am/14kDWU9 – passerby51 Aug 22 '13 at 1:26
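Carrying the hint through for the $n=1$ case gives the exact value (this is the computation sketched in the comments):

$$E[\hat{p}] = \sum_{k=1}^\infty \frac{1}{k}\, p(1-p)^{k-1} = \frac{p}{1-p}\sum_{k=1}^\infty \frac{(1-p)^k}{k} = -\frac{p}{1-p}\log\bigl(1-(1-p)\bigr) = -\frac{p\log p}{1-p}.$$

Since $-\log p > 1-p$ for all $p \in (0,1)$ (the function $-\log p - (1-p)$ is decreasing on $(0,1)$ and vanishes at $p=1$), this expectation is strictly greater than $p$, so the estimator $\hat{p} = 1/X_1$ is biased, in fact biased upward.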
https://math.stackexchange.com/questions/3539455/expected-total-discounted-reward
# Expected Total Discounted Reward

Let the random variable $R_k$ denote the revenue received in the $k$-th period, and suppose that $R_1, R_2, \ldots$ are independent and identically distributed. The quantity $$Q = \sum_{k=1}^{\infty}\beta^{k-1}R_k$$ denotes the total discounted revenue with discount factor $\beta$. Let $T$ denote a geometric random variable with success probability $1-\beta$, taking values $1, 2, \ldots$. That is, $$P(T = k) = \beta^{k-1} (1 - \beta), \qquad k = 1, 2, \ldots.$$ We further assume that $T, R_1, R_2, \ldots$ are independent.

(3 marks) Show that the expected total discounted revenue is equal to the expected total (undiscounted) reward received by time $T$. In other words, show that $$E\Big(\sum_{k=1}^{\infty}\beta^{k-1} R_k\Big)=E\Big(\sum_{k=1}^T R_k\Big)$$

I am unsure where to start or how to identify what to do.

• If I'm reading this correctly, your setup is essentially the same as that for proving the Wald equation (as done in renewal theory). I.e. first consider the non-negative case $R_k':= \big \vert R_k\big \vert$, so $E(\sum_{k=1}^{\infty}β^ {k−1} R_k') = \sum_{k=1}^{\infty}β^ {k−1} E[R_k'] = E[R_1'] \cdot \sum_{k=1}^{\infty}β^ {k−1}=E[R_1'] E[T]$, where the interchange of limit and expectation is justified by monotone convergence. Then re-run the argument, justifying the interchange by dominated convergence. That's the nice approach. There are uglier ones, but it depends on what you know. – user8675309 Feb 8 at 21:01

First let the mean of $R_k$ be $\mu$. Then you have to identify that the RHS is the expectation of a random sum, which evaluates to the product of the mean of $R_k$ and the mean of $T$, so the RHS $= E(T)E(R_k)$. Since $T$ is a geometric RV with $E(T)=1/(1-\beta)$, the RHS is ultimately $\mu/(1-\beta)$.

Then, to prove that the LHS is also equal to $\mu/(1-\beta)$, bring the expectation inside onto the $R_k$ and sum the convergent series on the left using the geometric series formula.
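Written out, both sides reduce to the same quantity. With $\mu = E[R_1]$, the left side interchanges expectation and infinite sum (justified by monotone or dominated convergence, as noted in the comment), while the right side is a random sum handled by Wald's identity together with $E[T] = \frac{1}{1-\beta}$ for the geometric variable:

$$E\left[\sum_{k=1}^{\infty}\beta^{k-1}R_k\right] = \sum_{k=1}^{\infty}\beta^{k-1}E[R_k] = \mu\sum_{k=1}^{\infty}\beta^{k-1} = \frac{\mu}{1-\beta}, \qquad E\left[\sum_{k=1}^{T}R_k\right] = E[T]\,E[R_1] = \frac{\mu}{1-\beta}.$$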
https://learn.careers360.com/school/question-give-reasons-for-the-following-a-transition-metals-show-variable-oxidation-statesb-eo-value-for-zn2-zn-is-negative-while-that-of-cu2-cu-is-positivec-higher-oxidation-state-of-mn-with-fluorine-is-4-whereas-with-oxygen-is-7-98548/
# Give reasons for the following: (a) Transition metals show variable oxidation states. (b) Eo value for (Zn2+/Zn) is negative while that of (Cu2+/Cu) is positive. (c) The highest oxidation state of Mn with fluorine is +4, whereas with oxygen it is +7.

(a) Transition metals show variable oxidation states because their d-orbitals are incompletely filled, and the (n−1)d and ns orbitals are close enough in energy that electrons from both can take part in bonding.

(b) Eo(Zn2+/Zn) is negative because the conversion $Zn\rightarrow Zn^{2+}$ gives the completely filled, very stable 3d10 configuration, whereas the conversion of Cu to Cu2+ does not give any extra stability; hence its Eo value is positive.

(c) With oxygen, Mn can form pπ-dπ multiple bonds using the 2p orbitals of oxygen and the 3d orbitals of Mn, so it can reach +7 (as in Mn2O7). Fluorine does not form such multiple bonds with Mn, only single bonds, so the highest oxidation state of Mn with F is +4 (as in MnF4).
https://www.broadinstitute.org/gatk/guide/tagged?tag=rnaseq&tab=forum
# Tagged with #rnaseq

2 documentation articles | 8 announcements | 23 forum discussions

Created 2014-04-16 22:09:49 | Updated 2015-05-16 06:58:15 | Tags: best-practices rnaseq

This is our recommended workflow for calling variants in RNAseq data from single samples, in which all steps are performed per-sample. In the future we will provide cohort analysis recommendations, but these are not yet available.

The workflow is divided into three main sections that are meant to be performed sequentially:

• Pre-processing: from raw RNAseq sequence reads to analysis-ready reads (BAM files)
• Variant discovery: from reads (BAM files) to variants (VCF files)
• Refinement and evaluation: genotype refinement, functional annotation and callset QC

Compared to the DNAseq Best Practices, the key adaptations for calling variants in RNAseq focus on handling splice junctions correctly, which involves specific mapping and pre-processing procedures, as well as some new functionality in the HaplotypeCaller, which are highlighted in the figure below.

### Pre-Processing

The data generated by the sequencers are put through some pre-processing steps to make them suitable for variant calling analysis. The steps involved are: Mapping and Marking Duplicates; Split'N'Trim; Local Realignment Around Indels (optional); and Base Quality Score Recalibration (BQSR), performed in that order.

#### Mapping and Marking Duplicates

The sequence reads are first mapped to the reference using the STAR aligner (2-pass protocol) to produce a file in SAM/BAM format sorted by coordinate. The next step is to mark duplicates. The rationale here is that during the sequencing process, the same DNA molecules can be sequenced several times. The resulting duplicate reads are not informative and should not be counted as additional evidence for or against a putative variant. The duplicate marking process identifies these reads as such so that the GATK tools know to ignore them.
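The grouping idea behind duplicate marking can be sketched in a few lines of Python. This is only an illustration of the concept, not what Picard MarkDuplicates actually does: the real tool also considers mate coordinates, read orientation and library when grouping, and operates on BAM records rather than dicts.

```python
from collections import defaultdict

def mark_duplicates(reads):
    """Group reads by (chromosome, alignment start, strand) and mark all
    but the highest-base-quality read in each group as duplicates.
    Each read here is a dict with keys: name, chrom, pos, strand, baseq_sum.
    Returns the set of read names marked as duplicates."""
    groups = defaultdict(list)
    for read in reads:
        groups[(read["chrom"], read["pos"], read["strand"])].append(read)
    duplicates = set()
    for group in groups.values():
        # Keep the read with the highest summed base quality, mark the rest.
        group.sort(key=lambda r: r["baseq_sum"], reverse=True)
        for dup in group[1:]:
            duplicates.add(dup["name"])
    return duplicates

reads = [
    {"name": "r1", "chrom": "chr1", "pos": 100, "strand": "+", "baseq_sum": 3000},
    {"name": "r2", "chrom": "chr1", "pos": 100, "strand": "+", "baseq_sum": 2500},
    {"name": "r3", "chrom": "chr1", "pos": 250, "strand": "-", "baseq_sum": 2800},
]
print(mark_duplicates(reads))  # {'r2'}
```

Here r1 and r2 start at the same position on the same strand, so the lower-quality r2 is marked; in a real BAM the flag is set on the record rather than the read being removed, which is why downstream GATK tools can still see (and ignore) duplicates.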
#### Split'N'Trim

Then an RNAseq-specific step is applied: reads with N operators in their CIGAR strings (which denote the presence of a splice junction) are split into component reads and trimmed to remove any overhangs into splice junctions, which reduces the occurrence of artifacts. At this step, we also reassign mapping qualities from 255 (assigned by STAR) to 60, which is more meaningful for GATK tools.

#### Realignment Around Indels

Next, local realignment is performed around indels, because the algorithms that are used in the initial mapping step tend to produce various types of artifacts. For example, reads that align on the edges of indels often get mapped with mismatching bases that might look like evidence for SNPs, but are actually mapping artifacts. The realignment process identifies the most consistent placement of the reads relative to the indel in order to clean up these artifacts. It occurs in two steps: first the program identifies intervals that need to be realigned, then in the second step it determines the optimal consensus sequence and performs the actual realignment of reads. This step is considered optional for RNAseq.

#### Base Quality Score Recalibration

Finally, base quality scores are recalibrated, because the variant calling algorithms rely heavily on the quality scores assigned to the individual base calls in each sequence read. These scores are per-base estimates of error emitted by the sequencing machines. Unfortunately the scores produced by the machines are subject to various sources of systematic error, leading to over- or under-estimated base quality scores in the data. Base quality score recalibration is a process in which we apply machine learning to model these errors empirically and adjust the quality scores accordingly. This yields more accurate base qualities, which in turn improves the accuracy of the variant calls.
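As a toy illustration of what "modeling errors empirically" means here, the sketch below derives a recalibrated quality from observed mismatch counts in a single covariate bin. This is a deliberate simplification: the real BQSR models several covariates jointly (reported quality, machine cycle, dinucleotide context, ...) and masks known variant sites so that true variation is not counted as sequencing error.

```python
import math

def empirical_quality(mismatches, observations):
    """Phred-scaled empirical quality for one covariate bin: derived
    from the actual mismatch rate against the reference, rather than
    the quality the machine reported. A pseudocount keeps a bin with
    zero observed errors finite."""
    error_rate = (mismatches + 1) / (observations + 2)
    return round(-10 * math.log10(error_rate))

# A bin reported as Q30 (expected error rate 0.001) that actually shows
# 1 mismatch in 100 observed bases gets recalibrated down to Q17.
print(empirical_quality(1, 100))  # 17
```

The direction of the adjustment is the point: if a machine systematically over-reports quality for some context, the empirical rate pulls those scores down, and vice versa.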
The base recalibration process involves two key steps: first the program builds a model of covariation based on the data and a set of known variants, then it adjusts the base quality scores in the data based on the model.

### Variant Discovery

Once the data has been pre-processed as described above, it is put through the variant discovery process, i.e. the identification of sites where the data displays variation relative to the reference genome, and calculation of genotypes for each sample at that site. Because some of the variation observed is caused by mapping and sequencing artifacts, the greatest challenge here is to balance the need for sensitivity (to minimize false negatives, i.e. failing to identify real variants) vs. specificity (to minimize false positives, i.e. failing to reject artifacts). It is very difficult to reconcile these objectives in a single step, so instead the variant discovery process is decomposed into separate steps: variant calling (performed per-sample) and variant filtering (also performed per-sample). The first step is designed to maximize sensitivity, while the filtering step aims to deliver a level of specificity that can be customized for each project.

Our current recommendation for RNAseq is to run all these steps per-sample. At the moment, we do not recommend applying the GVCF-based workflow to RNAseq data because although there is no obvious obstacle to doing so, we have not validated that configuration. Therefore, we cannot guarantee the quality of results that this would produce.

#### Per-Sample Variant Calling

We perform variant calling by running the HaplotypeCaller on each sample BAM file (if a sample's data is spread over more than one BAM, then pass them all in together) to create single-sample VCFs containing raw SNP and indel calls.

#### Per-Sample Variant Filtering

For RNAseq, it is not appropriate to apply variant recalibration in its present form.
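With variant recalibration off the table, filtering reduces to simple per-site predicates over annotation values. The sketch below illustrates the FS > 30.0 and QD < 2.0 thresholds and the -window 35 -cluster 3 SNP-cluster filter from these recommendations; the cluster logic is my reading of those arguments (at least 3 SNPs spanning at most 35 bases), not the GATK implementation.

```python
def snp_cluster_mask(positions, window=35, cluster=3):
    """Flag SNPs that fall in clusters of at least `cluster` calls
    spanning at most `window` bases. `positions` must be sorted;
    returns a parallel list of booleans."""
    flagged = [False] * len(positions)
    for i in range(len(positions) - cluster + 1):
        if positions[i + cluster - 1] - positions[i] <= window:
            for j in range(i, i + cluster):
                flagged[j] = True
    return flagged

def passes_hard_filters(fs, qd):
    """Site-level hard filters: fail on strand bias (FS > 30.0)
    or low quality-by-depth (QD < 2.0)."""
    return fs <= 30.0 and qd >= 2.0

# Three SNPs packed into 20 bases get flagged; the distant one survives.
print(snp_cluster_mask([100, 110, 120, 500]))  # [True, True, True, False]
print(passes_hard_filters(fs=5.2, qd=12.0))    # True
```

In the actual pipeline these predicates are evaluated by VariantFiltration, which annotates failing records in the FILTER column rather than removing them.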
Instead, we provide hard-filtering recommendations to filter variants based on specific annotation value thresholds. This produces a VCF of calls annotated with filtering information that can then be used in downstream analyses.

### Refinement and evaluation

In this last section, we perform some refinement steps on the genotype calls (GQ estimation and transmission phasing), add functional annotations if desired, and do some quality evaluation by comparing the callset to known resources. None of these steps are absolutely required, and the workflow may need to be adapted quite a bit to each project's requirements.

Important note on GATK versions: The [Best Practices](http://www.broadinstitute.org/gatk/guide/best-practices) have been updated for GATK version 3. If you are running an older version, you should seriously consider upgrading. For more details about what has changed in each version, please see the [Version History](http://www.broadinstitute.org/gatk/guide/version-history) section. If you cannot upgrade your version of GATK for any reason, please look up the corresponding version of the GuideBook PDF (also in the [Version History](http://www.broadinstitute.org/gatk/guide/version-history) section) to ensure that you are using the appropriate recommendations for your version.

Created 2014-03-06 07:15:51 | Updated 2015-12-07 11:08:30 | Tags: best-practices rnaseq

### Overview

This document describes the details of the GATK Best Practices workflow for SNP and indel calling on RNAseq data. Please note that any command lines are only given as examples of how the tools can be run. You should always make sure you understand what is being done at each step and whether the values are appropriate for your data. To that effect, you can find more guidance here.
In brief, the key modifications made to the DNAseq Best Practices focus on handling splice junctions correctly, which involves specific mapping and pre-processing procedures, as well as some new functionality in the HaplotypeCaller. Here is a detailed overview:

### Caveats

Please keep in mind that our DNA-focused Best Practices were developed over several years of thorough experimentation, and are continuously updated as new observations come to light and the analysis methods improve. We have been working with RNAseq for a somewhat shorter time, so there are many aspects that we still need to examine in more detail before we can be fully confident that we are doing the best possible thing.

We know that the current recommended pipeline is producing both false positive (wrong variant calls) and false negative (missed variants) errors. While some of those errors are inevitable in any pipeline, others are errors that we can and will address in future versions of the pipeline. A few examples of such errors are given in this article, as well as our ideas for fixing them in the future.

We will be improving these recommendations progressively as we go, and we hope that the research community will help us by providing feedback of their experiences applying our recommendations to their data.

### The workflow

#### 1. Mapping to the reference

The first major difference relative to the DNAseq Best Practices is the mapping step. For DNA-seq, we recommend BWA. For RNA-seq, we evaluated all the major software packages that specialize in RNAseq alignment, and we found that we were able to achieve the highest sensitivity to both SNPs and, importantly, indels, using the STAR aligner. Specifically, we use the STAR 2-pass method, which was described in a recent publication (see page 43 of the Supplemental text of the Pär G Engström et al. paper referenced below for full protocol details -- we used the suggested protocol with the default parameters).
In brief, in the STAR 2-pass approach, splice junctions detected in a first alignment run are used to guide the final alignment. Here is a walkthrough of the STAR 2-pass alignment steps:

1) STAR uses genome index files that must be saved in unique directories. The human genome index was built from the FASTA file hg19.fa as follows:

    genomeDir=/path/to/hg19
    mkdir $genomeDir
    STAR --runMode genomeGenerate --genomeDir $genomeDir --genomeFastaFiles hg19.fa \
        --runThreadN <n>

2) Alignment jobs were executed as follows:

    runDir=/path/to/1pass
    mkdir $runDir
    cd $runDir
    STAR --genomeDir $genomeDir --readFilesIn mate1.fq mate2.fq --runThreadN <n>

3) For the 2-pass STAR, a new index is then created using splice junction information contained in the file SJ.out.tab from the first pass:

    genomeDir=/path/to/hg19_2pass
    mkdir $genomeDir
    STAR --runMode genomeGenerate --genomeDir $genomeDir --genomeFastaFiles hg19.fa \
        --sjdbFileChrStartEnd /path/to/1pass/SJ.out.tab --sjdbOverhang 75 --runThreadN <n>

4) The resulting index is then used to produce the final alignments as follows:

    runDir=/path/to/2pass
    mkdir $runDir
    cd $runDir
    STAR --genomeDir $genomeDir --readFilesIn mate1.fq mate2.fq --runThreadN <n>

#### 2. Add read group information, sort, mark duplicates, and index

The above step produces a SAM file, which we then put through the usual Picard processing steps: adding read group information, sorting, marking duplicates and indexing.

    java -jar picard.jar AddOrReplaceReadGroups I=star_output.sam O=rg_added_sorted.bam SO=coordinate RGID=id RGLB=library RGPL=platform RGPU=machine RGSM=sample

    java -jar picard.jar MarkDuplicates I=rg_added_sorted.bam O=dedupped.bam CREATE_INDEX=true VALIDATION_STRINGENCY=SILENT M=output.metrics

#### 3. Split'N'Trim and reassign mapping qualities

Next, we use a new GATK tool called SplitNCigarReads, developed specially for RNAseq, which splits reads into exon segments (getting rid of Ns but maintaining grouping information) and hard-clips any sequences overhanging into the intronic regions.
In the future we plan to integrate this into the GATK engine so that it will be done automatically where appropriate, but for now it needs to be run as a separate step.

At this step we also add one important tweak: we need to reassign mapping qualities, because STAR assigns good alignments a MAPQ of 255 (which technically means "unknown" and is therefore meaningless to GATK). So we use the GATK's ReassignOneMappingQuality read filter to reassign all good alignments to the default value of 60. This is not ideal, and we hope that in the future RNAseq mappers will emit meaningful quality scores, but in the meantime this is the best we can do. In practice we do this by adding the ReassignOneMappingQuality read filter to the splitter command.

Please note that we recently (6/11/14) edited this to fix a documentation error regarding the filter to use. See this announcement for details.

Finally, be sure to specify that reads with N cigars should be allowed. This is currently still classified as an "unsafe" option, but this classification will change to reflect the fact that this is now a supported option for RNAseq processing.

    java -jar GenomeAnalysisTK.jar -T SplitNCigarReads -R ref.fasta -I dedupped.bam -o split.bam -rf ReassignOneMappingQuality -RMQF 255 -RMQT 60 -U ALLOW_N_CIGAR_READS

#### 4. Indel Realignment (optional)

After the splitting step, we resume our regularly scheduled programming... to some extent. We have found that performing realignment around indels can help rescue a few indels that would otherwise be missed, but to be honest the effect is marginal. So while it can't hurt to do it, we only recommend performing the realignment step if you have compute and time to spare (or if it's important not to miss any potential indels).

#### 5. Base Recalibration

We do recommend running base recalibration (BQSR).
Even though the effect is also marginal when applied to good quality data, it can absolutely save your butt in cases where the qualities have systematic error modes.

Both steps 4 and 5 are run as described for DNAseq (with the same known sites resource files), without any special arguments. Finally, please note that you should NOT run ReduceReads on your RNAseq data. The ReduceReads tool will no longer be available in GATK 3.0.

#### 6. Variant calling

Finally, we have arrived at the variant calling step! Here, we recommend using HaplotypeCaller because it is performing much better in our hands than UnifiedGenotyper (our tests show that UG was able to call less than 50% of the true positive indels that HC calls). We have added some functionality to the variant calling code which will intelligently take into account the information about intron-exon split regions that is embedded in the BAM file by SplitNCigarReads. In brief, the new code will perform "dangling head merging" operations and avoid using soft-clipped bases (this is a temporary solution) as necessary to minimize false positive and false negative calls. To invoke this new functionality, just add -dontUseSoftClippedBases to your regular HC command line. Note that the -recoverDanglingHeads argument which was previously required is no longer necessary, as that behavior is now enabled by default in HaplotypeCaller. Also, we found that we get better results if we lower the minimum phred-scaled confidence threshold for calling variants on RNAseq data, so we use a default of 20 (instead of 30 in DNA-seq data).

    java -jar GenomeAnalysisTK.jar -T HaplotypeCaller -R ref.fasta -I input.bam -dontUseSoftClippedBases -stand_call_conf 20.0 -stand_emit_conf 20.0 -o output.vcf

#### 7. Variant filtering

To filter the resulting callset, you will need to apply hard filters, as we do not yet have the RNAseq training/truth resources that would be needed to run variant recalibration (VQSR).
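A side note on the Phred scale used by thresholds such as -stand_call_conf: a score Q corresponds to an error probability of 10^(-Q/10), so lowering the calling threshold from 30 to 20 means tolerating a 1% rather than a 0.1% chance that a called site is not actually variant. A quick sketch of the arithmetic (standard Phred math, not GATK internals):

```python
def phred_to_error_prob(q):
    """Convert a Phred-scaled score Q to the probability that the
    call is wrong: P(error) = 10 ** (-Q / 10)."""
    return 10 ** (-q / 10)

# Q20 (the RNAseq default) tolerates ~1% error per called site;
# Q30 (the DNAseq default) demands ~0.1%.
print(phred_to_error_prob(20))
print(phred_to_error_prob(30))
```

This is why the lower threshold buys sensitivity on RNAseq, where coverage is uneven across transcripts, at the cost of letting through more borderline calls for the hard filters below to catch.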
We recommend that you filter clusters of at least 3 SNPs that occur within a window of 35 bases by adding -window 35 -cluster 3 to your command. This filter recommendation is specific to RNA-seq data. As in DNA-seq, we recommend filtering based on Fisher Strand values (FS > 30.0) and Qual By Depth values (QD < 2.0).

    java -jar GenomeAnalysisTK.jar -T VariantFiltration -R hg_19.fasta -V input.vcf -window 35 -cluster 3 -filterName FS -filter "FS > 30.0" -filterName QD -filter "QD < 2.0" -o output.vcf

Please note that we selected these hard filtering values in attempting to optimize both high sensitivity and specificity together. By applying the hard filters, some real sites will get filtered. This is a tradeoff that each analyst should consider based on his/her own project. If you care more about sensitivity and are willing to tolerate more false positive calls, you can choose not to filter at all (or to use less restrictive thresholds).

An example of filtered (SNP cluster filter) and unfiltered false variant calls:

An example of true variants that were filtered (false negatives). As explained in the text, there is a tradeoff that comes with applying filters:

### Known issues

There are a few known issues; one is that the allelic ratio is problematic. In many heterozygous sites, even if we can see in the RNAseq data both alleles that are present in the DNA, the ratio between the number of reads with the different alleles is far from 0.5, and thus the HaplotypeCaller (or any caller that expects a diploid genome) will miss that call. A DNA-aware mode of the caller might be able to fix such cases (which may be candidates also for downstream analysis of allele-specific expression).

Although our new tool (SplitNCigarReads) cleans up many false positive calls that are caused by splicing inaccuracies by the aligners, we still call some false variants for that same reason, as can be seen in the example below.
Some of those errors might be fixed in future versions of the pipeline with more sophisticated filters, with another realignment step in those regions, or by making the caller aware of splice positions. As stated previously, we will continue to improve the tools and process over time. We have plans to improve the splitting/clipping functionalities, improve the true positive rate and minimize the false positive rate, as well as to develop statistical filtering (i.e. variant recalibration) recommendations.

We also plan to add functionality to process DNAseq and RNAseq data from the same samples simultaneously, in order to facilitate analyses of post-transcriptional processes. Future extensions to the HaplotypeCaller will provide this functionality, which will require both DNAseq and RNAseq in order to produce the best results. Finally, we are also looking at solutions for measuring differential expression of alleles.

[1] Pär G Engström et al. "Systematic evaluation of spliced alignment programs for RNA-seq data". Nature Methods, 2013

Created 2015-12-08 08:22:39 | Updated 2015-12-09 18:42:17 | Tags: workshop presentations rnaseq slides

The hands-on tutorial files and presentation slides from the Dec 8, 2015 workshop at VIB in Gent, Belgium are available at this link:

Created 2014-08-15 09:41:46 | Updated | Tags: appistry rnaseq webinar

Our partners at Appistry will be doing a webinar on RNAseq analysis next Thursday. The webinar will include a live presentation of the complete pipeline for RNAseq analysis, as well as question time open to all participants. As usual it's free and open to all, you just need to register at Appistry's website. Check it out!

Created 2014-06-11 21:20:08 | Updated | Tags: best-practices bug error rnaseq topstory

We discovered today that we made an error in the documentation article that describes the RNAseq Best Practices workflow. The error is not critical, but it is likely to cause an increased rate of false positive calls in your dataset.
The error was made in the description of the "Split & Trim" pre-processing step. We originally wrote that you need to reassign mapping qualities to 60 using the ReassignMappingQuality read filter. However, this causes all MAPQs in the file to be reassigned to 60, whereas what you want to do is reassign MAPQs only for good alignments, which STAR identifies with MAPQ 255. This is done with a different read filter, called ReassignOneMappingQuality. The correct command is therefore:

    java -jar GenomeAnalysisTK.jar -T SplitNCigarReads -R ref.fasta -I dedupped.bam -o split.bam -rf ReassignOneMappingQuality -RMQF 255 -RMQT 60 -U ALLOW_N_CIGAR_READS

In our hands we see a bump in the rate of FP calls from 4% to 8% when the wrong filter is used. We don't see any significant amount of false negatives (lost true positives) with the bad command, although we do see a few more true positives show up in the results of the bad command. So basically the effect is to excessively increase sensitivity, at the expense of specificity, because poorly mapped reads are taken into account with a "good" mapping quality, where they would normally be discarded. This effect will be stronger in datasets with lower overall quality, so your results may vary. Let us know if you observe any really dramatic effects, but we don't expect that to happen.

To be clear, we do recommend re-processing your data if you can, but if that is not an option, keep in mind how this affects the rate of false positive discovery in your data. We apologize for this error (which has now been corrected in the documentation) and for the inconvenience it may cause you.

Created 2014-04-04 01:48:46 | Updated | Tags: appistry rnaseq webinar gvcf gatk3

Our partners at Appistry are putting on another webinar next week, and this one's going to be pretty special in our view -- because we're going to be doing pretty much all the talking!
Titled "Speed, Cohorts, and RNAseq: An Insider Look into GATK 3" (see that link for the full program), this webinar will be all about the GATK 3 features, of course. And lest you think this is just another marketing pitch (no offense, marketing people), rest assured that we will be diving into the gory technical details of what happens under the hood. This is a great opportunity to get the inside scoop on how the new features (RNAseq, GVCF pipeline etc) work -- all the stuff that's fit to print, but that we haven't had time to write down in the docs yet. So don't miss it if that's the sort of thing that floats your boat! Or if you miss it, be sure to check out the recording afterward. As usual the webinar is completely free and open to everyone (not just Appistry customers or prospective for-profit users). All you need to do is register now and tune in on Thursday 4/10. Talk to you then! Created 2014-03-17 23:32:16 | Updated | Tags: release rnaseq version-highlights multisample reference-model joint-analysis Better late than never, here is the now-traditional "Highlights" document for GATK version 3.0, which was released two weeks ago. It will be a very short one since we've already gone over the new features in detail in separate articles --but it's worth having a recap of everything in one place. So here goes. ### Work smarter, not harder We are delighted to present our new Best Practices workflow for variant calling in which multisample calling is replaced by a winning combination of single-sample calling in gVCF mode and joint genotyping analysis. This allows us to both bypass performance issues and solve the so-called "N+1 problem" in one fell swoop. For full details of why and how this works, please see this document. In the near future, we will update our Best Practices page to make it clear that the new workflow is now the recommended way to go for calling variants on cohorts of samples. 
We've already received some pretty glowing feedback from early adopters, so be sure to try it out for yourself!

### Jumping on the RNAseq bandwagon

All the cool kids were doing it, so we had to join the party. It took a few months of experimentation, a couple of new tools and some tweaks to the HaplotypeCaller, but you can now call variants on RNAseq with GATK! This document details our Best Practices recommendations for doing so, along with a non-trivial number of caveats that you should keep in mind as you go.

### Farewell to ReduceReads

Nice try, but no. This tool is obsolete now that we have the gVCF/reference model pipeline (see above). Note that this means that GATK 3.0 will not support BAM files that were processed using ReduceReads!

### Changes for developers

We've switched the build system from Ant to Maven, which should make it much easier to use GATK as a library against which you can develop your own tools. And on a related note, we're also making significant changes to the internal structure of the GATK codebase. Hopefully this will not have too much impact on external projects, but there will be a doc very shortly describing how the new build system works and how the codebase is structured.

### Hardware optimizations held for 3.1

For reasons that will be made clear in the near future, we decided to hold the previously announced hardware optimizations until version 3.1, which will be released very soon. Stay tuned!

Created 2014-03-06 07:24:03 | Updated | Tags: best-practices rnaseq topstory

We're excited to introduce our Best Practices recommendations for calling variants on RNAseq data. These recommendations are based on our classic DNA-focused Best Practices, with some key differences in the early data processing steps, as well as in the calling step.

### Best Practices workflow for RNAseq

This workflow is intended to be run per-sample; joint calling on RNAseq is not supported yet, though that is on our roadmap.
Please see the new document here for full details about how to run this workflow in practice.

In brief, the key modifications made to the DNAseq Best Practices focus on handling splice junctions correctly, which involves specific mapping and pre-processing procedures, as well as some new functionality in the HaplotypeCaller.

Now, before you try to run this on your data, there are a few important caveats that you need to keep in mind. Please keep in mind that our DNA-focused Best Practices were developed over several years of thorough experimentation, and are continuously updated as new observations come to light and the analysis methods improve. We have only been working with RNAseq for a few months, so there are many aspects that we still need to examine in more detail before we can be fully confident that we are doing the best possible thing.

For one thing, these recommendations are based on high quality RNA-seq data (30 million 75bp paired-end reads produced on Illumina HiSeq). Other types of data might need slightly different processing. In addition, we have currently worked only on data from one tissue from one individual. Once we've had the opportunity to get more experience with different types (and larger amounts) of data, we will update these recommendations to be more comprehensive.

Finally, we know that the current recommended pipeline is producing both false positive (wrong variant calls) and false negative (missed variants) errors. While some of those errors are inevitable in any pipeline, others are errors that we can and will address in future versions of the pipeline. A few examples of such errors are given in this article, as well as our ideas for fixing them in the future.

We will be improving these recommendations progressively as we go, and we hope that the research community will help us by providing feedback of their experiences applying our recommendations to their data. We look forward to hearing your thoughts and observations!
Created 2014-02-24 13:46:45 | Updated 2014-02-24 13:49:40 | Tags: rnaseq multisample topstory pairhmm

Previously, we covered the spirit of GATK 3.0 (what our intentions are for this new release, and what we're hoping to achieve). Let's now have a look at the top three features you can look forward to in 3.0, in no particular order:

1. Optimized PairHMM algorithm to make GATK run faster
2. Single-sample pipeline for joint variant discovery
3. Best practices for calling variants on RNAseq data

### 1. Optimized PairHMM algorithm to make HaplotypeCaller faster

At this point everyone knows that the HaplotypeCaller is fabulous (you know this, right?) but beyond a certain number of samples that you're trying to call jointly, it just grinds to a crawl, and any further movement is on the scale of continental drift. Obviously this is a major obstacle if you're trying to do any kind of work at scale beyond a handful of samples, and that's why it hasn't been used in recent large-cohort projects despite showing best-in-class performance in terms of discovery power.

The major culprit in this case is the PairHMM algorithm, which takes up the lion's share of HC runtime. With the help of external collaborators (to be credited in a follow-up post) we rewrote the code of the PairHMM to make it orders of magnitude faster, especially on specialized hardware like GPU and FPGA chips (but you'll still see a speedup on "regular" hardware). We plan to follow up on this by doing similar optimizations on the other "slowpoke" algorithms that are responsible for long runtimes in GATK tools.

### 2. Single-sample pipeline for joint variant discovery

Some problems in variant calling can't be solved by Daft Punk hardware upgrades (better faster stronger) alone. Beyond the question of speed, a major issue with multi-sample variant discovery is that you have to wait until all the samples are available to call variants on them.
Then, if later you want to add some more samples to your cohort, you have to re-call all of them together, old and new. This, also known as the "N+1 problem", is a huge pain in the anatomy.

The underlying idea of the "single-sample pipeline for joint variant discovery" is to decouple the two steps in the variant calling process: identifying evidence of variation, and interpreting the evidence. Only the second step needs to be done jointly on all samples, while the first step can be done just as well (and a heck of a lot faster) on one sample at a time.

The new pipeline allows us to process each sample as it comes off the sequencing machine, up to the first step of variant calling. Cumulatively, this will produce a database of per-sample, per-site allele frequencies. Then it's just a matter of running a joint analysis on the database, which can be done incrementally each time a new sample is added, or at certain intervals or timepoints, depending on the research needs, because this step runs quickly and cheaply.

We'll go into the details of exactly how this works in a follow-up post. For now, the take-home message is that it's a "single-sample pipeline" because you do the heavy lifting per-sample (and just once, ever), but you are empowered to perform "joint discovery" because you interpret the evidence from each sample in light of what you see in all the other samples, and you can do this at any point in the project timeline.

### 3. Best practices for calling variants on RNAseq

Our Best Practices recommendations for calling variants on DNA sequence data have proved to be wildly popular with the scientific community, presumably because they take a lot of the guesswork out of running GATK and provide a large degree of reproducibility. Now, we're excited to introduce our Best Practices recommendations for calling variants on RNAseq data.
These recommendations are based on our classic DNA-focused Best Practices, with some key differences in the early data processing steps, as well as in the calling step. We do not yet have RNAseq-specific recommendations for variant filtering/recalibration, but will be developing those in the coming weeks.

We'll go into the details of the RNAseq Best Practices in a follow-up post, but in a nutshell, these are the key differences: use STAR for alignment, add an exon splitting and cleanup step, and tell the variant caller to take the splits into account. The latter involves some new code added to the variant callers; it is available to both HaplotypeCaller and UnifiedGenotyper, but UG is currently missing a whole lot of indels, so we do recommend using only HC in the immediate future.

Keep in mind that our DNA-focused Best Practices were developed over several years of thorough experimentation, and are continuously updated as new observations come to light and the analysis methods improve. We have only been working with RNAseq for a few months, so there are many aspects that we still need to examine in more detail before we can be fully confident that we are doing the best possible thing. We will be improving these recommendations progressively as we go, and we hope that the research community will help us by providing feedback of their experiences applying our recommendations to their data.

Created 2014-02-12 02:49:21 | Updated 2014-02-12 03:15:29 | Tags: rnaseq topstory joint-discovery

Yep, you read that right, the next release of GATK is going to be the big Three-Oh! You may have noticed that the 2.8 release was really slim. We explained in the release notes, perhaps a tad defensively, that it was because we'd been working on some ambitious new features that just weren't ready for prime time. And that was true. Now we've got a couple of shiny new toys to show for it that we think you're really going to like.
But GATK 3.0 is not really about the new features (otherwise we’d just call it 2.9). It’s about a shift in the way we approach the problems that we want to solve -- and to some extent, a shift in the scope of problems we choose to tackle. We’ll explain what this entails in much more detail in a series of blog posts over the next few days, but let me reassure you right now on one very important point: there is nothing in the upcoming release that will disrupt your existing workflows. What it will do is offer you new paths for discovery that we believe will empower research on a scale that has previously not been possible. And lest you think this is all just vaporware, here’s a sample of what we have in hand right now: variant calling on RNA-Seq, and a multisample variant discovery workflow liberated from the shackles of time and scaling issues. Stay tuned for details!

Created 2015-09-17 00:02:05 | Updated 2015-09-17 00:02:45 | Tags: haplotypecaller rnaseq gatk-walkers

Hi, I have been running HaplotypeCaller on my RNA-seq data:

java -Xmx16g -jar GenomeAnalysisTK.jar \
 -T HaplotypeCaller \
 -R $ref \
 -I INPUT.bam \
 -stand_call_conf 50.0 \
 -stand_emit_conf 10.0 \
 -o output.vcf

My process was killed when it was 82% complete. Is there a way to resume the run without running from the beginning? Thanks. Best Regards, T. Hamdi Kitapci

Created 2015-09-14 16:27:46 | Updated | Tags: haplotypecaller vcf bam rnaseq variant-calling

Hello, I'm using GATK to call variants in my RNA-Seq data. I'm noticing something strange; perhaps someone can help? For a number of sites the VCF is reporting things I cannot replicate from the BAMs. How can I recover the reads that contribute to a variant call? Here is an example for one site in one sample, but I've observed this at many sites/samples:

$ grep 235068463 file.vcf
chr1 235068463 . T C 1795.77 .
AC=1;AF=0.500;AN=2;BaseQRankSum=-3.530;ClippingRankSum=-0.535;DP=60;FS=7.844;MLEAC=1;MLEAF=0.500;MQ=60.00;MQ0=0;MQRankSum=0.401;QD=29.93;ReadPosRankSum=3.557 GT:AD:DP:GQ:PL 0/1:5,55:60:44:1824,0,44

60 reads, 5 T, 55 C.

samtools view -uh file.md.realn.bam chr1:235068463-235068463 | samtools mpileup - | grep 235068463
[mpileup] 1 samples in 1 input files
<mpileup> Set max per-file depth to 8000
chr1 235068463 N 60 cCCccccCCCcccccCcccccccccCCCccCCCCCcCcccccCCCcCcCCccCCCCccCC >CA@B@>A>BA@BCABACCC:@@ACABBBCAACBBCABCB@CABBAB?>A?CBBAAAABA

There are just 60 C's at that location. How do I decide what the genotype here is? C/C or C/T?

For methodology I'm using gatk/3.2.0. I tried using HC from gatk/3.3.1 and got the same result. The bam and vcf files come from the final two lines:

2-pass STAR - Mark Dups - SplitNCigarReads - RealignerTargetCreator - IndelRealigner - BaseRecalibrator - PrintReads
MergeSamFiles - Mark Dups - RealignerTargetCreator - IndelRealigner - HaplotypeCaller

Thanks, Kipp

Created 2015-06-26 11:26:50 | Updated | Tags: bqsr knownsites rnaseq

Hi, I read the entry about how to do BQSR without a knownSNPs file and have some uncertainties about how to apply it to RNAseq data. I am calling SNPs from RNA-seq data on a draft genome of a non-model organism and was wondering what the best practice might be (must sound like a nightmare to you working with human data :smile: ). I can think of the following workflow for each of the RNA-seq samples:

1. best-practices SNP calling for RNA reads with HaplotypeCaller
2. filter variants for "high quality" (-window 25 -cluster 3 --filterExpression "MQ < 30.0" --filterExpression "QD < 2.0" --filterExpression "DP < 5")
3. select for PASS SNPs and biallelic SNPs (as the sample is diploid)
4. use the selected SNPs as knownSNPs to do BQSR
5. run HaplotypeCaller again on the recalibrated bam
6. go nuts with the resulting vcf file... =)

Should I include heterozygous SNPs when generating the BQSR recalibration file?
Would you agree with that workflow, or would you alter the filters? (I know filtering for depth is not a good thing to do, but for RNA-seq I think it's good to have some minimal coverage of a site.) Comments and recommendations are very welcome. Thank you, Michel

Created 2015-06-24 18:55:35 | Updated | Tags: haplotypecaller multi-sample rnaseq pooled-calls

Hi everybody! I've recently started working with GATK, and after reading documents, tutorials and forum discussions, I set up a pipeline for my experiment. I'm dealing with multi-sample RNAseq, for which GATK tools are less improved and verified than for DNAseq, so I'd like to have your suggestions. Briefly, this is the experiment: 2 phenotypes of Sparus aurata, 8 libraries per phenotype, each library consisting of a pool (not barcoded) of 3 animals. Thus I have a total of 16 samples. My goal is to find the total number of variant sites and compare the allele frequencies between the two phenotypes. I lack a genome and a SNP database. Step by step:

1) I used STAR (not 2-pass) in order to map reads against my de novo assembly.
STAR --runThreadN 16 --genomeDir ./GenomeIndex --readFilesIn XXX.fastq --alignIntronMax 19 --outSAMtype BAM SortedByCoordinate --outSAMmapqUnique 60 --outFilterMultimapNmax 5 --outFilterMismatchNmax 4

2) I used the picard-tools to MarkDuplicates
3) I used the picard-tools to AddOrReplaceReadGroups
4) I used the picard-tools to BuildBamIndex
5) I called haplotypes for the 16 samples with the following command:

GenomeAnalysisTK.jar -T HaplotypeCaller -R reference.fasta -I sample.bam -dontUseSoftClippedBases -ploidy 6 -ERC GVCF -o output.g.vcf

6) I used GenotypeGVCFs to merge the samples from the same population into a single vcf file as follows:

GenomeAnalysisTK.jar -T GenotypeGVCFs -R reference.fasta -stand_call_conf 20 -stand_emit_conf 20 -ploidy 6 --variant sample1.g.vcf --variant sample2.g.vcf --variant sample3.g.vcf (8 samples) -o output_HC.vcf

Finally, I'm going to filter the results with VariantFiltration:

GenomeAnalysisTK.jar -T VariantFiltration -R reference.fasta -V output.vcf -window 35 -cluster 3 -filterName FS -filter "FS > 30.0" -filterName QD -filter "QD < 2.0" -o outputFiltered.vcf

What do you think? Now I'd like to compare the two populations, but how? Manually in Excel files? vcftools does not seem to handle ploidy higher than 2. Does anyone deal with these issues and can kindly give some tips? Best, Marianna

Created 2015-06-19 19:20:58 | Updated | Tags: haplotypecaller rnaseq genotypegvcfs gvcf

Hello, I was wondering if there is a way to output all annotations for all sites when running HaplotypeCaller with BP_RESOLUTION. Currently it outputs all annotations only for called variants. Thanks in advance.

Created 2015-05-28 03:46:02 | Updated | Tags: snp rnaseq

Hi, I have Exome-seq and RNA-seq data and I am trying to find SNPs in those samples. I know that the SNP is not in the gene itself but in the promoter region. From what I know, Exome-seq and RNA-seq do not cover the promoter region. What is your suggestion about this?
Perhaps you can share some experience on how to find SNPs in the promoter region with RNA-seq and Exome-seq data. Thank you.

Created 2015-05-05 02:51:05 | Updated | Tags: rnaseq

I am using multiple RNA-seq samples from the same individual and genotyping at dbSNP locations. I need to get all 0/0, 0/1 and 1/1 calls in my matrix for all samples that have reasonable coverage. Emitting all confident sites forces the program to look at every possible site across all samples and also to output 0/0 annotations when using UnifiedGenotyper. On the other hand, since I want to genotype only dbSNPs, is it fine to activate GENOTYPE_GIVEN_ALLELES? Will I still get 0/0? It should then look like:

java -Xmx4g -jar /mnt/projects/senguptad/ctc/K562-allele/GenomeAnalysisTK.jar \
 -T UnifiedGenotyper \
 -R /mnt/projects/senguptad/ctc/hg19/hg19.fa \
 --dbsnp /mnt/AnalysisPool/libraries/genomes/hg19/dbsnp/dbsnp_137.hg19.vcf \
 -I /mnt/projects/senguptad/ctc/GLIO/GLIO/unique/newresult4/ready_readgrp_SRR1294973.bam \
 -I /mnt/projects/senguptad/ctc/GLIO/GLIO/unique/newresult4/ready_readgrp_SRR1294974.bam \
 . . . .
 --out /mnt/projects/senguptad/ctc/GLIO/GLIO/unique/newresult4/finalX.vcf \
 -stand_call_conf 30.0 \
 -stand_emit_conf 10.0 \
 -gt_mode GENOTYPE_GIVEN_ALLELES --alleles /mnt/AnalysisPool/libraries/genomes/hg19/dbsnp/dbsnp_137.hg19.vcf -out_mode EMIT_ALL_CONFIDENT_SITES \
 -l INFO \
 -A HaplotypeScore \
 -A InbreedingCoeff \
 -glm SNP \
 -nt 1 \

Am I correct?

Created 2015-02-12 08:13:28 | Updated | Tags: dbsnp rnaseq singlecell

In a project I need to see the allelic frequency at dbSNP sites in the RNAseq data of a single cell. To be precise, given the dbSNPs, I need the 0/0 calls as well wherever there is the required read coverage. An SNV calling pipeline normally does not report a site if it is 0/0. I have come all the way to a recalibrated BAM following the RNAseq SNV calling best practices as suggested on the GATK site. Help with the command would be highly appreciated.
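Both questions above want a per-site genotype matrix that includes homozygous-reference (0/0) entries wherever coverage allows. As a toy illustration of that output shape (this is a naive allele-ratio caller, not a substitute for UnifiedGenotyper; the depth threshold and allele-fraction bands are invented):

```python
# Naive sketch of emitting a genotype at every dbSNP site with sufficient
# coverage, including homozygous-reference (0/0) calls. Illustrative only:
# it just shows the kind of per-site matrix the posters want to build.

def naive_genotype(ref_count, alt_count, min_depth=8, het_band=(0.2, 0.8)):
    depth = ref_count + alt_count
    if depth < min_depth:
        return "./."            # not enough coverage: leave as missing
    alt_frac = alt_count / depth
    if alt_frac < het_band[0]:
        return "0/0"
    if alt_frac > het_band[1]:
        return "1/1"
    return "0/1"

sites = {"rs001": (30, 0), "rs002": (14, 16), "rs003": (1, 25), "rs004": (3, 2)}
matrix = {rsid: naive_genotype(r, a) for rsid, (r, a) in sites.items()}
print(matrix)  # {'rs001': '0/0', 'rs002': '0/1', 'rs003': '1/1', 'rs004': './.'}
```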
Created 2015-01-21 19:53:26 | Updated | Tags: baserecalibrator haplotypecaller vcf bam merge rnaseq

Hi, I am working with RNA-Seq data from 6 different samples. Part of my research is to identify novel polymorphisms. I have generated a filtered vcf file for each sample. I would now like to combine these into a single vcf. I am concerned about sites that were either not covered by the RNA-Seq analysis or were no different from the reference allele in some individuals but not others. These sites will be ‘missed’ when HaplotypeCaller analyzes each sample individually and will not be represented in the downstream vcf files. When the files are combined, what happens to these ‘missed’ sites? Are they automatically excluded? Are they treated as missing data? Is the absent data filled in from the reference genome? Alternatively, can BaseRecalibrator and/or HaplotypeCaller simultaneously analyze multiple bam files? Is it common practice to combine bam files for discovering sequence variants?

Created 2014-12-17 18:04:27 | Updated | Tags: haplotypecaller gatk error rnaseq genotyping genotyping-mode

Hi, I'm currently trying to use GATK to call variants from human RNA-seq data. So far, I've managed to do variant calling in all my samples following the GATK best practice guidelines (using HaplotypeCaller in DISCOVERY mode on each sample separately). But I'd like to go further and get the genotype, in every sample, of each variant found in at least one sample. This is to differentiate, for each variant, samples where that variant is absent (homozygous for the reference allele) from samples where it is not covered (and therefore not genotyped).
To do so, I first used CombineVariants to merge the variants from all my samples and create the list of variants to be genotyped: ${ALLELES}.vcf. I then try to re-genotype my samples with HaplotypeCaller using the GENOTYPE_GIVEN_ALLELES mode and the same settings as before. My command is the following:

java -jar ${GATKPATH}/GenomeAnalysisTK.jar -T HaplotypeCaller -R ${GENOMEFILE}.fa -I ${BAMFILE_CALIB}.bam --genotyping_mode GENOTYPE_GIVEN_ALLELES -alleles ${ALLELES}.vcf -out_mode EMIT_ALL_SITES -dontUseSoftClippedBases -stand_emit_conf 20 -stand_call_conf 20 -o ${SAMPLE}_genotypes_all_variants.vcf -mbq 25 -L ${CDNA_BED}.bed --dbsnp ${DBSNP}.vcf

In doing so I invariably get the same error after calling 0.2% of the genome.

##### ERROR ------------------------------------------------------------------------------------------
##### ERROR stack trace
java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
at java.util.ArrayList.rangeCheck(ArrayList.java:635)
at java.util.ArrayList.get(ArrayList.java:411)
at htsjdk.variant.variantcontext.VariantContext.getAlternateAllele(VariantContext.java:845)
at org.broadinstitute.gatk.tools.walkers.haplotypecaller.HaplotypeCallerGenotypingEngine.assignGenotypeLikelihoods(HaplotypeCallerGenotypingEngine.java:248)
at org.broadinstitute.gatk.tools.walkers.haplotypecaller.HaplotypeCaller.map(HaplotypeCaller.java:1059)
at org.broadinstitute.gatk.tools.walkers.haplotypecaller.HaplotypeCaller.map(HaplotypeCaller.java:221)
at org.broadinstitute.gatk.engine.traversals.TraverseActiveRegions$TraverseActiveRegionMap.apply(TraverseActiveRegions.java:709)
at org.broadinstitute.gatk.engine.traversals.TraverseActiveRegions$TraverseActiveRegionMap.apply(TraverseActiveRegions.java:705)
at org.broadinstitute.gatk.utils.nanoScheduler.NanoScheduler.executeSingleThreaded(NanoScheduler.java:274)
at org.broadinstitute.gatk.utils.nanoScheduler.NanoScheduler.execute(NanoScheduler.java:245)
at
org.broadinstitute.gatk.engine.traversals.TraverseActiveRegions.traverse(TraverseActiveRegions.java:274)
at org.broadinstitute.gatk.engine.traversals.TraverseActiveRegions.traverse(TraverseActiveRegions.java:78)
at org.broadinstitute.gatk.engine.executive.LinearMicroScheduler.execute(LinearMicroScheduler.java:99)
at org.broadinstitute.gatk.engine.GenomeAnalysisEngine.execute(GenomeAnalysisEngine.java:319)
at org.broadinstitute.gatk.engine.CommandLineExecutable.execute(CommandLineExecutable.java:121)
at org.broadinstitute.gatk.utils.commandline.CommandLineProgram.start(CommandLineProgram.java:248)
at org.broadinstitute.gatk.utils.commandline.CommandLineProgram.start(CommandLineProgram.java:155)
at org.broadinstitute.gatk.engine.CommandLineGATK.main(CommandLineGATK.java:107)
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A GATK RUNTIME ERROR has occurred (version 3.3-0-g37228af):
##### ERROR
##### ERROR This might be a bug. Please check the documentation guide to see if this is a known problem.
##### ERROR If not, please post the error message, with stack trace, to the GATK forum.
##### ERROR Visit our website and forum for extensive documentation and answers to
##### ERROR commonly asked questions http://www.broadinstitute.org/gatk
##### ERROR
##### ERROR MESSAGE: Index: 3, Size: 3
##### ERROR ------------------------------------------------------------------------------------------

Because the problem seemed to originate from getAlternateAllele, I tried to play with --max_alternate_alleles by setting it to 2 or 10, without success. I also checked my ${ALLELES}.vcf file to look for malformed alternate alleles in the region where GATK crashes (chr 1, somewhere after 78Mb), but I couldn't identify any...
(I searched for alternate alleles that would not match the extended regexp '[ATGC,]+'.)

Created 2014-12-11 18:37:39 | Updated | Tags: rnaseq cohort rna-seq

Dear GATK team, Is there value in cohort calling for RNA-Seq similar to what is recommended in the GATK DNA-Seq workflow? I am trying to understand why cohort calling is highly emphasized for DNA-Seq but not mentioned in the RNA-Seq workflow. Thank you! Joe

Created 2014-12-04 18:54:17 | Updated | Tags: haplotypecaller vcf rnaseq

Specifically, what does the 'start' component of this flag mean? Do the reads all have to start in exactly the same location? Alternatively, does the flag specify the total number of reads that must overlap a putative variant before that variant will be considered for calling?

Created 2014-11-25 14:57:50 | Updated | Tags: rnaseq

Hello, I am running the SNP calling pipeline on RNA-seq data. I used STAR for alignment. My reference genome is composed of 32012 scaffolds, of which more than 30000 are contigs with a length of less than 1000 bp. I first decided to delete these contigs and keep just the superscaffolds, scaffolds and contigs longer than 1000 bp, but the STAR manual recommends keeping all the sequences because other types of RNA can be mapped to them, so I have retained them. I have run the rest of the pipeline using these 32012 scaffolds as my reference. However, for the last step, which is variant calling, I am going to extract only the scaffolds that map to chromosome Z, so I will have something around 40 scaffolds. As I don't know the algorithms behind GATK and picard, I was wondering whether using such a large number of primary scaffolds may be problematic at some step? I am sorry for this general question; I have run the pipeline and have not encountered any problem so far, but I was wondering if there is anything specific I should look for in the outputs to make sure everything has gone well.
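One concrete thing worth checking for the scaffold question above: before restricting calling to the ~40 Z-linked scaffolds (e.g. via an interval list), verify that every requested scaffold name exists in the reference, since name mismatches are a common failure mode with thousands of small scaffolds. A minimal sketch (all scaffold names here are invented):

```python
# Toy sanity check: every scaffold we plan to pass as an interval must be
# present in the reference's sequence dictionary, or GATK/picard will
# complain about an unknown contig.

reference_scaffolds = {f"scaffold_{i}" for i in range(1, 32013)}
z_linked = ["scaffold_12", "scaffold_407", "scaffold_31999", "scaffoldZ_1"]

missing = [name for name in z_linked if name not in reference_scaffolds]
print(missing)  # ['scaffoldZ_1'] -> this name would trigger a contig error
```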
Created 2014-11-03 13:43:38 | Updated | Tags: haplotypecaller rnaseq pooled-calls

Hello, First of all, thank you for your detailed best practice pipeline for SNP calling from RNA-seq data. I have pooled RNA-seq data from which I need to call SNPs. Each library consists of a pooled sample of 2-3 individuals of the same sex-tissue combination. I was wondering if HaplotypeCaller can handle SNP calling from pooled sequences, or is it better if I use FreeBayes? I understand that these results come from experimenting with the data, but it would be great if you could share your experiences with me on this. Cheers, Homa

Created 2014-10-07 12:35:50 | Updated | Tags: rnaseq variant-calling number samples

I have 90 exome samples coming from three sample groups. What would be the best way to do variant calling using GATK? I am planning to call variants from the three sample groups separately and compare them. So, there are 30 samples per group; is this number of samples sufficient to merge BAM files and call variants? I also have RNASeq data from the same samples. Is it a good idea to call variants from the RNASeq data too?

Created 2014-08-14 19:37:20 | Updated | Tags: best-practices rnaseq variant-calling

Hi, Thank you for providing guidelines on RNA-Seq variant discovery. For our data, we are currently playing with multiple mapping methods and have noticed that 2-step alignments work "better" than 1-step alignments. By 2-step alignments, I mean using STAR as step 1 and then taking the unmapped reads from this and using another aligner (like Bowtie2) for alignment. If I use such a methodology, will there be an issue in variant calling when, during splitting of CIGAR strings, I ask it to convert the 255 MAPQ to another value (like 60 in the best practices example), since Bowtie2 gives different MAPQ scores? Sorry if this seems like a stupid question, but I am just a little curious how such a thing might affect the variant calls. Any insights/comments on this will be greatly appreciated. Thanks!
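The MAPQ concern in the last question can be made concrete with a toy filter: remap only STAR's aligner-specific "uniquely mapped" value of 255 to 60, and leave Bowtie2's genuine MAPQ values untouched. This is an illustration of the idea, not the actual SplitNCigarReads/ReassignOneMappingQuality implementation, and the SAM lines are invented:

```python
# Toy sketch of selective MAPQ reassignment. Field layout follows the SAM
# format: column 5 (index 4) is MAPQ. Only the exact sentinel value is
# remapped, so reads from an aligner with a real MAPQ scale pass through.

def reassign_mapq(sam_line, from_q=255, to_q=60):
    fields = sam_line.split("\t")
    if int(fields[4]) == from_q:
        fields[4] = str(to_q)
    return "\t".join(fields)

star_read    = "r1\t0\tchr1\t100\t255\t50M\t*\t0\t0\tACGT\tIIII"
bowtie2_read = "r2\t0\tchr1\t200\t42\t50M\t*\t0\t0\tACGT\tIIII"
print(reassign_mapq(star_read).split("\t")[4])     # 60
print(reassign_mapq(bowtie2_read).split("\t")[4])  # 42 (unchanged)
```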
Created 2014-07-14 06:00:06 | Updated | Tags: multi-sample rnaseq

Hi there, we are working on 454 RNAseq data... we have RNAseq data of four different tissues from six different individuals... we want to call variants from all these data in one job... but as per your recommendation we have to run each tissue's data from each individual separately... so my query is: can we join them as per the DNAseq guidelines, or do we have to find another way to do this kind of analysis? Please help us if you have any suggestions for doing multisample variant calling from RNAseq data. Best Regards

Created 2014-06-09 19:25:22 | Updated | Tags: haplotypecaller rnaseq stand-emit-conf stand-call-conf

I have processed samples using GATK 3.1 HaplotypeCaller according to the RNA-seq best practice. HaplotypeCaller is set with the --stand_emit_conf = 20 and --stand_call_conf = 20 options, but I can find variants with QUAL less than 20 in the output gVCF file, and variants with QUAL < 20 are not marked as LowQual. I wonder what those parameters do for HaplotypeCaller. I have run with --stand_emit_conf = 20 and --stand_call_conf = 30 for testing. The output gVCF file appears to be identical. Thanks!
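The emit/call thresholds the last poster asks about can be illustrated with a toy classifier. The threshold semantics below follow the GATK documentation of that era; the observation that changing them has no effect on the gVCF is consistent with those thresholds being applied later, at the GenotypeGVCFs step, but that reading is offered here as an assumption, not an official answer.

```python
# Toy illustration of -stand_emit_conf / -stand_call_conf semantics for a
# variant's QUAL: below emit_conf the site is dropped, between the two
# thresholds it is emitted but flagged LowQual, and at/above call_conf it
# is a confident call. Thresholds here match the poster's first run.

def classify(qual, emit_conf=20.0, call_conf=30.0):
    if qual < emit_conf:
        return "not emitted"
    if qual < call_conf:
        return "LowQual"
    return "PASS-eligible call"

for q in (10.0, 25.0, 45.0):
    print(q, classify(q))
```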
Created 2014-03-12 15:22:35 | Updated | Tags: haplotypecaller error rnaseq

Hi, I was trying to call variants in RNAseq data using GATK 3.0 when I got the following stack trace:

##### ERROR ------------------------------------------------------------------------------------------
##### ERROR stack trace
java.lang.NullPointerException
at org.broadinstitute.sting.gatk.traversals.TraverseActiveRegions$TraverseActiveRegionMap.apply(TraverseActiveRegions.java:708)
at org.broadinstitute.sting.gatk.traversals.TraverseActiveRegions$TraverseActiveRegionMap.apply(TraverseActiveRegions.java:704)
at org.broadinstitute.sting.utils.nanoScheduler.NanoScheduler$ReadMapReduceJob.run(NanoScheduler.java:471)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
##### ERROR ------------------------------------------------------------------------------------------
##### ERROR A GATK RUNTIME ERROR has occurred (version nightly-2014-03-10-gf78001a):
##### ERROR
##### ERROR This might be a bug. Please check the documentation guide to see if this is a known problem.
##### ERROR If not, please post the error message, with stack trace, to the GATK forum.
##### ERROR
##### ERROR MESSAGE: Code exception (see stack trace for error itself)
##### ERROR ------------------------------------------------------------------------------------------

Here are the command line arguments:

Program Args: -T HaplotypeCaller -I in.bam -R ref.fa -o raw.snps.indels.vcf -nct 8 -recoverDanglingHeads -dontUseSoftClippedBases -stand_call_conf 20 -stand_emit_conf 20

As you can see, I got the error above from one of the nightly builds.
Before that I also tried version 3.0-0-g6bad1c6, and this produced the exact same error. What's curious about this is that it didn't fail in the same place each time. I ran this on 20 samples, and in the first run, 15 of the samples failed with this error. One of the samples failed after 7 minutes, so I decided to try that one again to see if I could reproduce the failure, but it went past the point (both in time and genomic position) where it had failed the first time. I decided to download a nightly build (version nightly-2014-03-10-gf78001a) and see if this had been fixed, but again, 15 of the samples failed. However, it was not the same set of samples that failed as with the other version. The reads were aligned using STAR, and prior to this step I ran SplitNCigarReads and IndelRealigner. Thanks, Niklas

Created 2013-10-24 11:31:46 | Updated 2013-10-24 11:38:15 | Tags: snp rnaseq snps mrnaseq

Hi! I have worked for some time on an mRNAseq data set, single-end. It's a high-quality set with lots of biological replicates (200+). My question is: how could I best contribute to the methodology used for SNP calling in mRNAseq? What do we need tested to improve this method?

Created 2013-06-12 16:09:40 | Updated | Tags: commandlinegatk workflow rnaseq

Hi all: I find that among all the workflows of GATK http://www.broadinstitute.org/gatk/guide/topic?name=methods-and-workflows there are no workflows for RNA-seq analysis. I understand that GATK mainly focuses on variant calling; can anyone tell me how to use GATK for RNA-seq analysis? Thanks, Daniel

Created 2013-06-10 04:38:43 | Updated | Tags: reducereads rnaseq

Hi, I've been trying to get ReduceReads working in a pipeline I've made that incorporates GATK tools to call variants in RNA-seq data. After performing indel realignment and base recalibration, I'm trying to use ReduceReads prior to calling variants using UnifiedGenotyper. I've been using GATK version 2.3.9.
When I try to use ReduceReads on a 1.7Gb .bam file, I need to set aside 100Gb of memory for the process to complete (otherwise I'll get an error saying I didn't provide enough memory to run the program and to adjust the maximum heap size using the -Xmx option, etc.). The problem isn't that ReduceReads doesn't work - it does; however, of the 100Gb I set aside, it uses 80-90Gb. This means I can't run more than one job at a time due to the constraints of the machine I'm using. I've been looking through the GATK forum and understand it may be a GATK version issue, though I've tried using GATK 2.5.2 ReduceReads for this step and it still requires 70-80Gb of memory. Can anyone provide any clues as to what I may be doing wrong, or whether I can do something to make it use less memory so I can run multiple jobs simultaneously? The command I'm using is:
https://physics.com.hk/2008/04/06/nobel-prize-x-2-part-1/
# Nobel Prize x 2, part 1

The Nobel Prize in Physics in 1956

Bardeen brought only one of his three children to the Nobel Prize ceremony. His two sons were studying at Harvard University, and Bardeen didn't want to disrupt their studies. King Gustav scolded Bardeen because of this, and Bardeen assured the King that the next time he would bring all his children to the ceremony.

The Nobel Prize in Physics in 1972

In 1972, John Bardeen shared the Nobel Prize in Physics with Leon Neil Cooper of Brown University and John Robert Schrieffer of the University of Pennsylvania for their jointly developed theory of superconductivity, usually called the BCS theory. Bardeen did bring all his children to the Nobel Prize ceremony in Stockholm, Sweden.

— Wikipedia

.

Excellence is an art won by training and habituation. We do not act rightly because we have virtue or excellence, but we rather have those because we have acted rightly. We are what we repeatedly do. Excellence, then, is not an act but a habit.

— Aristotle

— Me@2022.09.12 01:07:54 PM

.

.

2008.04.06 Sunday
https://www.gradesaver.com/textbooks/math/calculus/calculus-early-transcendentals-8th-edition/chapter-5-section-5-3-the-fundamental-theorem-of-calculus-5-3-exercises-page-400/33
## Calculus: Early Transcendentals 8th Edition

$\displaystyle\int\limits_0^1(1+r)^{3}dr=\dfrac{15}{4}$

$\displaystyle\int\limits_0^1(1+r)^{3}dr$

Let's apply the cube-of-a-binomial rule to the integrand. This rule is $(a+b)^{3}=a^{3}+3a^{2}b+3ab^{2}+b^{3}$

$\displaystyle\int\limits_0^1(1+r)^{3}dr=\int\limits_0^1(1+3r+3r^{2}+r^{3})dr=...$

Now, integrate each term separately and apply the second part of the fundamental theorem of calculus:

$...=\displaystyle\int\limits_0^1dr+3\int\limits_0^1r\,dr+3\int\limits_0^1r^{2}dr+\int\limits_0^1r^{3}dr=...$

$...=r+(3)\Big(\dfrac{1}{2}\Big)r^{2}+(3)\Big(\dfrac{1}{3}\Big)r^{3}+\dfrac{1}{4}r^{4}\Big|_0^1=r+\dfrac{3}{2}r^{2}+r^{3}+\dfrac{1}{4}r^{4}\Big|_0^1$

$...=\Big[1+\dfrac{3}{2}(1)^{2}+(1)^{3}+\dfrac{1}{4}(1)^{4}\Big]-\Big[0+\dfrac{3}{2}(0)^{2}+(0)^{3}+\dfrac{1}{4}(0)^{4}\Big]$

$...=1+\dfrac{3}{2}+1+\dfrac{1}{4}=\dfrac{15}{4}$
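As a quick cross-check (not part of the textbook's printed solution), the substitution $u=1+r$, $du=dr$ gives the same value in one line:

```latex
\int_0^1 (1+r)^3\,dr = \int_1^2 u^3\,du
  = \left.\frac{u^4}{4}\right|_1^2
  = \frac{2^4 - 1^4}{4} = \frac{15}{4}
```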
https://itprospt.com/num/12553910/016-part-1-of-2-10-0-pointsa-window-has-a-glass-surface
# 016 (part 1 of 2) 10.0 points

## Question

A window has a glass surface of 4007 cm² and a thickness of 3.1 mm. Find the rate of energy transfer by conduction through this pane when the temperature of the inside surface of the glass is 75°F and the outside temperature is 98°F. Assume the thermal conductivity of window glass is 0.8 J/s·m·°C. Answer in units of kW.

017 (part 2 of 2) 10.0 points

Find the rate of energy transfer for the same inside temperature and an outside temperature of 0°F. Answer in units of kW.
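A worked check of both parts using the conduction law P = kA ΔT / L. The scanned temperatures ("758F", "98 %F") are OCR-damaged, so this sketch assumes 75°F inside and 98°F outside; substitute the true figures if they differ.

```python
# Conduction through the pane: P = k * A * dT / L.
# Temperatures are assumed from the garbled OCR (75 degF inside, 98 degF
# outside); a Fahrenheit *difference* converts to Celsius by a factor 5/9.

k = 0.8          # thermal conductivity of window glass, J/(s*m*degC)
A = 4007e-4      # 4007 cm^2 expressed in m^2
L = 3.1e-3       # 3.1 mm expressed in m

def conduction_kw(t_inside_f, t_outside_f):
    dT_c = abs(t_outside_f - t_inside_f) * 5.0 / 9.0  # degF diff -> degC diff
    return k * A * dT_c / L / 1000.0                  # W -> kW

print(round(conduction_kw(75, 98), 2))  # part 1: ~1.32 kW
print(round(conduction_kw(75, 0), 2))   # part 2: ~4.31 kW
```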
##### For each of the following hypotheses, please state:• The most correct inferential statistical test• The test type • The number of variables involved• The specific variables involved from the survey and the levelof measurement scale each variable is expressed upon• A justification for why the proposed test is correct for thehypothesisiii. Hypothesis 3: Overall satisfaction with Stitched Up ispositively associated with the average number of times visiting theonline store per month. For each of the following hypotheses, please state: • The most correct inferential statistical test • The test type • The number of variables involved • The specific variables involved from the survey and the level of measurement scale each variable is expressed upon • ... ##### QUESTION 4flx) = tan(25x+5 Find f( 20 Be sure to use radian measurel Calculate answer to at least the nearest thousandth (3 decimal places)QUESTION 5Your budget is $2000_ Gold costs$25 per cm Silver costs $5 per cm You want to build a rectangular frame. The top and bottom of gold. The sides of silver For your budget what is the maximum area that your frame can enclose? Porthat maximum area (calculated to at least three decimal places) in the answer box) Don't put the cm'$, just put QUESTION 4 flx) = tan(25x+5 Find f( 20 Be sure to use radian measurel Calculate answer to at least the nearest thousandth (3 decimal places) QUESTION 5 Your budget is $2000_ Gold costs$25 per cm Silver costs \$5 per cm You want to build a rectangular frame. The top and bottom of gold. The sides o... 
##### AndtyHoormanPooolaWndovAnonaDetastaaColter PhSnIncr SiqDAMe7et[ Aflenjecourses malne cdu JZ0ATquizzingluscr/atrcmpulqulz_slat[ tramc_Juto d2i70] -332598 sprve & Orc=08,41=127363*ctai-0adnbao Comton Aop accouni Fouioat & tcee Heltheanselrfe Alna Ahounst 1 {Lime 3-00.00Tnc Lctt253.42Habso Abdtizjk Atlemol LQuestion - (5 points) For the same charges as question What Is the magnitude the fOrcc exerted on one charge by the other?06.2x 10-4 N08.4x 10-3 N01.25 x 10-4 N03.75 * 10-3 NOisnOuettion Andty Hoorman Pooola Wndov Anona Detastaa Colter Ph SnIncr Siq DAMe 7et[ Aflenje courses malne cdu JZ0ATquizzingluscr/atrcmpulqulz_slat[ tramc_Juto d2i70] -332598 sprve & Orc=08,41=127363*ctai-0adnbao Comton Aop accouni Fouioat & tcee Heltheanselrfe Alna Ahoun st 1 {Lime 3-00.00 Tnc Lctt253.... ##### Among 50- below:55-year-olds, 31% say they have written an editorial letter while under the influence of Icohol: Suppose six 50- 55-year-olds are selected at random: Complete parts (a) through (d)What the probability Ihat all six have written an editorial letter while under the influence of alcohol?(Round t0decimal places as needed: )(b) What is the probabilily that at least one has not wrilten an editonal etter while under Ihe inlluence alcohol?(Round t0 four decimal placesneeded:)(c) What Ihe Among 50- below: 55-year-olds, 31% say they have written an editorial letter while under the influence of Icohol: Suppose six 50- 55-year-olds are selected at random: Complete parts (a) through (d) What the probability Ihat all six have written an editorial letter while under the influence of alcoho... ##### Let a, b, €, and d be reul numbers. Prove that if 0 2u < h, then (,b) F (2a, 2b). Prove that il a,b, €, and d are in the closed interval [0, H,a < € < d and b+d>1+a c; ten the closed intervals [&, 6] and [c, d | are not disjoint Let a, b, €, and d be reul numbers. Prove that if 0 2u < h, then (,b) F (2a, 2b). 
Prove that il a,b, €, and d are in the closed interval [0, H,a < € < d and b+d>1+a c; ten the closed intervals [&, 6] and [c, d | are not disjoint...
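The conduction problem (questions 016 and 017 above) is a direct application of heat conduction through a slab, P = kA·ΔT/L, with the Fahrenheit temperature difference converted to Celsius. A quick numeric sketch in Python; the temperatures are garbled in the scan, and 75°F inside / 98°F outside are my assumed readings, so treat the exact figures as illustrative:

```python
# Conductive heat transfer through a slab: P = k * A * dT / L
k = 0.8         # thermal conductivity of glass, J/(s*m*degC)
A = 4007e-4     # glass area: 4007 cm^2 converted to m^2
L = 3.1e-3      # pane thickness: 3.1 mm converted to m

def conduction_kw(t_inside_f, t_outside_f):
    # A Fahrenheit temperature *difference* converts to Celsius by the factor 5/9
    dT = abs(t_inside_f - t_outside_f) * 5 / 9
    return k * A * dT / L / 1000   # W -> kW

p1 = conduction_kw(75, 98)  # part 1: assumed reading of the garbled temperatures
p2 = conduction_kw(75, 0)   # part 2: outside temperature 0 degF
print(round(p1, 2), round(p2, 2))  # about 1.32 kW and about 4.31 kW
```

With different readings of the garbled temperatures, only `dT` changes; the structure of the calculation is the same.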
https://aviation.stackexchange.com/questions/62261/why-is-the-constellations-nose-gear-so-long/62269
# Why is the Constellation's nose gear so long?

The Lockheed Constellation has an enormously long nose gear, which causes the aircraft to slant appreciably backwards when sitting on the ground:

- L-049 (Image by Greg and Cindy at Flickr, modified by Cobatfor at Wikimedia Commons.)
- L-649 (Image by the San Diego Air and Space Museum, via Flickr, via Wikimedia Commons.)
- L-749 (Image by RuthAS at Wikimedia Commons.)
- L-1049 (Image by RuthAS at Wikimedia Commons.)
- L-1649 (Image by Robert Togni at Flickr, via JuergenKlueser at Wikimedia Commons. Note that, due to the gigantic nose gear, the fuselage is approximately level, despite the ground sloping downwards considerably towards the aircraft's nose.)

In contrast, other airliners of the era had a much-less-ridiculous nose gear length, like the DC-7 (Image by Ted Quackenbush at airliners.net, modified by Fæ at Wikimedia Commons.) and the Stratocruiser (Image by Bill Larkins at Flickr, via Wikimedia Commons.)

Why is the Constellation's nose gear so much longer?

• The Connie is one of the most beautiful airplanes ever IMO, saw several when they came in to the EAA one year, really graceful in the air. – GdD Apr 11 '19 at 9:51
• The original L-049 prototype had a much stubbier nosegear but test pilots Eddie Allen and Kelly Johnson quickly discovered that it did not reach the ground. – A. I. Breveleri Apr 11 '19 at 18:45
• From the picture, it seems like the main gear of the Constellation is also quite a bit taller than the others. Maybe the designers were concerned about prop clearance on rough fields? – jamesqf Apr 12 '19 at 16:18

The Connie's fuselage has a subtle S-shaped contour which was intended to conform somewhat to the upwash ahead of the wing and downwash aft of the wing, with a final upturn at the end to place the horizontal tail at the desired vertical location.
They also tapered the fuselage to the smallest cross-sectional area possible at the nose, to part the air gently you might say, so the bottom ends up sloping up toward the nose. Then you have main gear legs that are fairly long because the R3350's propellers are quite large. The wing incidence is set to optimize the fuselage curvature's presentation into the airflow in cruise. At the same time, you want to have the wing chord in a certain desirable AOA range while sitting on the ground, and you want to keep the tail from sitting too high (the Connie has the 3 surfaces to keep the vertical height of the tail low enough to fit the common hangars of the day). Combine all those factors and you end up having to make the strut really long, ending up with the most graceful airliner ever designed.

• I already knew about the streamlining and the tail-height restrictions, but now I see how that necessitates tilting the fuselage back slightly! – Sean Apr 11 '19 at 2:49
• Could you expand on having the wing at a desirable AOA on the ground? – fooot Apr 11 '19 at 14:27
• The wing's angle-of-attack while rolling on all three wheels. You want to be close to zero or minimal lift with the nosewheel down but not have to rotate too far to get AOA for lift off. – John K Apr 12 '19 at 13:43

The Connie and DC-7 have the same engine (Wright R-3350), low-wing mounting, and main landing gear configuration (retraction into the inboard cowls). If you visually remove the nose landing gear (NLG) bay door on the DC-7, it too has a tall NLG. It's just not as tall, because the big difference is the propeller diameters. Lockheed went with three-bladed propellers, compared to the DC-7's four-bladed propellers, resulting in a difference of 5.5 ft (1.7 m) in diameter (19 ft [1] vs 13.5 ft [2] propellers). The Connie also sat with a higher pitch angle, as is evident from the 3-view drawing.
The Stratocruiser on the other hand had a higher wing, and a taller two-level cross section, permitting the short NLG. The above answers the geometric reason. As for the design choice, fewer blades are more efficient, albeit bigger. As for the nose pitch on ground, it could mean the wing is attached at a lower angle of incidence, permitting a more level floor in cruise.

[1] https://www.globalsecurity.org/military/systems/aircraft/l-049-specs.htm
[2] http://www.deltamuseum.org/docs/site/aircraft-pages/dc-7_review_booklet_1954.pdf (page 4; PDF page 6)

• Maybe it's just an optical illusion, but there seems to be a huge amount of ground clearance under the Constellation's propeller, compared to the DC-7. – David Richerby Apr 11 '19 at 12:59
• @DavidRicherby: That's down to the pitch angle on ground. Lower the nose angle (see the 3-view drawing) and the clearance will go down. What is interesting, is if you visually remove the bay door on the DC-7, it too has a tall NLG. – ymb1 Apr 11 '19 at 15:01
• The nose of the DC-7 is also a lot fatter. Give it the same diameter as the Constellation, and you'd wind up with a nose gear almost as long. – jamesqf Apr 12 '19 at 1:45
• @DavidRicherby: Be interesting to know if the DC-7 had issues with propeller debris ingestion at soft or gravel fields... – Sean Aug 20 '19 at 4:54

You can see that the underside of the Connie's fuselage ahead of the wing root is contoured upwards to begin the taper which ends at the tip of the plane's nose. The other planes had constant-section fuselages ahead of the wing root, in which the nose does not begin to taper down until just aft of the cockpit. To maintain the same propeller tip ground clearance, the Lockheed design then required a longer nose gear strut because the attach point for the nose wheel was higher in the air.
(In the case of the Douglas aircraft, maintaining a constant fuselage cross-section forward and aft of the wing reduced tooling costs and enabled fuselage stretches in future revisions of the airframe.)
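The propeller-diameter point made in the answers above reduces to simple arithmetic: with similar engine mounting and the same required tip-to-ground clearance, the extra leg length needed scales with half the difference in propeller diameter. A back-of-envelope sketch; the equal-clearance and equal-mounting assumptions are mine, not from the thread:

```python
# Prop diameters quoted in the answers above (sources [1] and [2]).
connie_prop_ft = 19.0   # Constellation: three-bladed propeller
dc7_prop_ft = 13.5      # DC-7: four-bladed propeller

# If the hub sits at roughly the same height relative to the wing and both
# aircraft need the same tip-to-ground clearance, the Connie's hub must sit
# higher by half the diameter difference -- extra height the gear legs
# (and the nose gear, to hold the chosen deck angle) have to provide.
extra_hub_height_ft = (connie_prop_ft - dc7_prop_ft) / 2
print(extra_hub_height_ft)  # 2.75 ft
```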
https://math.stackexchange.com/questions/75992/is-there-a-formula-for-the-determinant-of-the-wedge-product-of-two-matrices
# Is there a formula for the determinant of the wedge product of two matrices?

I was going over the Wikipedia page for exterior products of vector spaces and we can define the determinant as the coefficient of the exterior product of vectors with respect to the standard basis when the vectors are elements in $\mathbb{R}^n$. I was wondering if there was a way to deduce the formula for the determinant of the exterior (wedge) product of two matrices from this definition.

In particular, let $V$ be a finite-dimensional vector space and let $\wedge^k V$ be the $k$-th exterior power of $V$, that is, $T^k(V)/A^k(V)$, where $A(V)$ is the ideal generated by all $v \otimes v$ for $v \in V$ and $T^k(V) = V \otimes V \otimes \cdots \otimes V$ is the tensor product of $k$ copies of $V$. Let $M$ be a square $m\times m$ matrix. Is there a known formula for $\det(M \wedge M)?$ I was thinking there must be some nice formula like $\det(M \wedge M) = \det(M)\det(M)$, but I have a feeling this does not generalize to higher powers of wedge products.

• What does $\det(M\wedge M)$ stand for? – anon Oct 26, 2011 at 5:19
• Yes, I am a little confused myself now. I am looking at an old qualifying exam problem and wondering if I just did not interpret it correctly. The original problem says to define $D_p$ to be the determinant of the square matrix $\wedge^p M$ and give a formula for $D_p$ in terms of a determinant for $det(X)$. Based on your comment, does it even make sense to define a determinant function for $\wedge^p M$? Oct 26, 2011 at 5:30
• The matrix $M$ is not an element of the vector space $V$. You're actually talking about $\Lambda^k M_{n\times n}$, if I understand the problem here correctly. Since I only know of $\det$ as a function of matrices, could you explain what it is as a function of wedge products of matrices?
– anon Oct 26, 2011 at 5:30 • Another 'basis heavy approach' - base change to $\mathbb{C}$ and choose coordinates in which its upper triangular; if you choose the 'dictionary ordering' for $e_{i_1} \wedge e_{i_2}$, it's again upper triangular Aug 3, 2013 at 17:00 Hint: Let $\{e_1,\ldots, e_n\}$ be a basis of $V$. Then the space $\wedge^p V$ has a basis consisting of vectors of the form $e_{i_1}\wedge e_{i_2}\wedge\cdots\wedge e_{i_p}$ for some strictly increasing sequence $i_1<i_2<\ldots<i_p$ of indices. The linear mapping $\wedge^pM$ maps the vector $e_{i_1}\wedge e_{i_2}\wedge\cdots\wedge e_{i_p}$ to $M(e_{i_1})\wedge M(e_{i_2})\wedge\cdots\wedge M(e_{i_p})$. Compute the determinant of this linear mapping in the following cases: 1. $M$ maps the basis vector $e_{i_0}$ to $\lambda e_{i_0}$ and the other basis vectors $e_i,i\neq i_0,$ to themselves. 2. $M$ interchanges two basis vectors, $e_{i_1}$ and $e_{i_2}$, and maps the other basis vectors $e_i, i\neq i_1, i\neq i_2,$ to themselves. 3. $M$ maps the basis vector $e_{i_0}$ to the vector $e_{i_0}+ae_{i_1}$ for some constant $a$ and $i_1\neq i_0$, and maps the other basis vectors $e_i, i\neq i_0$ to themselves. Then keep in mind (=functoriality) that $\wedge^p(M\circ M')=\wedge^p(M) \circ \wedge^p(M')$ for all linear mappings $M,M'$ from $V$ to itself. As a further hint: This approach is a bit about elementary combinatorics. You have to count the number of changes of a given type, and remember the rule used in forming Pascal's triangle. • Aha! +1, not least for making sense of the question to begin with. – anon Oct 26, 2011 at 7:09 • @anon I think you were the one who figured out what the question was :-) Oct 26, 2011 at 7:14 • I hope that it is clear that the idea is to show that for all three types of elementary matrices $M$ a formula of the type $$\det \wedge^pM=(\det M)^{k(n,p)}$$ holds, where the exponent $k(n,p)$ is the same for all the elementary matrices. Then use the functoriality. 
For the sake of completeness you also need an argument to cater for the singular matrices $M$. Oct 26, 2011 at 14:51

• This was very helpful. Even years after being answered. Thank you. Jul 25, 2019 at 1:02

Here is an argument using SVD (which might be easier than doing the combinatorics suggested by Jyrki Lahtonen). It gets rid of all the scalings, and reduces the problem to determining orientations. (I hope this can be done relatively easily; see comments at the end.)

Put an inner product on $V$ (this induces an inner product on $\bigwedge^k V$, so we have a notion of orthogonal maps $\bigwedge^k V \to \bigwedge^k V$). Then, if $Q \in \text{SO}$, then so is $\bigwedge^k Q$. Indeed $$(\bigwedge^k Q)^T=(\bigwedge^k Q^T)=(\bigwedge^k Q^{-1})=(\bigwedge^k Q)^{-1}.$$ Similarly, we can convince ourselves that $\bigwedge^k Q$ preserves orientation*, so we deduce that $$\bigwedge^k Q \in \text{SO}=\text{SO}(\bigwedge^k V,\bigwedge^k V).$$ Now, given an orientation-preserving map $A:V \to V$, write $A=U\Sigma V^T$, where $U,V \in \text{SO}$. By functoriality we get $$\bigwedge^k A=\bigwedge^k U \circ \bigwedge^k \Sigma \circ \bigwedge^k V^T,$$ so $$\det(\bigwedge^k A)=\det(\bigwedge^k U) \cdot \det(\bigwedge^k \Sigma) \cdot \det( \bigwedge^k V^T)=\det(\bigwedge^k \Sigma).$$ Write $\Sigma v_i=\sigma_i v_i$ ($\sigma_i$ are the singular values of $A$). Then, $$\bigwedge^k \Sigma(v_{i_1} \wedge \dots \wedge v_{i_k})=\prod_{j=1}^k \sigma_{i_j} \, (v_{i_1} \wedge \dots \wedge v_{i_k}).$$ We need to multiply all these factors (over all increasing $k$-tuples $i_1<\dots<i_k$). To find $\det (\bigwedge^k \Sigma)$, note that each $\sigma_i$ appears in exactly $\binom{d-1}{k-1}$ products, since we need to append to it $k-1$ indices out of the $d-1$ left. Hence, $$\det (\bigwedge^k A)=\det (\bigwedge^k \Sigma)=\Big(\prod_{i=1}^d \sigma_{i}\Big)^{\binom{d-1}{k-1}}=(\det \Sigma)^{\binom{d-1}{k-1}}=(\det A)^{\binom{d-1}{k-1}}.$$ This finishes the proof for orientation-preserving maps.
For orientation-reversing maps, we can use SVD in a similar way, taking one of the orthogonal factors to be in $\text{O} \setminus \text{SO}$. Suppose $U \in \text{O} \setminus \text{SO}$. To repeat the argument, We only need to know if $\det(\bigwedge^k U)=1$ or $\det(\bigwedge^k U)=-1$. (We already know that $\bigwedge^k U$ is an isometry, so its determinant is $\pm 1$). In fact we need to show $\det(\bigwedge^k U)=1 \iff \binom{d-1}{k-1}$ is even. To summarize, my proof is missing two components: 1. $Q \in \text{SO} \Rightarrow \bigwedge^k Q \in \text{SO}.$ Equivalently, since we know $\bigwedge^k Q$ is an isometry, it suffices to show that if $A:V \to V$ is orientation-preserving, so is $\bigwedge^k A$. 2. If $A:V \to V$ is orientation-reversing, then $\bigwedge^k A$ is orientation-preserving iff $\binom{d-1}{k-1}$ is even. Perhaps there is an easy way to prove these two claims without too much combinatorics.
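In matrix terms, $\wedge^k M$ is the $k$-th compound matrix of $M$: its entries are the $k\times k$ minors of $M$, indexed by increasing $k$-subsets of rows and columns. The identity derived above, $\det(\wedge^k M)=(\det M)^{\binom{n-1}{k-1}}$ (the Sylvester–Franke theorem), can be spot-checked numerically. A sketch in Python/NumPy; the `compound` helper below is my own construction, not code from the thread:

```python
import itertools
import math

import numpy as np

def compound(M, k):
    """Matrix of the k-th exterior power of M in the lexicographically
    ordered basis {e_I : I an increasing k-subset of {0,...,n-1}}:
    entry (I, J) is the k x k minor of M with rows I and columns J."""
    n = M.shape[0]
    subsets = list(itertools.combinations(range(n), k))
    C = np.empty((len(subsets), len(subsets)))
    for a, I in enumerate(subsets):
        for b, J in enumerate(subsets):
            C[a, b] = np.linalg.det(M[np.ix_(I, J)])
    return C

rng = np.random.default_rng(0)
n, k = 4, 2
M = rng.standard_normal((n, n))
lhs = np.linalg.det(compound(M, k))
rhs = np.linalg.det(M) ** math.comb(n - 1, k - 1)
assert np.isclose(lhs, rhs)  # det of the 2nd compound equals det(M)^C(3,1)
```

This also makes the functoriality concrete: `compound(A @ B, k)` agrees numerically with `compound(A, k) @ compound(B, k)`.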
https://www.gatecseit.in/permutation-combination/
# Permutation and combination questions

## Permutation And Combination

Question 1
A book-shelf can accommodate 6 books from left to right. If 10 identical books in each of the languages A, B, C and D are available, in how many ways can the book-shelf be filled such that books in the same language are not put adjacently?
A (6P4)/2!   B $4\times 3^5$   C (40P6)/6!   D None of the above
Question 1 Explanation: The first place can be filled in 4 ways. Each subsequent place can be filled in 3 ways. Hence, the number of ways = 4 x 3 x 3 x 3 x 3 x 3 = $4\times 3^5$.

Question 2
How many 4-letter words, with or without meaning, can be formed out of the letters of the word 'LOGARITHMS', if repetition of letters is not allowed?
A 5040   B 40   C 2520   D 400
Question 2 Explanation: 'LOGARITHMS' contains 10 different letters. Required number of words = number of arrangements of 10 letters, taking 4 at a time = 10P4 = (10 x 9 x 8 x 7) = 5040.

Question 3
How many 3-digit numbers can be formed from the digits 2, 3, 5, 6, 7 and 9, which are divisible by 5 and none of the digits is repeated?
A 15   B 5   C 10   D 20
Question 3 Explanation: Since each desired number is divisible by 5, we must have 5 in the units place; there is 1 way of doing this. The tens place can then be filled by any of the remaining 5 digits (2, 3, 6, 7, 9), so there are 5 ways of filling the tens place. The hundreds place can then be filled by any of the remaining 4 digits, so there are 4 ways of filling it. ∴ Required number of numbers = (1 x 5 x 4) = 20.

Question 4
How many positive integers 'n' can be formed using the digits 3, 4, 4, 5, 6, 6, 7, if we want 'n' to exceed 6,000,000?
A 360   B 320   C 720   D 540
Question 4 Explanation: As per the given condition, the digit in the highest position should be either 6 or 7, which can be chosen in 2 ways. If the first digit is 6, the other digits can be arranged in 6!/2! = 360 ways. If the first digit is 7, the other digits can be arranged in 6!/(2!$\times$2!) = 180 ways.
Thus the required number of possibilities for n = 360 + 180 = 540.

Question 5
In a hockey championship, there are 153 matches played. Every two teams played one match with each other. The number of teams participating in the championship is:
A 19   B 18   C 16   D 17
Question 5 Explanation: Let there be n teams participating in the games; then the total number of matches is nC2 = 153. Solving gives n = 18 or n = −17. Since n cannot be negative, n = 18 is the answer.

Question 6
There are 6 equally spaced points A, B, C, D, E and F marked on a circle with radius R. How many convex pentagons of distinctly different areas can be drawn using these points as vertices?
A None of these   B 1   C 6P5   D 5
Question 6 Explanation: Since all the points are equally spaced, the areas of all the convex pentagons will be the same.

Question 7
12 chairs are arranged in a row and are numbered 1 to 12. 4 men have to be seated in these chairs so that the chairs numbered 1 to 8 should be occupied and no two men occupy adjacent chairs. Find the number of ways the task can be done.
A 432   B 360   C 384   D 470
Question 7 Explanation: Given there are 12 numbered chairs, such that chairs numbered 1 to 8 should be occupied: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. The various combinations of chairs that ensure that no two men are sitting together are listed. (1, 3, 5, ...): the fourth chair can be 5, 6, 10, 11 or 12, hence 5 ways. (1, 4, 8, ...): the fourth chair can be 6, 10, 11 or 12, hence 4 ways. (1, 5, 8, ...): the fourth chair can be 10, 11 or 12, hence 3 ways. (1, 6, 8, ...): the fourth chair can be 10, 11 or 12, hence 3 ways. (1, 8, 10, 12) is also one of the combinations. Hence, 16 such combinations exist. For each of these combinations the four men can be arranged among themselves in 4! ways. Hence, the required result = 16$\times$4! = 384.

Question 8
In how many different ways can the letters of the word 'LEADING' be arranged in such a way that the vowels always come together?
A 480   B 720   C 360   D 5040
Question 8 Explanation: The word 'LEADING' has 7 different letters. When the vowels EAI are always together, they can be supposed to form one letter. Then, we have to arrange the letters LNDG (EAI). Now, 5 (4 + 1 = 5) letters can be arranged in 5! = 120 ways. The vowels (EAI) can be arranged among themselves in 3! = 6 ways. Therefore, the required number of ways = (120 x 6) = 720.

Question 9
There are five cards lying on the table in one row. Five numbers from among 1 to 100 have to be written on them, one number per card, such that the difference between the numbers on any two adjacent cards is not divisible by 4. The remainder when each of the 5 numbers is divided by 4 is written down on another card (the 6th card) in order. How many sequences can be written down on the 6th card?
A $210 \times 3^3$   B $42 \times 3^3$   C $4 \times 3^4$   D 210
Question 9 Explanation: The remainder on the first card can be 0, 1, 2 or 3, i.e. 4 possibilities. The remainder of the number on each subsequent card, when divided by 4, can take 3 possible values (any value except the one on the previous card). The total number of possible sequences is $4 \times 3^4$.

Question 10
How many words of 4 consonants and 3 vowels can be made from 12 consonants and 4 vowels, if all the letters are different?
A 12C3 $\times$ 4C4   B 16C7 $\times$ 7!   C 12C4 $\times$ 4C3 $\times$ 7!   D 11C4 $\times$ 4C3
Question 10 Explanation: 4 consonants out of 12 can be selected in 12C4 ways. 3 vowels can be selected in 4C3 ways. Therefore, the total number of groups each containing 4 consonants and 3 vowels = 12C4 $\times$ 4C3. Each group contains 7 letters, which can be arranged in 7! ways. Therefore, the required number of words = 12C4 $\times$ 4C3 $\times$ 7!.

There are 10 questions to complete.
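Several of these counts are small enough to check by brute force. An illustrative Python sketch; the question numbering follows the quiz above:

```python
from itertools import permutations

# Question 2: 4-letter words from the 10 distinct letters of 'LOGARITHMS'
assert sum(1 for _ in permutations('LOGARITHMS', 4)) == 5040

# Question 3: 3-digit numbers from 2, 3, 5, 6, 7, 9 with distinct digits,
# divisible by 5
digits = [2, 3, 5, 6, 7, 9]
count = sum(1 for p in permutations(digits, 3)
            if (p[0] * 100 + p[1] * 10 + p[2]) % 5 == 0)
assert count == 20

# Question 4: distinct arrangements of 3, 4, 4, 5, 6, 6, 7 exceeding 6,000,000
# (the set comprehension de-duplicates the repeated 4s and 6s)
nums = {int(''.join(map(str, p))) for p in permutations([3, 4, 4, 5, 6, 6, 7])}
assert sum(1 for n in nums if n > 6_000_000) == 540
```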
https://www.timlrx.com/tags/r/page/2/
## Mapping SG - Shiny App

While my previous posts on the Singapore census data focused mainly on the distribution of religious beliefs, there are many interesting trends that could be observed in other characteristics. I decided to pool the data which I have cleaned and processed into a Shiny app. Took a little longer than I expected but it is done. Have fun with it and hope you learn a little bit more about Singapore! [Read More]

## Using Leaflet in R - Tutorial

Here's a tutorial on using Leaflet in R. While the leaflet package supports many options, the documentation is not the clearest and I had to do a bit of googling to customise the plot to my liking. This walkthrough documents the key features of the package which I find useful in generating choropleth overlays. Compared to the simple tmap approach documented in the previous post, creating a visualisation using leaflet gives more control over the final outcome. [Read More]

## Examining the Changes in Religious Beliefs - Part 2

In a previous post, I took a look at the distribution of religious beliefs in Singapore. Having compiled additional characteristics across 3 time periods (2000, 2010, 2015), I decided to write a follow-up post to examine the changes across time. The dataset that I will be using is aggregated from the 2000 and 2010 Census as well as the 2015 General Household Survey. [Read More]

## Mapping the Distribution of Religious Beliefs in Singapore

Inspired by my thesis, I have been playing around with mapping tools over the past few days. While the maps showing the distribution of migrant groups across the United States did not make it to the final copy of my paper, I had fun toying around with the various mapping packages. In this post, I decided to apply what I have learnt and take a look at the spatial distribution of Singapore's population. [Read More]

## Thesis Thursday 7 - Conclusion

Finally, the last installment of the Thesis Thursday series!
Rather than going through what I have done since the previous post (basically more refinements and robustness checks), I decided to share some miscellaneous thoughts and lessons learnt over the past few months. The completed research paper and accompanying slides can be downloaded from my website. ### On R and Stata I decided to code the entire project in R this time round and I have to say that I am quite won over by the capabilities of the various packages. [Read More] ## Update on the SG Economic Dashboard I have updated the SG-Dashboard with 2Q 2017 numbers. I also took the opportunity to add in a few new tables and charts. There is a new table that keeps track of value-added (VA) revisions of last quarter’s results. VA for certain industries such as construction is approximated based on early indicators, and the actual numbers take a quarter or more to stream in. It is also interesting to see the actual economic performance and whether it matches up to the narrative of last quarter’s release. [Read More] ## Thesis Thursday 5 - From recipes to weights In the previous post, I provided an exploratory analysis of the allrecipes dataset. This post is a continuation and details the construction of product weights from the recipe corpus. TF-IDF: To obtain a measure of how unique a particular word is to a given recipe category, I calculate each word-region score using the TF-IDF approach, which is given by the following formula: $TF\text{-}IDF_{t,d} =\frac{f_{t,d}}{\sum_{t'\in d}f_{t',d}} \cdot \log \frac{N}{n_{t}+1}$ where $$f_{t,d}$$ is the frequency with which a term, $$t$$, appears in document $$d$$, $$N$$ is the total number of documents in the corpus and $$n_{t}$$ is the total number of documents where term $$t$$ is found. [Read More] ## Thesis Thursday 4 - Analysing Recipes One of the main components of my thesis is a mapping from consumers’ purchases to country-related expenditure shares. This requires a method to associate each available product to a particular country.
I have briefly discussed the issue in the introductory post but have made significant progress on this front that I think is worth sharing. The recipe dataset: This recipe dataset was created by scraping recipes from allrecipes.com that are tagged to a particular region or country. [Read More] ## Binscatter for R I was trying to find an R package that provides features similar to Stata’s binscatter user-written program, but there do not appear to be any good substitutes around. Hence, I decided to write a function that replicates it in R. Turns out it actually took longer than I thought and there are still many bugs to fix, but the development version is worth sharing. It can be downloaded from my Github page. [Read More] ## Scraping SG's GDP data using SingStat's API I have been trying to catch up on the latest release of Singapore’s economic results. Unfortunately, the official press release and media reports are not very useful. They either contain too much irrelevant information or not enough details for my liking. Maybe I just like looking at the numbers and letting the figures speak for themselves. Hence, I decided to obtain the data from the official SingStat Table Builder website. [Read More]
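The TF-IDF score defined in the Thesis Thursday 5 summary above is straightforward to implement. The blog's own work is in R, but as a minimal sketch of the same formula (toy documents are hypothetical, standard library only), in Python:

```python
import math
from collections import Counter

def tf_idf_scores(documents):
    """TF-IDF_{t,d} = (f_{t,d} / sum_{t'} f_{t',d}) * log(N / (n_t + 1)),
    where N is the number of documents and n_t counts documents containing t.
    Each document is a list of tokens; returns one {term: score} dict per doc."""
    N = len(documents)
    doc_freq = Counter(term for doc in documents for term in set(doc))  # n_t
    scores = []
    for doc in documents:
        counts = Counter(doc)
        total = sum(counts.values())  # sum of term frequencies within d
        scores.append({t: (f / total) * math.log(N / (doc_freq[t] + 1))
                       for t, f in counts.items()})
    return scores

# Toy "recipe" documents (illustrative only)
docs = [["soy", "ginger", "rice"], ["basil", "tomato", "rice"], ["soy", "tofu"]]
scores = tf_idf_scores(docs)
print(scores[2]["tofu"])  # "tofu" appears in only one document, so it scores highest
```

Note that with the +1 in the denominator, as in the post's formula, a term appearing in every document gets a zero or slightly negative score, which is exactly the damping of region-generic words the post relies on.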
2020-09-18 19:55:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2029203325510025, "perplexity": 1004.1276514982767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188841.7/warc/CC-MAIN-20200918190514-20200918220514-00229.warc.gz"}
https://brilliant.org/problems/angle-bisector-theorem-4/
# Angle Bisector Theorem? Geometry Level 4 In the triangle $$ABC$$, the bisector of $$\angle A$$ intersects the bisector of $$\angle B$$ at the point $$I$$. $$D$$ is the foot of the perpendicular from $$I$$ onto $$BC$$. Let the bisector of $$\angle BIC$$ intersect $$BC$$ at $$H$$ and the bisector of $$\angle AID$$ intersect $$AB$$ at $$J$$. Find $$\angle JIH$$ in degrees.
2018-06-22 21:04:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24236181378364563, "perplexity": 75.45121980371735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864795.68/warc/CC-MAIN-20180622201448-20180622221448-00617.warc.gz"}
https://byjus.com/question-answer/a-cricketer-can-throw-a-ball-to-a-maximum-horizontal-distance-of-100-m/
Question # A cricketer can throw a ball to a maximum horizontal distance of $100\text{ m}$. How high above the ground can the cricketer throw the same ball? Solution ## Step 1: Given The maximum horizontal distance the cricketer can throw the ball, $R = 100\text{ m}$. Step 2: Formulas used For projectile motion, the range (horizontal distance covered) is given by $R = \frac{V^2 \sin(2\theta)}{g}$, where $V$ is the initial velocity, $\theta$ is the angle at which the object is thrown and $g$ is the acceleration due to gravity. The maximum height reached is given by $H = \frac{V^2 \sin^2(\theta)}{2g}$, with the symbols as above. Step 3: Calculating the height The maximum horizontal distance is covered when $\theta = 45° = \frac{\pi}{4}$. Substituting into the range formula, $100 = \frac{V^2 \sin(\pi/2)}{g}$, and since $\sin(\pi/2) = 1$, this gives $\frac{V^2}{g} = 100 \; \dots (1)$. Substituting (1) into the height formula, $H = \frac{100 \sin^2(\pi/4)}{2} = 50\left(\frac{1}{\sqrt{2}}\right)^2 = 25\text{ m}$, since $\sin(\pi/4) = \frac{1}{\sqrt{2}}$. Therefore, the maximum height the cricketer can throw the ball is $25\text{ m}$.
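As a quick numerical check of the steps above (plain Python; the simplification $V^2/g = R$ follows from the range formula at $\theta = 45°$):

```python
import math

def height_at_max_range_angle(R):
    """Height reached during the maximum-range (45 degree) throw.

    From R = V^2 sin(2*theta)/g with theta = 45 deg, sin(90 deg) = 1 gives
    V^2/g = R, so H = (V^2/g) * sin^2(45 deg) / 2 = R / 4.
    """
    v2_over_g = R  # since sin(90 deg) = 1
    return v2_over_g * math.sin(math.radians(45)) ** 2 / 2

print(height_at_max_range_angle(100))  # ~25.0 m
```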
2023-02-03 00:07:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8921051621437073, "perplexity": 836.5392309140054}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.2/warc/CC-MAIN-20230202232251-20230203022251-00424.warc.gz"}
https://ltwork.net/the-interior-plains-of-the-united-states-include-the-and--7787949
# The interior plains of the United States include the … and the … (15 points) ###### Question: The interior plains of the United States include the … and the … (15 points)
2022-12-09 06:55:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2605121433734894, "perplexity": 2496.6402986994713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711390.55/warc/CC-MAIN-20221209043931-20221209073931-00386.warc.gz"}
https://www.intechopen.com/chapters/53126
Open access peer-reviewed chapter # Traditional Wooden Buildings in China By Ze-li Que, Zhe-rui Li, Xiao-lan Zhang, Zi-ye Yuan and Biao Pan. Submitted: May 17th 2016. Reviewed: October 4th 2016. Published: March 1st 2017. DOI: 10.5772/66145 ## Abstract Chinese ancient architecture, with its long history, unique systematic features, wide-spread employment and abundant heritage, is a valuable legacy of the whole world. Owing to the particular materials and structure of Chinese ancient architecture, related research results are mostly published in Chinese, which limits international communication. Drawing on studies carried out at Nanjing Forestry University and at many other universities and research teams, this chapter introduces the development, structural evolution and preservation of traditional Chinese wooden structures; the review of the research status focuses on the material properties, decay patterns, anti-seismic performance and corresponding conservation and reinforcement technologies of the main load-bearing members of traditional Chinese wooden structures. ### Keywords • materials and properties • anti-seismic performance • reinforcement techniques ## 1. Introduction Being one of the world’s three major architecture systems, Chinese ancient architecture plays an important role in the global history of architecture. With its long history, unique systematic features, wide-spread employment and abundant heritage, Chinese ancient architecture keeps growing and developing. Evolving from a system using earth and wood to one using bricks and wood, it held on to its tradition of taking the wooden structure as the main structure and carpentry as the main technology. After over 2000 years of progression and evolution, it has formed a complete system of structure and construction, which includes regulations and standards inherited both from the Song Dynasty (1103 AD) and from the Qing Dynasty (1734 AD).
Compared to Western ancient buildings constructed with stone, brick and natural concrete, Chinese traditional wooden buildings lack durability and need frequent maintenance and renovation, and the properties of the wood in use strongly influence the joints and the performance of the whole structure. However, under the influence of traditional Chinese philosophy, buildings were long regarded as an exhibition of social status, and the materials and structures involved were not taken seriously as a technology. Research of significance to the modern world in the field of Chinese traditional wooden buildings started in the 1920s and 1930s. The historic and artistic aspects of the architecture attracted the most attention and were often selected as the main research directions over a long period of time. Up to now, only a limited number of fundamental studies on the structural behaviour of Chinese traditional timber structures and their typical joint connections can be found; hence there is an urgent need to study and evaluate the seismic performance and structural behaviour of the existing historical timber buildings so as to prevent as much earthquake-inflicted damage as possible in the future. Taking the Dou-gong brackets and mortise-and-tenon joints of Chinese traditional timber structures as its objects, the ongoing research project of our team covers the structural performance and anti-seismic mechanisms of different joint connections between columns and beams, reinforcement technologies for the weak parts, and the utilization and analysis of modern engineered wood products as alternative materials in the repair and new construction of Chinese traditional timber buildings. Research on the material performance and structural behaviour of Chinese traditional wooden buildings is often based on specific emergency repair and strengthening projects of historical buildings, which somewhat limits the systematic scope and generality of the results.
On the other hand, for reasons of cultural awareness and the distinctive characteristics of the oriental structural system, the results of relevant studies tend to be published domestically, which also increases the difficulty of international academic exchange and interaction. In consequence, the intention of this chapter is to collect and systematically introduce the relevant research status as well as the phased achievements of my team. The publication of this book will hopefully encourage the study of traditional wooden structures and worldwide academic exchange and cooperation. ## 2. The structure and preservation of traditional Chinese wooden architecture Represented by traditional Chinese wooden architecture, the oriental wooden structure stands out in the architecture world, and after a long course of development and accretion, it has reached a high standard both theoretically and practically. Take the example of the Yingxian Wooden Pagoda, the tallest existing wooden tower in the world. Besides being 67.31-m high, it has also survived several major earthquakes and therefore embodies the perfect combination of technique and aesthetics in wooden structures as well as the intelligence of ancient Chinese people. Consulting two significant building standards from the Song Dynasty and the Qing Dynasty, this chapter introduces the development and structural evolution of traditional Chinese wooden structures, focusing on three classic structural types and on well-known wooden structures such as the Yingxian Wooden Pagoda, and presents the condition of the study and preservation of historic buildings in modern China. ### 2.1. A brief guide to the evolution of traditional oriental wooden structures Due to different cultural backgrounds, ancient architecture once comprised seven independent systems, of which some are extinct or were never widely spread and thus had limited achievements and influence.
That left Chinese architecture, European architecture and Islamic architecture to be considered the world’s three main architectural systems. And among them, Chinese architecture and European architecture are the most long-lasting, widely spread and successful ones. Ancient Chinese architecture passed through primitive society, slave society and feudal society, of which the last was the period in which Chinese classic architecture developed the most. 1. Primitive society (7000 years ago to twenty-first century BC). Building types varied with climate, geographical features and materials. Among them, there are two typical types: the wooden-frame and mud-wall buildings that emerged from cave houses in the Yellow River basin, and the Ganlan-style buildings (wooden buildings built on stilts) that developed from nest houses in the Yangtze River basin. In the late stage of primitive society, building sites already showed traces of privatization, and the walls and roofs of buildings were mostly interwoven branches or twigs with a mud coating (see Ref. [1]). 2. Slave society (2070–476 BC). In the twenty-first century BC, wooden-frame and rammed-earth construction and regular enclosed courtyard building groups came along, which showed great improvement in timber-frame technology. The sixteenth century BC was the prime time for the development of the Chinese slave society and the time when documentary records began. Judging from the size of the rammed-earth foundations of the palaces and temples, buildings of this period had a larger scale and a stricter hierarchy: the scale of cities, the height of city walls, the width of streets and other buildings of significance were all required to conform to rank. In the Spring and Autumn period (770–476 BC), the popularization of tiles and the appearance of high-platform buildings for imperial and ducal palaces were the most important improvements.
High-platform building means building a platform of tamped earth underneath the palace. As the feudal lords sought ever more magnificent palaces, the decoration and painting of ancient architecture were taken a step further (see Ref. [1]). 3. Feudal society (475 BC to 1911 AD). With the collapse of slavery, agriculture and handicrafts grew rapidly, and the utilization of ironware accelerated the improvement of structural technology and the construction quality of wooden structures. Fireplaces, heated brick beds and cellars can be seen in this period. The Han Dynasty was a thriving time for classic Chinese architecture, when the now commonly seen beam-lifted and through-type wooden frames were formed. And at the same time, the traditional roof of Chinese buildings also flourished. Since then, the introduction of Buddhism greatly boosted the development of Buddhist architecture, one of the most important types of classic Chinese architecture. The Tang Dynasty was the time when the techniques and artistic qualities of classic architecture developed the fastest. Tang-style architecture demonstrates the utmost in size and regulation, the utmost in architectural complex layout, and features of large expansion and large volume. And the construction forms and material requirements of wooden structures, especially Dou-gong brackets, were standardized. Tang-style architecture also exerted a far-reaching influence on countries such as Japan. Later, in the Song Dynasty, a modular system was adopted and the book Building Standards (Yingzao Fashi) was officially published, which set standard rules for building measurements and basic moduli so that the sizes of wooden components could be properly defined. In the late stage of feudal society, building forms became more and more simplified and the integrity of the beam-column frame was enhanced. The buildings presented a serious and rigorous image with more elaborate decoration and painting.
In the Qing Dynasty, ethnic diversity contributed to the blossoming of various residential building types. The monomer building forms of official architecture were fixed, which raised the standard of architectural complex design. The promulgated book Construction Practices enumerated 27 practices for monomer buildings and formulated new construction moduli, which did much to accelerate the design and construction process and to control material consumption (see Ref. [2]). ### 2.2. The structural system and characteristics of traditional Chinese wooden structure Based on different construction frames and geographic features, the traditional Chinese wooden frame system can be divided into three types: the through-type frame, the beam-lifted frame and the log-cabin-type frame (see Refs. [3, 4]), as seen in Figure 1. 1. Through-type frame. The through-type frame is built up from separate, vertically connected column-and-tie frames and is mostly used in rural housing. There is no reference to this type in the official building standards. The common practice of this type is to connect the columns with square crossbeams along the length of the house, forming a truss, and then to use square crossbeams to connect every two trusses, forming the frame of the house. The characteristics of this type include: using materials with small cross sections that are easy to obtain; using multiple square crossbeams along the length of the house that can be assembled beforehand, enhancing the integrity and stability of the structure and rendering the installation of walls convenient; and saving manpower and materials through its simple practice, direct force transmission and ever-evolving, adaptable nature. 2. Beam-lifted frame. This frame type, formed in the Spring and Autumn period, kept evolving and eventually became a settled practice.
This frame type varies in material size and frame combination according to social rank, which was strictly set in regulations such as Building Standards in the Song Dynasty and Construction Practices in the Qing Dynasty. The beam-lifted frame is usually composed of the frame layer, the Dou-gong bracket layer and the roof layer. Typically, it is constructed by placing a beam on top of the columns, then standing shorter columns on that beam to hold a shorter beam, and so forth; eventually, the short column on the shortest beam carries the weight of the purlin. This type was widely used in large-scale buildings such as palaces and temples in northern China. The characteristics of the beam-lifted frame are long distances between columns along the length of the house, the enclosure of larger interior spaces and aesthetically pleasing structural features. 3. Log-cabin type. The log-cabin type is an ancient structural type that dates back to primitive society. In China, it was found to be used in building the outer coffins in Shang Dynasty tombs from over 3000 years ago, and in the carved patterns on Han Dynasty relics found in Yunnan province in south-western China. It is referred to as ‘Mukeden’ in north-eastern China, meaning to pile up hewn logs (often cut into semi-cylinders) to build houses. This type of structure is often seen in areas such as Inner Mongolia, the forests of north-eastern China and the mountain areas of Sichuan and Yunnan provinces in south-western China. Its characteristics are as follows: it can regulate the room temperature to fit the fickle climate in mountain areas and can withstand earthquakes to some extent; it requires only simple materials and minimum manpower but possesses great diversity and mobility; however, a great amount of wood is required to build this type of house and the size and location of doors and windows are greatly limited, so it is not as widely spread as the other two types. ### 2.3. 
The preservation and research status of two typical remaining historic wooden buildings in China (1) Yingxian Wooden Pagoda. The Yingxian Wooden Pagoda, originally known as the Yingxian Wooden Pagoda of Fogong Temple, was built in 1056 AD, in the Liao Dynasty, and is the largest and oldest high-rise wooden building in existence in the world (as seen in Figure 2). It is a 67.31-m tall pagoda of the multi-storied pavilion type, with an octagonal cross section and nine storeys that appear as five. It has a diameter of 30.27 m, weighs 7400 tons and altogether consumed 3700 m³ of timber. With 54 types of Dou-gong brackets of different functions, shapes and sizes installed, it is often referred to as a museum of Dou-gong brackets. However, as a consequence of multiple earthquakes over the past thousand years, wars and improper repairs in modern times, the pagoda suffers from problems such as a severe tilt of the main body and twisting of the column frames of the second and third floors. Based on observation data from 2010, the overall slope was 1.25% and growing, with the second floor alone accounting for 60–70% of the slope. Starting in 1933, Liang et al. conducted detailed research on and measurement of the Yingxian Wooden Pagoda. In 1966, the book Yingxian Wooden Pagoda was published, and in 1973 (see Ref. [5]), architectural experts such as Yang Tingbao began their 10 years of restoration of this architectural treasure after discussing the issue of its partial tilt and setting basic rules and solutions regarding the repair and reinforcement of the pagoda. The Committee of Yingxian Wooden Pagoda Restoration and Preservation Construction Management was founded in the 1990s and, after an early-stage study, began monitoring the structural soundness of the pagoda in 2008, continuing to this day. Since the 1990s, many scholars and their teams have studied the structural state (e.g. Ref. [6]), damage dispersion, seismic reaction analysis (e.g. Ref.
[7]) and material deformation (e.g. Ref. [8]) under external forces. Refined finite element (FE) models have been established based on the Dou-gong bracket joints and on the whole structural system, respectively, and load-bearing analyses have been conducted under lateral load (e.g. Ref. [9]). An ideal restoration model of the pagoda has been established through computer-aided design (CAD) drawings and three-dimensional (3D) models (see Ref. [10]). Yet, there are still issues to address in terms of repair and preservation. In recent years, scholars have come up with plans such as a major repair of the framework, total support of the pagoda and raised support of the upper section. But because of the significance, structural complexity and uniqueness of the pagoda, the present plan is to reinforce and repair the tilted parts and damaged components on the second and third floors (see Ref. [11]). (2) East Palace of Foguang Temple. The palace is located in Wutai county, Shanxi province, in northern China and was originally built in the Northern Wei Dynasty (386–534 AD, one of the Northern Dynasties), as seen in Figure 3. With its remaining main hall rebuilt in 857 AD, in the Tang Dynasty, the palace is one of the oldest remaining Tang Dynasty wooden structures and is acclaimed as ‘the primary national treasure of China’. Seven bays wide and four bays deep, its roof, column frame and Dou-gong brackets all belong to the top rank and exhibit the classic structural features of the Tang Dynasty. The Dou-gong components have a cross-sectional size of 210 × 300 mm, 10 times the size of the same type of components in the Qing Dynasty. The eaves project 3.69 m, and the triangular Y-shaped support system in the beam frame is the first of its kind in China. In the palace, 61 m² of Tang Dynasty wall paintings are preserved, along with other treasures such as Tang Dynasty inscriptions and painted sculptures.
As a classic example of the structural frame and construction technology of Tang Dynasty architecture, the East Palace of Foguang Temple has been studied along multiple dimensions, including its spatial form, structural bearing capacity, anti-seismic reinforcement and artistic characteristics (e.g. [12, 13]). As to the protection of the palace, the surrounding residents were moved out in 1954 and repair and reinforcement began. In 1985, the local government built dams around it to protect it from mountain torrents, and added stone walls, flashing and gutters to reduce humidity. ## 3. Properties study on traditional Chinese wooden structure ### 3.1. Study of physical properties of wood materials from historic buildings In order to protect the wooden buildings with hundreds of years of history across China, a research team was formed to carry out field studies on the structural materials of historic buildings in 11 provinces, municipalities and autonomous regions. Experiments were carried out on worn components after their replacement, and conclusions were drawn concerning the species and physical properties. Furthermore, a national mandatory standard, Technical code for maintenance and strengthening of ancient timber buildings (see Ref. [14]), was established. #### 3.1.1. Research on the species of wood used in historic buildings According to the field research and some microstructural identification, the main components in historic buildings, such as beams, fang (square-section tie beams), columns, purlins and rafters, are mainly made of nanmu (Phoebe zhennan S. Lee), cypress (Cupressus funebris Endl.), China fir (Cunninghamia lanceolata (Lamb.) Hook.) and Masson pine (Pinus massoniana Lamb.) in southern China. In northern China, Chinese pine (P. tabulaeformis Carr.), larch (Larix gmelinii (Rupr.) Kuzen.) and Armand pine (P. armandii Franch.) are widely used. And in common housing, poplar (Populus L.) and elm (Ulmus pumila L.)
are used, while large-scale historic buildings of great importance also involve wood such as nanmu brought from the south, which indicates that these buildings were mostly constructed with local materials, except for important ones built to higher standards. Buildings in Sichuan province in southwest China and Hubei province in central-southern China are constructed of nanmu and cypress, for which there was a large reserve at the time; nowadays, however, the value of this wood rises as the reserve shrinks.

#### 3.1.2. Studies on the physical properties of the components in historic buildings

The first concern of everyone working in the field of ancient architecture is how the properties of load-bearing components change over time, yet studies on this topic have been hard to find. The leading difficulty is that the environmental conditions surrounding the components play an important role in how their properties change, and different species react very differently to those conditions. In addition, it is difficult to manufacture viable control specimens from modern materials because of the large natural variation of wood. In 1977, Chen G.Y. carried out physical property experiments on a worn component from the Yingxian Wooden Pagoda, see Ref. [15]. It was a column on the horizontal slot of the two-raftered roof beam on the second floor, 900 years old according to 14C dating. The column was 2.7 m high, 33 × 23 cm in section size and made of north China larch (L. principis-rupprechtii Mayr). Being hidden inside the pagoda, the column was spared erosion by wind and rain and thus showed no obvious erosional furrows or darkening, but demonstrated some splitting (it had been hit by artillery shells). The experiment results are given in Table 1. In 1982, Chen experimented on the middle column from the Jing Qing Gate in Jinci Temple, see Ref. [15].
The column was about 600 years old, 6 m high, 40 × 40 cm in section size, and made of poplar (Populus L.). The column had not been eroded by rain but showed darkening and varying degrees of splitting. It showed traces of weathering and had been eroded into powder up to approximately 1 m above the root, rendering the root conical while leaving the upper half relatively intact. The results are given in Table 2.

| Parameters | Old wood in Yingxian Wooden Pagoda | New wood | Old/new (%) |
| --- | --- | --- | --- |
| Compressive strength parallel to the grain (kgf/cm2) | 467.7 | 576 | 81 |
| Chordwise compressive strength perpendicular to the grain (kgf/cm2) | 15.8 | 84 | 19 |
| Radial compressive strength perpendicular to the grain (kgf/cm2) | 20.6 | 46 | 45 |
| Tensile strength parallel to the grain (kgf/cm2) | 651.7 | 1299 | 50 |
| Bending strength perpendicular to the grain (kgf/cm2) | 964.7 | 1133 | 85 |
| Chordwise shear strength parallel to the grain (kgf/cm2) | 96.2 | 68 | 110 |
| Radial shear strength parallel to the grain (kgf/cm2) | 89.1 | 85 | 105 |
| Chordwise impact hardness (kgf) | 127.7 | 425.7 | 30 |
| End-face impact hardness (kgf/cm2) | 433 | 377 | 115 |

### Table 1. Comparison between old wood in Yingxian Wooden Pagoda and new wood in Ref. [15].

| Parameters | Old wood in Jingqing Gate | New wood | Old/new (%) |
| --- | --- | --- | --- |
| Compressive strength parallel to the grain (kgf/cm2) | 539 | 427 | 126 |
| Chordwise compressive strength perpendicular to the grain (kgf/cm2) | 42.6 | 49 | 87 |
| Radial compressive strength perpendicular to the grain (kgf/cm2) | 57.3 | 65 | 88 |
| Tensile strength parallel to the grain (kgf/cm2) | 450 | 1070 | 42 |
| Bending strength perpendicular to the grain (kgf/cm2) | 267 | 796 | 34 |
| Chordwise shear strength parallel to the grain (kgf/cm2) | 108 | 73 | 148 |
| Radial shear strength parallel to the grain (kgf/cm2) | 105 | 95 | 110 |
| Chordwise splitting strength (kgf/cm2) | 13.6 | 15.8 | 86 |
| Chord plane hardness (kgf) | 372 | 242 | 154 |
| End-face impact hardness (kgf) | 509 | 306 | 166 |

### Table 2. Comparison between old wood in Jingqing Gate and new wood in Ref. [15].
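The Old/new (%) columns in Tables 1 and 2 are simple retention ratios: the old-wood value expressed as a percentage of the new-wood value. As a quick check, the short Python sketch below recomputes a few rows of Table 1 (row labels abbreviated here for brevity):

```python
# Retention of a property in old wood, as a percentage of the new-wood value,
# matching the Old/new (%) columns of Tables 1 and 2.
def retention(old, new):
    return 100.0 * old / new

# A few rows from Table 1 (Yingxian Wooden Pagoda larch, kgf/cm2):
rows = {
    "compression parallel to grain": (467.7, 576),    # table lists 81%
    "tensile parallel to grain":     (651.7, 1299),   # table lists 50%
    "chordwise impact hardness":     (127.7, 425.7),  # table lists 30%
}
for name, (old, new) in rows.items():
    print(f"{name}: {retention(old, new):.0f}%")
```

The recomputed percentages agree with the tabulated ratios to the nearest whole per cent.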
Both experiments showed that after 600–900 years of load bearing, the tensile strength parallel to the grain and the compressive strength perpendicular to the grain were weakened the most: the former by 50% and the latter by up to 80% in the larch and poplar specimens, respectively. At the same time, the stiffness and shear strength were enhanced: the former by 11–16% and the latter by 15%. This indicates that old wood has a denser cell structure and therefore a higher stiffness than new material, while the ageing of its internal structure caused varying degrees of degeneration of the other physical properties. Properties relying on latewood resistance, such as compressive strength parallel to the grain and bending strength, degenerated less heavily and maintained good uniformity, while properties relying on earlywood resistance, such as tensile strength parallel to the grain, degenerated more but also maintained good uniformity. On the other hand, properties relying on both latewood and earlywood, such as splitting strength and impact hardness, had much poorer uniformity. This shows that time takes a great toll on the physical properties of wood. In 1994, Ni et al. ran a chemical component analysis on columns replaced during the renovation of the main hall of Bei Yue Temple and the Da Bei Lou building in Chang Ling, Hebei province, in northern China, see Ref. [16]. The two columns were, respectively, 900 and 200 years old and made of Chinese spruce (Picea asperata) and cypress (C. funebris Endl.). Samples were taken from the intact part of each column. Results are given in Table 3.
| Component | Spruce, old | Spruce, new | Old/new (%) | Cypress, old | Cypress, new | Old/new (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Moisture content (%) | 6.65 | 6.02 | 110.4 | 6.19 | 7.57 | 81.8 |
| Ash content (%) | 0.42 | 0.78 | 53.8 | 0.58 | 0.41 | 141.4 |
| Cold water extract (%) | 5.53 | 1.42 | 389.4 | 6.69 | 3.42 | 195.6 |
| Hot water extract (%) | 7.27 | 2.68 | 271.2 | 7.98 | 4.56 | 175.0 |
| Phenethyl alcohol extract (%) | 6.60 | 1.63 | 404.9 | 6.35 | 6.90 | 92.0 |
| 1% NaOH extract (%) | 25.1 | 12.4 | 202.4 | 23.5 | 17.1 | 137.4 |
| Pentosane (%) | 11.5 | 11.6 | 99.1 | 16.6 | 10.7 | 155.1 |
| Lignin (%) | 30.0 | 28.4 | 106 | 33.1 | 32.4 | 102.1 |
| Holocellulose (%) | 58.6 | 66.2 | 88.5 | 56.6 | 64.9 | 87.2 |
| α-Cellulose (%) | 36.2 | 41.5 | 87.2 | 34.9 | 39.1 | 89.2 |

### Table 3. Comparison of chemical composition between old wood and new wood in Ref. [16].

It can be inferred from the data that the various extract contents of old wood increased to different degrees while the holocellulose and α-cellulose contents decreased, which shows that the main components of the cell walls in old wood had degraded and had a looser structure than in newly lumbered wood. Cellulose is the main source of high tensile strength parallel to the grain, while hemicellulose and lignin give the material its elasticity and compressive strength, so the decrease of these three components microscopically explains the macrolevel degeneration of mechanical properties.

### 3.2. Study on the decay pattern of physical properties, residual strength and longevity of wood material

Because ancient buildings must be preserved, the old materials available for study are mostly small components replaced during renovation, which severely restricts the study of the strength of wooden structures. The fact that strength degrades differently under different load conditions makes it even more difficult to study the decay of the physical properties of old wooden structures. In 2006, Liu et al. studied the relation between chemical components, bending strength and degree of decay in old materials from the Wu Ying Palace in the Forbidden City, see Ref. [17]. The experiment samples were taken from a beam and made of larch.
The decay degrees were determined according to GB/T 13942.2-92, see Ref. [18]. The relationship between decay degree and the contents of cellulose and 1% NaOH extract is shown in Figure 4. Because of the limited amount of old wood, chemical component analysis was also carried out on healthy wood with bending strengths of 90, 100 and 110 MPa. The relation between bending strength and the contents of cellulose and 1% NaOH extract is shown in Figure 5. The results show that as the decay degree increases, the 1% NaOH extract content rises markedly. A positive proportional relation can also be observed between the 1% NaOH extract content and bending strength. The study showed that the alkali extract can be used not only to determine the preliminary decay degree but also to estimate the physical properties of visually healthy materials. To address the problem that old materials are rare in physical property experiments on ancient buildings and that the qualities of new materials differ from those of old ones, Xu et al. proposed accelerating the decay process by inoculation with fungus, see Ref. [19]. The process is to infect the wood with a single fungus under a suitable environment to accelerate decay. This study provides the physical properties and decay patterns of wood at different decay degrees and offers a new way of thinking about the quantification of wood decay degree.

### 3.3. Study on the application of modern engineering materials in traditional wooden structures

In modern China, the construction of new palaces and temples involves the reconstruction of pre-existing historic buildings or reference to classic elements, which requires similar construction methods as well as high-quality materials.
Among the new engineering wood materials, glued-laminated timber (glulam) has many advantages, such as a natural wood texture, good corrosion resistance, high material utilization and stable physical properties. Glulam also has great plasticity and a special expressive ability that can rival steel structures. Popular both worldwide and in China, glulam has been applied to the construction of traditional structures, see Ref. [20]. Xiangji Temple is a historically famous temple in Hangzhou, Zhejiang province, in south-eastern China. First built in 1016 AD, the original building was destroyed in a fire and rebuilt in 2010. The major structures of the temple's bell tower, drum tower and Kinnara hall were built in steel, while the monastery, the guest house and the dormitories are log structures and the rest are glulam structures. Glulam combined with a traditional roof made a column-free hall possible. With traditional multiple overhanging eaves, the main hall demonstrates splendid momentum as well as openness and brightness, as seen in Figure 6(a). In the White Lotus Lore Temple in Shanghai, the Buddha hall has a glulam body. Different from traditional temples, this temple represents its own era (as seen in Figure 6(b)). Built up high, with large overhanging eaves and a mild-slope roof, it was built entirely according to the proportions of Tang Dynasty buildings. It is made completely of glulam and structured with a space grid to create a column-free indoor space. The Yu Xi Temple Tower on Chao Mountain in Hangzhou, Zhejiang province, in south-eastern China, is a pavilion-style tower with an octagonal section and a central pillar. The tower has five floors, with four additional floors hidden inside. It was made of glulam, and the hidden floors serve a structural reinforcing purpose, just like those in the Yingxian Wooden Pagoda.
The Laojun tower on Qingcheng Mountain, Sichuan province, in south-western China, is another wooden building built on top of a mountain. After reconstruction, it is 28.05 m high, with a reinforced concrete base. The first and second floors are made of concrete and glulam, and the third through ninth floors are made of glulam (Douglas fir). Thanks to the variety of materials and connection types, traditional wooden structures are more frequently combined with other systems in modern construction. On the one hand, wood is combined with glass, concrete and steel, resulting in more flexibility in wooden structure design. On the other hand, wooden structures have borrowed steel systems, such as the grid structure in the White Lotus Lore Temple and the truss system in Kai Yuan Temple, which extend the achievable building scale. What's more, traditional mortise and tenon joints are combined with metal joints and adhesives. Through structural innovation and optimization, and with advanced construction techniques, contemporary traditional wooden structures can achieve sound structural logic, creativity and detailing.

## 4. Anti-seismic performance study on traditional Chinese wooden structure

Wooden structures have an outstanding advantage over other forms of structure when it comes to anti-seismic performance. Traditional Chinese wooden buildings have a unique structural form that allows them to withstand earthquakes with remarkable stability, hence the saying ‘The building stands even though all its walls collapse’. One of the most significant features of traditional Chinese wooden structure is that it ‘emphasizes structural members rather than joints’, yet the mechanical properties of the joints strongly influence the performance of the whole building. This section reviews studies on the anti-seismic performance of key joints and of whole buildings.

### 4.1. The anti-seismic structure and mechanics of ancient Chinese wooden structures

After analysing the damage that past earthquakes did to existing ancient wooden structures, experts found that ancient Chinese wooden structures have unique features in design concept, structural layout and building technique. Special building techniques, such as the floated joint between a column and the stylobate, the semi-rigid mortise and tenon joint between a beam and a column, the tilted and raised columns of the column frame, the Queti (a kind of trimming joist at the end of a beam), the Dou-gong bracket and the ‘grand roof’, make classic wooden buildings distinct from modern reinforced concrete structures in anti-seismic performance. With a relatively high strength-to-weight ratio, wood can maintain a certain level of resilience and an ability to recover from deformation when external forces are applied. The most common joint between wooden components is the semi-rigid mortise and tenon joint, which not only improves the resilience of the whole structure but also effectively cancels the horizontal thrust and dissipates a notable share of seismic energy through the friction and rotation of mortises and tenons. Besides, classic Chinese wooden structures can also absorb seismic energy through the deformation of the load-bearing frame system itself. Looking at the small components of the structure, the connection between column and floor is often smooth and horizontal, with no embedment or adhesion, which allows the upper section of the building to slide independently and stably as a whole during an earthquake without collapsing. A tilted column is made by shaping the bottom of the column into a gentle slope so that its top tilts slightly inward, pressing the mortises and tenons above together, with the dead weight providing the initial bending moment of the joint.
It can also act as an effective limitation on the movement of the upper beam frame. As the transitional layer between the column frame layer and the beam frame layer, the Dou-gong bracket layer is constructed of many interlaced small components, forming an inverted triangle with fewer and fewer components from top to bottom. It functions as a spring cushion and reduces the earthquake effect. And because of the transition and separation provided by the Dou-gong bracket layer, the roof and the beam frame as a whole can be analysed as a rigid body with slopes, as seen in Figure 7. Before the 1990s, for the purpose of preserving cultural heritage, studies of ancient buildings mostly highlighted their historical and artistic qualities rather than scientifically analysing their structures. In 1991, Wang T's analysis (see Ref. [21]) of the static load performance of the critical components, joints and whole structures of ancient buildings marked the beginning of structural studies of ancient buildings, and such studies have thrived since. Focusing on the outstanding anti-seismic quality of classic wooden structures, Fan from the Harbin Institute of Technology, Yu and Xue from Xi'an Jiaotong University, Zhao and Zhang from the Xi'an University of Architecture and Technology and Fang from Tsinghua University conducted a large number of experiments and theoretical analyses on dynamic features, anti-seismic behaviour, damage assessment and joint reinforcement. Li from the Taiyuan University of Technology and Zhou from Peking University, respectively, conducted years of anti-seismic and reinforcing restoration experiments and studies on the Yingxian Wooden Pagoda and the ancient buildings in the Forbidden City, as seen in Refs. [22–25], for example.
Currently, the most commonly used methods for analysing the anti-seismic behaviour of classic wooden structures are the equivalent static method, response spectrum analysis, time-history analysis and nonlinear static (pushover) analysis. After years of study, scholars at home and abroad have developed analytical models such as the semi-rigid calculation model of mortise and tenon joints, the combined beam-element model, the single degree of freedom (SDOF) system model and related mechanical models.

### 4.2. The anti-seismic performance study on mortise and tenon joints and Dou-gong brackets

#### 4.2.1. Mortise and tenon joints

Mortise and tenon joints are often used in classic buildings to join beams and columns. These joints can bear some lateral load and joint-bending moment and allow some rotation and relative slide between the beam and the column. These are the ‘semi-rigid’ features of this type of joint, which can dissipate part of the energy and reduce the structural response to earthquakes. As to experiments, Gao et al., see Ref. [26], conducted lateral low-cyclic reversed-loading tests on three wooden structural models with Queti from the watchtower in Xi'an, Shaanxi province, in north-western China. They analysed the deformation features and failure patterns of the joints and found that the calculated ductility coefficient ranges from 1.58 to 3.99. Xie et al., see Ref. [27], conducted the same experiments on dovetail joint models and discussed the effects of vertical load, Queti, Pupai-fang components and model size on the anti-seismic performance of the joints. As to calculation models, Wang simplified mortise and tenon joints as hinges and Queti as cantilevers with tip loads in the static calculation of wooden structures and double-checked the load-bearing capacity of components, as seen in Ref. [19]. Fang and Yu et al.
built an FE model suited to ancient wooden buildings by defining 3D variable semi-rigid joints that reflect the features of the Dou-gong bracket and the mortise and tenon joint, based on studies of the structural features of ancient wooden buildings, as seen in Ref. [28] and Figure 8. The model was first used in the calculation of the unequal settlement of the base of the watchtower in Xi'an and then in the mechanical performance analysis of ancient buildings such as the drum tower in Xi'an and the Baoguo Temple in Ningbo, Zhejiang province, and performed well. Feng and Zhang et al. (see Ref. [29]) combined shaking table tests on a column frame unit model formed by four columns of the palace hall structure in the building standards of the Song Dynasty with low-cyclic reversed-loading tests on the model and numerical simulation, conducted lateral vibration analysis and random damage theoretical analysis on the Dou-gong brackets, studied the features of semi-rigid mortise and tenon joints, and derived a stiffness formula for mortise and tenon joints as well as the equivalent viscous damping coefficient.

#### 4.2.2. Dou-gong brackets

The Dou-gong bracket, a special connection component between column and beam, plays a pivotal role in both structural force transmission and decoration. Composed of many cantilever joists (named Gong) stacked one on top of another in crossed directions and connected by Dou members, the Dou-gong bracket as a whole can be regarded as a beam pad. This special structure acts as an inverted fixed-hinged support that undergoes compressive deflection and rotation in the vertical plane as well as slip in the horizontal plane. In terms of structural performance, because of the overhanging in two directions, the Dou-gong bracket shortens the span and enhances the load-carrying capacity of the upper beams; meanwhile, it helps to adjust the depth of the eaves, making them more graceful and harmonious.
On the other hand, instead of being glued together, all the Dou and Gong elements are connected by mortise and tenon joints. Together with its unique shape of overlapping cantilevers, the Dou-gong bracket thus becomes a ductile connection that dissipates energy between column and beam, especially under lateral forces such as earthquakes. Chinese scholars have mainly studied two types of Dou-gong brackets: the Song style and the Qing style. During the restoration of the tower of the east city gate in Xi'an in 1996, with the help of the repair team, Yu et al. performed static and dynamic experiments on the two types of Dou-gong brackets, see Ref. [30]. Gao et al. built six 1:3.52 scale models of the bottom two tiers of Song-style eight-layer Dou-gong brackets with second-class material according to the regulations in the building standards, and derived a load-displacement calculation model, a mass-spring-damper model and a lateral force-displacement-restoring force model under vertical load via vertical monotonic-loading tests and lateral low-cyclic reversed-loading tests. They also calculated the vertical seismic transmission coefficient and the lateral energy dissipation, which demonstrated the good bidirectional anti-seismic performance of the Dou-gong brackets, see Ref. [31]. Sui et al. derived a restoring force model that reflects the restoring features and stiffness variation of Dou-gong through low-cyclic reversed-loading tests on single-layer, double-layer and quadruple-layer Dou-gong models, see Ref. [32].
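Two quantities recur throughout these loading tests: the displacement ductility coefficient and the equivalent viscous damping coefficient. The sketch below shows only the textbook definitions (ductility as ultimate over yield displacement, and the classical area-based damping estimate from one hysteresis loop); the numerical values are hypothetical, not data from the cited experiments:

```python
import math

def loop_area(points):
    """Energy dissipated in one cycle: the enclosed area of the
    hysteresis loop, via the shoelace formula.
    points: (displacement, force) pairs listed around the loop."""
    n = len(points)
    s = 0.0
    for i in range(n):
        d1, f1 = points[i]
        d2, f2 = points[(i + 1) % n]
        s += d1 * f2 - d2 * f1
    return abs(s) / 2.0

def equivalent_viscous_damping(points, f_max, d_max):
    """Classical estimate zeta_eq = E_D / (2 * pi * F_max * d_max),
    where E_D is the loop area and F_max, d_max are the cycle peaks."""
    return loop_area(points) / (2.0 * math.pi * f_max * d_max)

def ductility(d_ultimate, d_yield):
    """Displacement ductility coefficient mu = d_u / d_y."""
    return d_ultimate / d_yield

# Idealized rectangular loop spanning +/-1 in both axes (area = 4):
loop = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
print(equivalent_viscous_damping(loop, f_max=1.0, d_max=1.0))  # 2/pi, about 0.637
print(ductility(3.99, 1.0))  # hypothetical, at the top of the 1.58-3.99 range cited above
```

Real test data would of course yield curved loops sampled from instrument readings; the polygon-area approach applies unchanged.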
As to other transitional types of Dou-gong brackets, our team at Nanjing Forestry University conducted shaking table tests on full-scale models made of Douglas fir and China fir, based on the Dou-gong brackets of the Ming Dynasty Tian Wang Palace of Bao Sheng Temple in Luzhi, and analysed factors such as the inter-layer displacement response, the contributions of rotation and slide deformation of the components, and the location of structurally weak parts, as seen in Ref. [33] and Figure 9. As to numerical simulation, Wei, see Ref. [34], studied the nonlinear variation patterns of the connection stiffness, calculated the ductility coefficient and the equivalent viscous damping coefficient, and compared the results of axial compression tests and low-cyclic reversed-loading tests via ANSYS simulation, based on the operating mechanism, failure modes and anti-seismic performance of Dou-gong. Du studied the Yingxian Wooden Pagoda, built simplified rigid-connection and hinged models of a Dou-gong bracket using the dynamic equivalent features method, calculated the range of dynamic features through the two simplified models and then applied them to the calculation of the whole tower, see Ref. [35].

## 5. Conservation and reinforcement techniques of historic wooden buildings in China

### 5.1. The principles of historic architecture restoration

China began to attach importance to the conservation of historic wooden buildings in the 1920s. In 1928, the Central Commission for the Preservation of Antiquities was established. In 1929, Zhu et al. founded the Society for the Study of Chinese Architecture. In 1930, the government issued the Regulation of Antiquities Conservation, which symbolized the start of the legal management of antiquities.
The Law of the People's Republic of China on the Protection of Cultural Relics was promulgated in 1982, bringing the protection of antiquities into law, which symbolized the standardization and internationalization of the protection of historic wooden buildings. At present, the protection of historic wooden buildings in China mainly follows the International Charter for the Conservation and Restoration of Monuments and Sites, the Law of the People's Republic of China on the Protection of Cultural Relics and the standard GB 50165-92. Maintenance and reinforcement construction comprises regular maintenance projects, major historic preservation and maintenance projects, partial restoration projects, relocation projects and emergency projects, and abides by the principle of maintaining the building's original state, including (1) the original architectural form, such as plane layout, modelling, construction characteristics and artistic style, (2) the original building structure, (3) the original building materials and (4) the original processing technology (see Ref. [18]).

### 5.2. Traditional reinforcement techniques of historic wooden buildings

#### 5.2.1. Common damages of historic wooden structures

The common damages of historic wooden structures include (1) component deformation under compression: column failures such as splitting of column tips under compression, decay at the bottom of columns due to long-term exposure to humidity and splitting along the grain on the column body; bending and splitting of girders and the square beams between columns and Dou-gong brackets; breaking and splitting of subcomponents of Dou-gong brackets. (2) Components under tension: square beams passing through the columns at the top or the bottom often drift apart or break off at the mortise and tenon joints. (3) Components under shear: the force state at the mortise and tenon joints is more complex, and the joints tend to detach under long-term shearing action, as seen in Ref. [15].

#### 5.2.2. Traditional reinforcement techniques of ancient wooden buildings

For the whole beam frame of a wooden structure, reinforcement methods include major repair of the structure (disassembling the wooden frame completely or partially, repairing or replacing the damaged components and reassembling it while reinforcing the structure), restoration with external support (adding external support and restoring the tilted, twisted or detached components while reinforcing the structure without disassembling the frame) and overall reinforcement (direct reinforcement of the whole structure in projects with minimal structural deformation). Component-level reinforcement methods are as follows (Figure 10): 1. Partial or complete replacement: (a) patching and reinforcing: slight splits or corrosion of beams and columns can be patched with wood powder and waterproof adhesives. (b) Reattachment of columns: replace the rotten part of a column with new material when the rotten part takes up more than a quarter of the height; the reattachment spot is often reinforced with a semi-tenon and an iron hoop. (c) When the damage depth of a beam at both sides exceeds a third of its height, the clamp connection method is appropriate, but when the depth exceeds three-fifths, replacement of the beam head is necessary. 2. Mechanical reinforcement: (a) ironware reinforcement: flat iron is used to reinforce beams and columns or to connect the joints between them; the flat iron improves the mechanical properties of the components by bearing part of the tensile, compressive, bending and shearing forces. (b) When the deflection of beams and square pillars exceeds the normal limit, the load-bearing capacity is insufficient or splits are found, it is appropriate to use tensile bars to form new load-bearing components. 3.
Chemical reinforcement: since the 1970s, unsaturated polyester resin filling has been widely used in historic building restoration. Through filling, soaking, patching or painting with chemicals, not only can the strength of the damaged wood be improved, but its stability and rot resistance can also be enhanced. In the 1974 restoration of the main hall of Nanchan Temple (782 AD) in Wutai, Shanxi province, epoxy resin was injected into the splits of two main beams and iron hoops were fixed around them. In the 1975 restoration of the main hall of Baoguo Temple (1013 AD) in Ningbo, Zhejiang province, without disassembling the wooden frame, the termite-ridden columns were filled with chemicals and wrapped in fibre-reinforced plastic (FRP). According to calculation, despite the higher cost of chemical fillings, at least 30% of the budget was saved because disassembling the whole frame was avoided, see Ref. [36]. These traditional reinforcement techniques have some disadvantages. The ironware is usually applied inside the components and corrodes easily, and the appearance of the structure may be affected. The antirot chemicals greatly harm the health of the maintenance staff. When the reattachment method or tensile bars are used, the original appearance of the building is inevitably altered.

### 5.3. The study on and application of FRP reinforcement techniques in historic wooden structures

Fibre-reinforced polymers (FRP) have advantages such as high tensile capacity and light weight. They also resist erosion, heat and freezing, and are highly mouldable, easy to apply and inexpensive. As a result, FRP is widely used in the restoration of reinforced concrete and masonry structures. The study of FRP started in the 1990s and has matured in the theory of reinforced concrete restoration.
In the field of wooden structure reinforcement, the use of FRP, especially carbon fibre-reinforced polymers (CFRP) and glass fibre-reinforced polymers (GFRP), is becoming a heated study topic. Relevant studies and projects show that FRP reinforcement can reduce the variation coefficients of several strength indices of wooden components, with variation coefficients after reinforcement kept within 15%. FRP reinforcement can also improve the bearing capacity and reduce the long-term creep of wooden components, optimize the cross-section size and enhance fire and rot resistance. FRP reinforcement can improve the anti-seismic capacity of mortise-tenon joints. Zhou et al. from the Palace Museum made a 1:8 scale frame model with four beams and four columns of Korean pine and tenon-mortise connections, based on the actual dimensions of a partial frame of the Hall of Great Harmony in the Imperial Palace, and conducted exploratory experiments using CFRP fabric. Through low-cyclic reversed-loading tests, load-displacement hysteretic curves were drawn, and the skeleton curves, energy dissipation capability and stiffness degradation of the structure were analysed. The results show that although the structural energy dissipation capacity decreased slightly after the model was strengthened with CFRP sheets at the tenon-mortise locations, the pull-out of the tenon from the joint was reduced. The lateral stiffness and load-bearing capacity both improved and, with only slight stiffness degradation, the frame retained good deformation capacity. The team also compared the performance of mortise-tenon joint models reinforced with nails, iron hoops and CFRP. The results showed good preservation of deformation capacity in all three scenarios. CFRP fabric excels in joint-bearing capacity and energy dissipation capability but showed the most stiffness degradation.
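The stiffness degradation reported in such low-cyclic reversed-loading tests is commonly tracked via the secant stiffness of each cycle (peak force over peak displacement), normalized by the first cycle. A minimal sketch follows; the cycle peaks are illustrative assumptions, not test data from Ref. [37]:

```python
def secant_stiffness(peak_force, peak_disp):
    """Secant (loop) stiffness of one loading cycle: K_i = F_i / D_i."""
    return peak_force / peak_disp

def degradation_curve(cycles):
    """Stiffness of each cycle normalized by the first cycle.
    cycles: list of (peak_force, peak_displacement) per loading level."""
    k0 = secant_stiffness(*cycles[0])
    return [secant_stiffness(f, d) / k0 for f, d in cycles]

# Hypothetical cycle peaks (kN, mm) for a mortise-tenon frame model:
cycles = [(4.0, 10.0), (5.0, 20.0), (5.5, 40.0)]
print([round(k, 5) for k in degradation_curve(cycles)])  # [1.0, 0.625, 0.34375]
```

A falling curve like this one is what ‘stiffness degradation’ refers to: force grows more slowly than displacement as the loading amplitude increases.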
So it is recommended to use CFRP in the reinforcement of small to medium wooden structures (see Ref. [37]). Huang derived the calculation of FRP shear reinforcement using basic mechanics-of-materials formulas. Yang analysed the influence of CFRP and GFRP on bending capacity and proposed an ultimate bearing capacity formula based on failure strain. A team from the Xi'an University of Architecture and Technology did much work on the analysis and testing of FRP reinforcement of historic building components and structural joints. Case in point, Xie (see Ref. [38]) tested the bending capacity of square beams with CFRP reinforcement and the compressive strength of cylindrical columns, and established calculations of the shear strength of beams and the compressive strength of columns under different damage forms. Besides, he built scale models of the column frame according to the classic architectural rules of the Song Dynasty (960–1279 AD) and conducted low-cyclic reversed-loading tests on the original structure, a CFRP sheet-reinforced structure and a flat steel-reinforced structure, based on which a restoring force model of the wooden structure was established. Hang et al. (see Ref. [39]) analysed the load-bearing performance of damaged joints reinforced with CFRP based on the above-mentioned reinforcing approach and joint damage forms, and derived a calculation of the bending strength of the joints based on relevant experiments and calculation assumptions. In construction practice, CFRP wrap combined with traditional wooden structure reinforcement methods has already been used on historic buildings such as the Tiananmen Gate tower and explored in a practical manner. In the emergency restoration of the Yingxian Wooden Pagoda in Fogong Temple, Shanxi province (1056 AD), experts recommended FRP materials to maintain its original appearance.
Combined with the characteristics of Chinese wooden structures, the fire resistance of FRP reinforcement and the structural assessment of FRP-reinforced members under long-term load await further study.

## 6. Conclusions

Chinese traditional wooden architecture, well known as a unique and independent system in the architectural world, has formed its typical structural styles and construction technology over more than 7000 years of development. In addition to reviewing the research, conservation and reinforcement status of many precious historical architectural heritages, this chapter has focused on the structural features, anti-seismic behaviour and utilization of new materials in traditional wooden architecture, drawing on a large number of recent studies. To better protect ancient wooden structures, researchers have carried out many experiments on the physical properties of wood taken from historic buildings, and a method has been proposed for predicting the degradation pattern of physical properties, residual strength and remaining life of wood by studying wood decay. In addition to traditional reinforcement techniques such as mechanical reinforcement and partial or complete replacement, new reinforcement materials and techniques have been explored, among which FRP has become a popular academic topic. The superior seismic performance of Chinese traditional wooden architecture, owing to many unique characteristics of its structural design and construction technique, has generated a great deal of interest at home and abroad. The objects and models of anti-seismic behaviour studies also show a trend towards miniaturization and diversification. Beyond research on historical wooden buildings, attempts to use new engineered wood products in modern wooden architecture of traditional style are now surging.
At present, research on Chinese traditional wooden structures has made some headway, yet several issues remain. First, studies of the material performance and structural behaviour of historical wooden buildings are often tied to specific urgent repair and strengthening projects, which somewhat limits the systematization and universality of the research. Second, attention has long been focused on historical and artistic aspects, and only a limited number of fundamental studies on the structural performance of Chinese traditional wooden structures and their typical connection types can be found. Moreover, much work remains to be done to combine the excellent features of traditional construction technologies with modern materials and techniques, and to inherit and improve them.

## Acknowledgments

This work was supported by the Special Fund of Top-Notch Academic Programs Project of Jiangsu Higher Education Institutions (TAPP) and the National 'Twelfth Five-Year' Plan for Science & Technology Support (2015BAD14B0503).

© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### Cite this chapter

Ze-li Que, Zhe-rui Li, Xiao-lan Zhang, Zi-ye Yuan and Biao Pan (March 1st 2017). Traditional Wooden Buildings in China, Wood in Civil Engineering, Giovanna Concu, IntechOpen, DOI: 10.5772/66145.
http://www.glennwillcoxphotography.com/treatment-of-bladder-stone-condition-and-all-the-disease/
# Treatment of Bladder Stone Disease

In the age of advanced ultrasound scanning, most renal stones (kidney stones) are diagnosed from their symptoms. Renal stone disease is a significant medical condition in men, and it is mainly found in people whose diet is low in vitamins. The pain usually starts in the side or back, just below the ribs, and radiates to the lower abdomen and groin (the area where the abdomen ends and the legs begin). Anyone who has suffered from kidney stones knows how painful they can be. A kidney stone starts out in the centre of the kidney as a tiny particle. As more particles cling to the original particle, a stone forms. These stones can be as large as an inch in diameter. Small stones are usually excreted from the body, but stones larger than … of an inch are likely to remain within the kidney.

Symptoms:

- Until a kidney stone moves into the ureter – the tube connecting the kidney and bladder – you may not know you have it.
- Pain in the side and back, beneath the ribs.
- Fluctuating pain intensity, with episodes of pain lasting for minutes at a time.
- Waves of pain radiating through the side and into the lower abdomen and groin.

Facts:

- Men are affected by renal stones more often than women.
- The male-to-female ratio is approximately … .
- It affects about two out of every thousand people.
- It occurs most commonly between … years of age.

Treatment options:

- X-rays or sonograms are commonly used to detect renal stones. However, neither may pick up all the stones in the urinary tract, so the results of these screening investigations should be confirmed by a spiral CT scan. Where a CT scan facility is not available, an IVU (intravenous urogram) may be done instead.
- Surgery is not the first option. Many kidney stones pass through the urinary system when the person drinks a sufficient amount of water.
http://physics.aps.org/synopsis-for/10.1103/PhysRevB.81.245306
# Synopsis: Disorder and dissonance in nanostructures

The phase coherence time of electrons in certain nanostructures may diverge at very low temperatures.

In experiments, the phase coherence time of electrons in mesoscopic systems saturates, i.e., approaches a finite limit, at very low temperatures. This contradicts Fermi liquid theory, according to which the coherence time should keep increasing as the system approaches absolute zero temperature. Alternative theories indicate that saturation may be caused by intrinsic electron-electron interactions, while some extrinsic influences (such as a trace of magnetic impurities) are consistent with Fermi liquid theory. Theories also differ on how the phase coherence time should depend on disorder in the system—expressed as the diffusion coefficient—but this has been difficult to measure. In a paper appearing in Physical Review B, Yasuhiro Niimi and collaborators from France, Japan, Germany, and Taiwan report success in measuring the coherence time in high-mobility $\text{GaAs/AlGaAs}$ heterostructures at temperatures down to $25~\text{mK}$, while varying the diffusion coefficient by nearly three orders of magnitude. The researchers used a focused ion beam microscope to locally implant gallium ions into the heterostructure, tuning the disorder by varying the amount of implanted ions. No saturation was observed in the phase coherence time, indicating that extrinsic mechanisms caused the saturation observed in previous experiments. The results are consistent with Fermi liquid theory over the large parameter space of temperature and disorder examined.
– Brad Rubin
http://www.askphysics.com/category/digital-electronics/
# Category Archives: Digital Electronics

## HOW TO PROVE THAT A NAND GATE IS A UNIVERSAL GATE?

NAND and NOR gates can be used as universal gates because all the basic logic gates can be realized using NAND or NOR alone, as detailed below.
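As a hedged illustration (my own sketch, not from the original page), the universality of NAND can be checked exhaustively with a short Python truth-table script: NOT is a NAND with its inputs tied together, AND is a NAND followed by NOT, and OR follows from De Morgan's law.

```python
def nand(a, b):
    """NAND: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a):
    # NOT from a single NAND: tie both inputs together
    return nand(a, a)

def and_(a, b):
    # AND = NAND followed by NOT
    return not_(nand(a, b))

def or_(a, b):
    # De Morgan: a OR b = NOT(NOT a AND NOT b) = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

# Verify against the standard truth tables for every input combination
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NOT, AND, OR all realized from NAND alone")
```

Running the loop checks every input combination, which is exactly the truth-table argument usually drawn with gate diagrams; the same construction works for NOR by duality.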
https://uen.pressbooks.pub/introductorychemistry/chapter/classes-of-organic-compounds/
74 Classes of Organic Compounds

LumenLearning

Organic Molecules and Functional Groups

Functional groups are groups of atoms attached to organic molecules that give them specific identities or functions.

LEARNING OBJECTIVES

Describe the importance of functional groups to organic molecules

KEY TAKEAWAYS

Key Points

• Functional groups are collections of atoms that attach to the carbon skeleton of an organic molecule and confer specific properties.
• Each type of organic molecule has its own specific type of functional group.
• Functional groups in biological molecules play an important role in the formation of molecules like DNA, proteins, carbohydrates, and lipids.
• Functional groups include: hydroxyl, methyl, carbonyl, carboxyl, amino, phosphate, and sulfhydryl.

Key Terms

• hydrophobic: lacking an affinity for water; unable to absorb, or be wetted by, water
• hydrophilic: having an affinity for water; able to absorb, or be wetted by, water

Location of Functional Groups

Functional groups are groups of atoms that occur within organic molecules and confer specific chemical properties to those molecules. When functional groups are shown, the organic molecule is sometimes denoted as “R.” Functional groups are found along the “carbon backbone” of macromolecules, which is formed by chains and/or rings of carbon atoms with the occasional substitution of an element such as nitrogen or oxygen. Molecules with other elements in their carbon backbone are substituted hydrocarbons. Each of the four types of macromolecules—proteins, lipids, carbohydrates, and nucleic acids—has its own characteristic set of functional groups that contributes greatly to its differing chemical properties and its function in living organisms.

Properties of Functional Groups

A functional group can participate in specific chemical reactions. Some of the important functional groups in biological molecules include: hydroxyl, methyl, carbonyl, carboxyl, amino, phosphate, and sulfhydryl groups.
These groups play an important role in the formation of molecules like DNA, proteins, carbohydrates, and lipids. Classifying Functional Groups Functional groups are usually classified as hydrophobic or hydrophilic depending on their charge or polarity. An example of a hydrophobic group is the non-polar methane molecule. Among the hydrophilic functional groups is the carboxyl group found in amino acids, some amino acid side chains, and the fatty acid heads that form triglycerides and phospholipids. This carboxyl group ionizes to release hydrogen ions ($\text{H}^+$) from the $\text{COOH}$ group resulting in the negatively charged $\text{COO}^-$ group; this contributes to the hydrophilic nature of whatever molecule it is found on. Other functional groups, such as the carbonyl group, have a partially negatively charged oxygen atom that may form hydrogen bonds with water molecules, again making the molecule more hydrophilic. Examples of functional groups: The functional groups shown here are found in many different biological molecules, where “R” is the organic molecule. Hydrogen Bonds between Functional Groups Hydrogen bonds between functional groups (within the same molecule or between different molecules) are important to the function of many macromolecules and help them to fold properly and maintain the appropriate shape needed to function correctly. Hydrogen bonds are also involved in various recognition processes, such as DNA complementary base pairing and the binding of an enzyme to its substrate. Hydrogen bonds in DNA: Hydrogen bonds connect two strands of DNA together to create the double-helix structure. The Chemical Basis for Life Carbon is the most important element to living things because it can form many different kinds of bonds and form essential compounds. LEARNING OBJECTIVES Explain the properties of carbon that allow it to serve as a building block for biomolecules KEY TAKEAWAYS Key Points • All living things contain carbon in some form. 
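The carboxyl ionization described above can be written as a simple equilibrium (my notation, not from the original text; R stands for the rest of the molecule):

```latex
\mathrm{R{-}COOH} \;\rightleftharpoons\; \mathrm{R{-}COO^{-}} + \mathrm{H^{+}}
```

The negatively charged $\text{COO}^-$ form on the right is what makes carboxyl-bearing molecules hydrophilic.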
• Carbon is the primary component of macromolecules, including proteins, lipids, nucleic acids, and carbohydrates.
• Carbon’s molecular structure allows it to bond in many different ways and with many different elements.
• The carbon cycle shows how carbon moves through the living and non-living parts of the environment.

Key Terms

• octet rule: a rule stating that atoms lose, gain, or share electrons in order to have a full valence shell of 8 electrons (it has some exceptions)
• carbon cycle: the physical cycle of carbon through the earth’s biosphere, geosphere, hydrosphere, and atmosphere; includes such processes as photosynthesis, decomposition, respiration and carbonification
• macromolecule: a very large molecule, especially used in reference to large biological polymers (e.g., nucleic acids and proteins)

Carbon is the fourth most abundant element in the universe and is the building block of life on earth. On earth, carbon circulates through the land, ocean, and atmosphere, creating what is known as the carbon cycle. This global carbon cycle can be divided further into two separate cycles: the geological carbon cycle takes place over millions of years, whereas the biological or physical carbon cycle takes place over days to thousands of years. In a nonliving environment, carbon can exist as carbon dioxide ($\text{CO}_2$), carbonate rocks, coal, petroleum, natural gas, and dead organic matter. Plants and algae convert carbon dioxide to organic matter through the process of photosynthesis, using the energy of light.

Carbon is present in all life: All living things contain carbon in some form, and carbon is the primary component of macromolecules, including proteins, lipids, nucleic acids, and carbohydrates. Carbon exists in many forms in this leaf, including in the cellulose to form the leaf’s structure and in chlorophyll, the pigment which makes the leaf green.
Carbon is Important to Life

In its metabolism of food and respiration, an animal consumes glucose ($\text{C}_6\text{H}_{12}\text{O}_6$), which combines with oxygen ($\text{O}_2$) to produce carbon dioxide ($\text{CO}_2$), water ($\text{H}_2\text{O}$), and energy, which is given off as heat. The animal has no need for the carbon dioxide and releases it into the atmosphere. A plant, on the other hand, uses the opposite reaction of an animal through photosynthesis. It intakes carbon dioxide, water, and energy from sunlight to make its own glucose and oxygen gas. The glucose is used for chemical energy, which the plant metabolizes in a similar way to an animal. The plant then emits the remaining oxygen into the environment.

Cells are made of many complex molecules called macromolecules, which include proteins, nucleic acids (RNA and DNA), carbohydrates, and lipids. The macromolecules are a subset of organic molecules (any carbon-containing liquid, solid, or gas) that are especially important for life. The fundamental component for all of these macromolecules is carbon. The carbon atom has unique properties that allow it to form covalent bonds to as many as four different atoms, making this versatile element ideal to serve as the basic structural component, or “backbone,” of the macromolecules.

Structure of Carbon

Individual carbon atoms have an incomplete outermost electron shell. With an atomic number of 6 (six electrons and six protons), the first two electrons fill the inner shell, leaving four in the second shell. Therefore, carbon atoms can form four covalent bonds with other atoms to satisfy the octet rule. The methane molecule provides an example: it has the chemical formula $\text{CH}_4$. Each of its four hydrogen atoms forms a single covalent bond with the carbon atom by sharing a pair of electrons. This results in a filled outermost shell.

Structure of Methane: Methane has a tetrahedral geometry, with each of the four hydrogen atoms spaced 109.5° apart.
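The respiration and photosynthesis reactions described above balance as follows (standard stoichiometry, supplied here since the text gives the formulas in prose only):

```latex
\underbrace{\mathrm{C_6H_{12}O_6}}_{\text{glucose}} + 6\,\mathrm{O_2}
  \;\longrightarrow\; 6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{energy (heat)}

6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light energy}
  \;\longrightarrow\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

The second equation is the first run in reverse, which is exactly the complementarity between animal respiration and plant photosynthesis described above.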
https://email.esm.psu.edu/pipermail/macosx-tex/2004-March/004991.html
# [OS X TeX] 9.5pt font size

Bruno Voisin bvoisin at mac.com
Tue Mar 23 09:18:57 EST 2004

On 23 March 2004, at 08:29, Bruno Voisin wrote:

> What you see is actually (and sadly) normal. With Computer Modern
> fonts, not all sizes are defined. If you look for example at the file
> /Library/teTeX/share/texmf.tetex/tex/latex/base/ot1cmr.fd, you'll see
> such lines as:
>
> \DeclareFontShape{OT1}{cmr}{m}{n}%
> {<5><6><7><8><9><10><12>gen*cmr%
> <10.95>cmr10%
> <14.4>cmr12%
> <17.28><20.74><24.88>cmr17}{}
>
> [...]
>
> To cure this you can try to put in the preamble of your document stuff
> like:
>
> \DeclareFontShape{OT1}{cmr}{m}{n}%
> {<-5>cmr5%
> <5-6>cmr6%
> <6-7>cmr7%
> <7-8>cmr8%
> <8-9>cmr9%
> <9-11>cmr10%
> <11-14>cmr12%
> <14->cmr17}{}
>
> [...]
>
> But then you'll have to do this for all the fonts involved

Thinking about it a bit more, I just realized that one easy way to do this is to use the Latin Modern fonts that are installed as part of Gerben Wierda's distribution. For example, /Library/teTeX/share/texmf.local/tex/latex/lm/t1lmr.fd contains:

\DeclareFontShape{T1}{lmr}{m}{n}%
{<-5.5> cork-lmr5
<5.5-6.5> cork-lmr6
<6.5-7.5> cork-lmr7
<7.5-8.5> cork-lmr8
<8.5-9.5> cork-lmr9
<9.5-11> cork-lmr10
<11-15> cork-lmr12
<15-> cork-lmr17
}{}

Hence simply by putting in the preamble of your document:

\usepackage[T1]{fontenc}
\usepackage{textcomp}
\usepackage{lmodern}

you'll avoid having to redefine font shapes yourself, and still get all the intermediate sizes available. You'll also notice characters look "better" (less thin) on screen. However, Latin Modern fonts include text fonts only, not math fonts, thus I don't know what will happen with these.
Hope this helps, Bruno Voisin PS In case you don't know, Latin Modern fonts are a recent free PostScript equivalent of the EC fonts, similar to the commercial European Modern fonts from Y&Y, in the same way as the BSR/Y&Y/AMS version of CM fonts is a PostScript equivalent of the original CM fonts from Don Knuth, the inventor of TeX. As to EC fonts, they are an 8-bit extension (suitable for Western European languages) of the 7-bit CM fonts. -----------------------------------------------------
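As a minimal, untested sketch of the suggested preamble in use (assuming an installation where the `lmodern` package is present), a document requesting an intermediate size such as 9.5 pt might look like:

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{textcomp}
\usepackage{lmodern}  % Latin Modern: declares continuous size ranges
\begin{document}
% \fontsize{<size>}{<baselineskip>}\selectfont requests an arbitrary size;
% with lmodern loaded, 9.5pt falls inside a declared range and is served
% by a nearby design size scaled, instead of triggering a substitution.
{\fontsize{9.5}{11.5}\selectfont This text is set at 9.5pt.}
\end{document}
```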
https://cs.stackexchange.com/questions/118455/proof-of-the-limit-colimit-coincidince
# Proof of the limit-colimit coincidence

Note: I figured this out, but haven't had the time to write an answer for it: see the comment. For reference: the discussed material appears in http://www.cs.ru.nl/B.Jacobs/CLG/JacobsCoalgebraIntro.pdf page 111. On page 285 of "Introduction to Coalgebra" by Bart Jacobs, Proposition 5.3.3 is stated as follows: Let $$\mathbb{C}$$ be a dcpo-enriched category. Assume an $$\omega$$-chain $$X_0 \overset{f_0}{\longrightarrow} X_1 \overset{f_1}{\longrightarrow} X_2 \overset{f_2}{\longrightarrow} \cdots$$ with colimit $$A \in \mathbb{C}$$. If the maps $$f_i$$ are embeddings, then the colimit $$A$$ is also a limit in $$\mathbb{C}$$, namely the limit of the $$\omega$$-chain of associated projections $$f_i^p : X_{i+1} \to X_i$$. In order to prove this theorem, Jacobs first gives a partial proof of the following lemma: The coprojection maps $$\kappa_n : X_n \to A$$ associated with the colimit $$A$$ are embeddings, and their projections $$\pi_n = \kappa_n^{p} : A \to X_n$$ form a cone, i.e. satisfy $$f^p_n \circ \pi_{n+1} = \pi_n$$. Here is the partial proof: For each $$n \in \mathbb N$$ we first show that the object $$X_n$$ forms a cone: for $$m \in \mathbb N$$ there is a map $$f_{mn} : X_m \to X_n$$, namely: $$f_{mn} \overset{def}{=} \left \{ \begin{array}{ll} f_{n-1} \circ \cdots \circ f_m : X_m \to X_{m+1} \to \cdots \to X_n & \text{if}~~m \leq n \\ f_n^p \circ \cdots \circ f_{m-1}^p : X_m \to X_{m-1} \to \cdots \to X_n & \text{if}~~m > n \end{array} \right \}$$ These maps $$f_{mn}$$ commute with the maps $$f_i : X_i \to X_{i+1}$$ in the chain: $$f_{(m+1)n} \circ f_m = f_{mn}$$, and thus form a cone. Since $$A$$ is a colimit, there is a unique map $$\pi_n : A \to X_n$$ with $$\pi_n \circ \kappa_m = f_{m n}$$. In particular, we get $$\pi_n \circ \kappa_n = f_{n n} = \mathit{id}$$. We postpone the proof that $$\kappa_n \circ \pi_n \leq \mathit{id}_A$$ for a moment.
It seems to me that the above proof doesn't go far enough: it does not prove that the proposed projections $$\pi_n$$ actually form a cone, i.e. that $$f^p_n \circ \pi_{n+1} = \pi_n$$. This fact is used immediately in the proof of the next lemma, so it cannot be postponed! Can anyone explain to me how to prove $$f^p_n \circ \pi_{n+1} = \pi_n$$? If our embeddings were epic then this would be straightforward, but I don't think that they are. • I figured it out: we just have to prove that $(f^p_n \circ \pi_{n+1}) \circ \kappa_m = f_{mn}$, because $\pi_n$ was defined as the unique morphism such that $\pi_n \circ \kappa_m = f_{mn}$. – Kevin Clancy Dec 16 '19 at 15:17
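For completeness, here is my own expansion of the computation sketched in the comment, using only the definitions above. Since $\pi_n$ is the unique map satisfying $\pi_n \circ \kappa_m = f_{mn}$ for all $m$, it suffices to check that $f_n^p \circ \pi_{n+1}$ satisfies the same equations:

```latex
(f_n^p \circ \pi_{n+1}) \circ \kappa_m
  \;=\; f_n^p \circ f_{m(n+1)}
  \;=\;
  \begin{cases}
    f_n^p \circ f_n \circ f_{n-1} \circ \cdots \circ f_m = f_{mn}
      & \text{if } m \leq n,\\
    f_n^p \circ \mathit{id}_{X_{n+1}} = f_n^p = f_{(n+1)n}
      & \text{if } m = n+1,\\
    f_n^p \circ f_{n+1}^p \circ \cdots \circ f_{m-1}^p = f_{mn}
      & \text{if } m > n+1,
  \end{cases}
```

using $f_n^p \circ f_n = \mathit{id}$ (the embedding-projection identity) in the first case. Uniqueness of the mediating map then forces $f_n^p \circ \pi_{n+1} = \pi_n$.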
https://zbmath.org/?q=an%3A0955.42023
# zbMATH — the first resource for mathematics

On the estimation of wavelet coefficients. (English) Zbl 0955.42023

In this paper the author studies the magnitude of wavelet coefficients by investigating the quantities $c_k(\psi)=\sup_{f\in A_k}{|(\psi, f)|\over \|\psi\|_2}.$ Here, the function classes $$A_k$$ are defined by $A_k=\{f \mid \|f^{(k)}\|_2 < 1\},\quad k\in {\mathbb{N}}.$ In particular, the expressions $$\lim_{m\rightarrow\infty} c_k(\psi_m)$$, for a fixed $$k$$, and $$\lim_{m\rightarrow\infty} c_m(\psi_m)$$ are explicitly computed for Daubechies orthonormal wavelets and for semiorthogonal spline wavelets, where $$m$$ denotes the number of vanishing moments of $$\psi_m$$. It turns out that these constants are considerably smaller for spline wavelets.

##### MSC:

42C40 Nontrigonometric harmonic analysis involving wavelets and other special systems
41A15 Spline approximation
https://wiki.seg.org/index.php?title=Amplitude/energy_of_reflections_and_multiples&oldid=141072
Amplitude/energy of reflections and multiples

Series: Geophysical References Series — Problems in Exploration Seismology and their Solutions, by Lloyd P. Geldart and Robert E. Sheriff, Chapter 3, pages 47–77. DOI: http://dx.doi.org/10.1190/1.9781560801733, ISBN 9781560801153, SEG Online Store.

Problem 3.8a

Assume horizontal layering (as shown in Figure 3.8a), a source just below interface $A$, and a geophone at the surface. Calculate (ignoring absorption and divergence) the relative amplitudes and energy densities of the primary reflections from $B$ and $C$ and the multiples $BSA$, $BAB$, and $BSB$ (where the letters denote the interfaces involved). Compare traveltimes, amplitudes, and energy densities of these five events for normal incidence.

Background

Multiples are events that have been reflected more than once. They are generally weak because the energy decreases at each reflection, but where the reflection coefficients are large, multiples may be strong enough to cause problems. Multiples are of two kinds, as shown in Figure 3.8b: long-path multiples, which arrive long enough after the primary reflection that they appear as separate events, and short-path multiples, which arrive so soon after the primary wave that they add to it and change its shape. The most important short-path multiples are two in number: (i) ghosts (Figure 3.8b), where part of the energy leaving the source travels upward and is reflected downward either at the base of the LVL (see problem 4.16) or at the surface; (ii) peg-leg multiples, resulting from the addition to a primary reflection of energy reflected from both the top and bottom of a thin bed, either on the way to or on the way back from the principal reflecting horizon.
Short-path near-surface multiples are also called ghosts, and long-path interformational multiples are also called peg-leg multiples. A notable example of the latter occurs in marine work when wave energy bounces back and forth within the water layer.

The energy density of a wave (see problem 3.7) decreases continuously as the wave progresses because of two factors: absorption and spreading or divergence. The energy density is proportional to the square of the amplitude, so both effects are usually expressed in terms of the decrease in amplitude with distance.

Figure 3.8a.  A layered model.

Figure 3.8b.  Types of multiples.

Absorption causes the amplitude to decrease exponentially, the relation being $A = A_{0}e^{-\eta x}$, where the amplitude decreases from $A_{0}$ to $A$ over a distance $x$; the absorption coefficient $\eta$ is often expressed in decibels per wavelength, $\lambda$.

For a point source in an infinite constant-velocity medium, divergence causes the energy density to decrease inversely as the square of the distance from the source, the amplitude decreasing inversely as the first power of the distance from the source. Nepers and decibels are defined in problem 2.17.

Solution

We first calculate the impedances $Z_{i}$ for each layer; the coefficients of reflection and downgoing and upgoing transmission $R$, $T\downarrow$, $T\uparrow$ (see problem 3.6); and the reflected and transmitted energy coefficients, $E_{R}$ and $E_{T}$, for each interface. The results are shown in Table 3.8a.

Table 3.8a. Reflection and transmission coefficients.
Interface   Z       R*      T↓      T↑      E_R     E_T
S                   1.000   0.000   0.000   1.000   0.000
Layer 1     0.870
A                   0.733   0.267   1.733   0.537   0.463
Layer 2     5.640
B                   0.207   0.793   1.207   0.043   0.957
Layer 3     8.576
C                   0.034   0.966   1.034   0.001   0.999
Layer 4     9.180

* Signs are for incidence from above.

Assuming unit amplitude and unit energy density for the downgoing wave incident on interface $B$ and neglecting absorption and divergence, we arrive at the following values:

Reflection $B$:

Amplitude of reflection $B = R_{B}T_{A}\uparrow = 0.207\times 1.733 = 0.359$.
Energy density $= E_{RB}E_{TA} = 0.043\times 0.463 = 0.020$.
Arrival time $t_{B} = 2\times 0.600/2.400 + 0.010/0.600 = 0.517\ \mathrm{s}$.

Reflection $C$:

Amplitude $= T_{B}\downarrow R_{C}T_{B}\uparrow T_{A}\uparrow = 0.793\times 0.034\times 1.207\times 1.733 = 0.056$.
Energy density $= E_{TB}^{2}E_{RC}E_{TA} = 0.0004$.
Arrival time $t_{C} = t_{B} + 2\times 0.800/3.20 = 1.017\ \mathrm{s}$.

Multiple BSA:

Amplitude $= R_{B}T_{A}\uparrow \left(-R_{S}\right)R_{A} = -0.207\times 1.733\times 1.000\times 0.733 = -0.263$.
Energy density $= 0.043\times 0.463\times 1.000\times 0.537 = 0.011$.
Arrival time $t_{BSA} = 2\times 0.600/2.40 + 3\times 0.010/0.600 = 0.550\ \mathrm{s}$.

Multiple BAB:

Amplitude $= (R_{B})^{2}\left(-R_{A}\right)T_{A}\uparrow = 0.207^{2}\times \left(-0.733\right)\times 1.733 = -0.0544$.
Energy density $= 0.043^{2}\times 0.537\times 0.463 = 0.0005$.
Arrival time $t_{BAB} = 4\times 0.600/2.40 + 0.010/0.600 = 1.017\ \mathrm{s}$.

Multiple BSB:

Amplitude $= (R_{B})^{2}(T_{A}\uparrow)^{2}T_{A}\downarrow \left(-R_{S}\right) = -0.0344$.
Energy density $= (E_{RB})^{2}(E_{TA})^{3}\left(E_{RS}\right) = 0.0002$.
Arrival time $t_{BSB} = 4\times 0.600/2.400 + 3\times 0.010/0.600 = 1.050\ \mathrm{s}$.

The results are summarized in Table 3.8b.

Table 3.8b. Amplitude/energy density of primary/multiple reflections.

Event   t          Amplitude   20 log(A/A_B)   Energy
B       0.517 s    0.359       0 dB            0.020
BSA     0.550      -0.263      -2.7            0.011
C       1.017      0.056       -16.1           0.0004
BAB     1.017      -0.054      -16.4           0.0004
BSB     1.050      -0.034      -20.5           0.0002

BSA arrives 33 ms after $B$ (one period for a 33-Hz wave) with reversed polarity and about 75% of the amplitude and 50% of the energy of $B$, so BSA will significantly alter the waveshape of $B$. BSA involves an extra bounce at the surface and is a type of ghost whose effect is mainly that of changing the waveshape rather than showing up as a distinct event. $C$ and BAB arrive simultaneously with opposite polarities, $C$ being slightly stronger than BAB; the multiple will obscure and significantly alter the waveshape of the primary reflection.

The surface multiple BSB is smaller than the multiple from the base of the near-surface layer, BAB; on land the base of the near-surface layer is often the most important interface in generating multiples.

Problem 3.8b

Recalculate for 15- and 75-Hz waves allowing for absorption.
Solution

The absorption coefficient $\eta$ has the values 0.45, 0.30, and 0.25 dB/$\lambda$ in layers SA, $AB$, and $BC$, respectively. Using $\Delta z$ for the layer thicknesses, the results are given in Table 3.8c.

Table 3.8c. Absorption for one-way travel in each layer.

                                f = 15 Hz                  f = 75 Hz
Layer   Velocity    Δz      λ       Δz/λ    ηΔz        λ      Δz/λ    ηΔz
SA      600 m/s     10 m    40 m    0.25    0.11 dB    8 m    1.25    0.56 dB
AB      2400        600     160     3.75    1.12       32     18.8    5.64
BC      3200        800     213     3.76    0.94       43     18.6    4.65

For 15-Hz waves, the travelpath for reflection $B$ involves two-way travel through $AB$ and one-way travel through $SA$; hence the attenuation due to absorption is $2\times 1.12 + 0.11 = 2.35$ dB, the amplitude being decreased by the factor 0.763. For the multiple BSA we add the attenuation for the extra two-way path through $SA$ to the attenuation for $B$, giving 2.57 dB, or an amplitude reduction of 0.744. For reflection $C$, we add to the attenuation of reflection $B$ the attenuation for the two-way travel through $BC$, giving 4.23 dB, and an amplitude ratio of 0.614. For the multiple BAB we get an attenuation of 4.59 dB, an amplitude ratio of 0.590. For BSB, the attenuation is 4.81 dB and the amplitude ratio is 0.575.

Attenuation for 75 Hz is 5 times that for 15 Hz because $\lambda$ is only one-fifth that for 15 Hz, hence $\Delta z/\lambda$ will be five times greater.
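The per-event attenuations above follow by summing the one-way layer losses of Table 3.8c along each raypath. A minimal Python sketch (not part of the original solution; the 15-Hz one-way losses are taken from the table, and the transit counts are read off the raypaths described in the text):

```python
# One-way absorption loss (eta * delta-z, in dB) per layer at 15 Hz,
# from Table 3.8c.
loss_15 = {"SA": 0.11, "AB": 1.12, "BC": 0.94}

# Number of one-way transits of each layer for each event
# (source just below interface A, geophone at the surface).
transits = {
    "B":   {"SA": 1, "AB": 2},
    "BSA": {"SA": 3, "AB": 2},
    "C":   {"SA": 1, "AB": 2, "BC": 2},
    "BAB": {"SA": 1, "AB": 4},
    "BSB": {"SA": 3, "AB": 4},
}

att = {}  # total absorption attenuation per event, in dB
for event, counts in transits.items():
    att[event] = sum(loss_15[layer] * n for layer, n in counts.items())
    ratio = 10 ** (-att[event] / 20)  # convert dB to an amplitude factor
    print(f"{event}: {att[event]:.2f} dB, amplitude ratio {ratio:.3f}")
```

For the 75-Hz case, scale the one-way losses by five (the wavelength is one-fifth as long), as noted in the text.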
Table 3.8d repeats the reflection amplitudes in Table 3.8b to compare them with the amplitudes after allowing for absorption at 15 Hz and 75 Hz.

Table 3.8d. Illustrating the effect of absorption.

Event   A       ΣηΔz(15)    Ratio(15)   A_a(15)    ΣηΔz(75)    Ratio(75)   A_a(75)
B       0.359   2.35 dB     0.763       0.274      11.8 dB     0.257       0.092
BSA     -0.263  2.57        0.744       -0.196     12.8        0.229       0.060
C       0.056   4.23        0.614       0.034      21.2        0.087       0.005
BAB     -0.054  4.59        0.590       -0.032     23.0        0.071       0.004
BSB     -0.034  4.81        0.575       -0.020     24.0        0.063       0.002

Problem 3.8c

Recalculate amplitudes for divergence without absorption. Normalize values by letting the divergence effect of reflection $B$ be unity.

Solution

Divergence depends upon the distance traveled, not upon the traveltime. In Table 3.8e, $L$ is the distance traveled by the event in column 1 (assuming normal incidence), $F$ is the divergence factor obtained by dividing $L_{B}$ by $L$, $A_{\rm no~div}$ is the reflection amplitude from Table 3.8b, $A_{\rm div} = F\times A_{\rm no~div}$, and the column headed dB is $A_{\rm div}$ expressed in decibels.

Divergence generally affects multiples less than primaries with the same traveltime because they travel at lower velocities and therefore have not gone as far. Thus, allowing for divergence, $C$ is weaker than BAB rather than slightly stronger.

Problem 3.8d

Summarize your conclusions regarding (i) the importance of multiples and (ii) the relative importance of absorption and divergence.
Solution

The 3rd column of Table 3.8f gives the attenuation because of reflectivity only, and the following columns also include the effects of reflectivity changes. The 4th column shows the changes because of absorption beginning at the source, whereas the 5th and following columns are referenced to reflection $B$.

Comparing multiples with primaries involves considering interference, noting that the three multiples all have opposite polarity to the primaries. Multiples can strongly affect the waveshape of primaries with which they interfere, as well as being confused with primaries. As noted earlier, absorption and divergence effects for multiples are different than for primaries because of differences in the distances traveled.

Table 3.8e. Effect of divergence.

Event   L (m)   F       A_no div    A_div   dB
B       1210    1.000   0.359       0.359   0.0
BSA     1230    0.984   -0.263      -0.259  -2.8
C       2810    0.431   0.056       0.024   -23.8
BAB     2410    0.502   -0.054      -0.027  -23.5
BSB     2430    0.498   -0.034      -0.017  -26.6

Note: A minus sign on an amplitude indicates a 180° phase shift.

Table 3.8f. Effects of absorption and divergence.

Event   Time      Reflect. only   75 Hz abs, no div   75 Hz abs, no div (ref B)   Div, no abs (ref B)   Abs and div (ref B)
B       0.517 s   0 dB            -11.8 dB            0 dB                        0 dB                  0 dB
BSA     0.550     -2.7            -12.8               -1.0                        -2.8                  -15.6
C       1.017     -16.1           -21.2               -9.4                        -23.8                 -45.0
BAB     1.017     -16.5           -23.0               -11.2                       -22.5                 -45.0
BSB     1.050     -20.5           -24.0               -12.2                       -26.6                 -50.6

Divergence is more important than absorption for early arrival times, whereas the opposite is true for longer arrival times. This effect is not well illustrated by this problem.
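As a numerical cross-check on parts (a) and (c), the tables above can be recomputed from the model parameters. The following Python sketch is ours, not part of the book's solution; impedances are taken from Table 3.8a, path lengths from the layer thicknesses (SA = 10 m, AB = 600 m, BC = 800 m), and small differences from the tabulated values are due to rounding:

```python
import math

# Layer impedances from Table 3.8a; interface X lies between the two
# layers in its tuple (incidence from above).
Z = {"A": (0.870, 5.640), "B": (5.640, 8.576), "C": (8.576, 9.180)}

R = {x: (z2 - z1) / (z2 + z1) for x, (z1, z2) in Z.items()}  # reflection coeff.
R["S"] = 1.000                            # free surface (sign handled below)
T_dn = {x: 1 - r for x, r in R.items()}   # downgoing transmission, 1 - R
T_up = {x: 1 + r for x, r in R.items()}   # upgoing transmission, 1 + R

# Part (a): event amplitudes for a unit downgoing wave just below A.
amp = {
    "B":   R["B"] * T_up["A"],
    "C":   T_dn["B"] * R["C"] * T_up["B"] * T_up["A"],
    "BSA": R["B"] * T_up["A"] * (-R["S"]) * R["A"],
    "BAB": R["B"] ** 2 * (-R["A"]) * T_up["A"],
    "BSB": R["B"] ** 2 * T_up["A"] ** 2 * T_dn["A"] * (-R["S"]),
}

# Part (c): divergence factor F = L_B / L from the normal-incidence
# path lengths of Table 3.8e.
L = {"B": 1210, "BSA": 1230, "C": 2810, "BAB": 2410, "BSB": 2430}

for event, a in amp.items():
    db = 20 * math.log10(abs(a) / abs(amp["B"]))  # reflectivity only, ref B
    F = L["B"] / L[event]
    print(f"{event}: A = {a:+.3f} ({db:5.1f} dB), F = {F:.3f}, "
          f"A_div = {F * a:+.3f}")
```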
http://gnu.wiki/man1/guestfish.1.php
GNU.WIKI: The GNU/Linux Knowledge Base

#### NAME

guestfish - the guest filesystem shell

#### SYNOPSIS

 guestfish [--options] [commands]
 guestfish
 guestfish [--ro|--rw] -a disk.img
 guestfish [--ro|--rw] -a disk.img -m dev[:mountpoint]
 guestfish -d libvirt-domain
 guestfish [--ro|--rw] -a disk.img -i
 guestfish -d libvirt-domain -i

#### WARNING

Using guestfish in read/write mode on live virtual machines can be dangerous, potentially causing disk corruption. Use the --ro (read-only) option to use guestfish safely if the disk image or virtual machine might be live.

#### DESCRIPTION

Guestfish is a shell and command-line tool for examining and modifying virtual machine filesystems. It uses libguestfs and exposes all of the functionality of the guestfs API; see guestfs(3). It can be used from shell scripts, from the command line, or interactively.

If you want to rescue a broken virtual machine image, you should look at the virt-rescue(1) command.

#### EXAMPLES

As an interactive shell

 $ guestfish

 Welcome to guestfish, the guest filesystem shell for
 editing virtual machine filesystems.

 Type: 'help' for a list of commands
       'man' to read the manual
       'quit' to quit the shell

 ><fs> add-ro disk.img
 ><fs> run
 ><fs> list-filesystems
 /dev/sda1: ext4
 /dev/vg_guest/lv_root: ext4
 /dev/vg_guest/lv_swap: swap
 ><fs> mount /dev/vg_guest/lv_root /
 ><fs> cat /etc/fstab
 # /etc/fstab
 # Created by anaconda
 [...]
 ><fs> exit

From shell scripts

Create a new "/etc/motd" file in a guest or disk image:

 guestfish <<_EOF_
 add disk.img
 run
 mount /dev/vg_guest/lv_root /
 write /etc/motd "Welcome, new users"
 _EOF_

List the LVM logical volumes in a disk image:

 guestfish -a disk.img --ro <<_EOF_
 run
 lvs
 _EOF_

List all the filesystems in a disk image:

 guestfish -a disk.img --ro <<_EOF_
 run
 list-filesystems
 _EOF_

On one command line

Update "/etc/resolv.conf" in a guest:

 guestfish \
   add disk.img : run : mount /dev/vg_guest/lv_root / : \
   write /etc/resolv.conf "nameserver 1.2.3.4"

Edit "/boot/grub/grub.conf" interactively:

 guestfish --rw --add disk.img \
   --mount /dev/vg_guest/lv_root \
   --mount /dev/sda1:/boot \
   edit /boot/grub/grub.conf

Mount disks automatically

Use the -i option to automatically mount the disks from a virtual machine:

 guestfish --ro -a disk.img -i cat /etc/group

 guestfish --ro -d libvirt-domain -i cat /etc/group

Another way to edit "/boot/grub/grub.conf" interactively is:

 guestfish --rw -a disk.img -i edit /boot/grub/grub.conf

As a script interpreter

Create a 100MB disk containing an ext2-formatted partition:

 #!/usr/bin/guestfish -f
 sparse test1.img 100M
 run
 part-disk /dev/sda mbr
 mkfs ext2 /dev/sda1

Start with a prepared disk

An alternate way to create a 100MB disk called "test1.img" containing a single ext2-formatted partition:

 guestfish -N fs

To list what is available do:

 guestfish -N help | less

Remote drives

Access a remote disk using ssh:

 guestfish -a ssh://example.com/path/to/disk.img

Remote control

 eval "$(guestfish --listen)"
 guestfish --remote add-ro disk.img
 guestfish --remote run
 guestfish --remote lvs

#### OPTIONS

--help Displays general help on options.

-h --cmd-help Lists all available guestfish commands.

-h cmd --cmd-help cmd Displays detailed help on a single command "cmd".

-a image --add image Add a block device or virtual machine image to the shell. The format of the disk image is auto-detected.
To override this and force a particular format use the --format=.. option. Using this flag is mostly equivalent to using the "add" command, with "readonly:true" if the --ro flag was given, and with "format:..." if the --format=... flag was given. -a URI --add URI Add a remote disk. See "ADDING REMOTE STORAGE". -c URI --connect URI When used in conjunction with the -d option, this specifies the libvirt URI to use. The default is to use the default libvirt connection. --csh If using the --listen option and a csh-like shell, use this option. See section "REMOTE CONTROL AND CSH" below. -d libvirt-domain --domain libvirt-domain Add disks from the named libvirt domain. If the --ro option is also used, then any libvirt domain can be used. However in write mode, only libvirt domains which are shut down can be named here. Domain UUIDs can be used instead of names. Using this flag is mostly equivalent to using the "add-domain" command, with "readonly:true" if the --ro flag was given, and with "format:..." if the --format=... flag was given. --echo-keys When prompting for keys and passphrases, guestfish normally turns echoing off so you cannot see what you are typing. If you are not worried about Tempest attacks and there is no one else in the room you can specify this flag to see what you are typing. -f file --file file Read commands from "file". To write pure guestfish scripts, use: #!/usr/bin/guestfish -f --format=raw|qcow2|.. --format The default for the -a option is to auto-detect the format of the disk image. Using this forces the disk format for -a options which follow on the command line. Using --format with no argument switches back to auto-detection for subsequent -a options. For example: guestfish --format=raw -a disk.img forces raw format (no auto-detection) for "disk.img". guestfish --format=raw -a disk.img --format -a another.img forces raw format (no auto-detection) for "disk.img" and reverts to auto-detection for "another.img". 
If you have untrusted raw-format guest disk images, you should use this option to specify the disk format. This avoids a possible security problem with malicious guests (CVE-2010-3851). See also "add". -i --inspector Using virt-inspector(1) code, inspect the disks looking for an operating system and mount filesystems as they would be mounted on the real virtual machine. Typical usage is either: guestfish -d myguest -i (for an inactive libvirt domain called myguest), or: guestfish --ro -d myguest -i (for active domains, readonly), or specify the block device directly: guestfish --rw -a /dev/Guests/MyGuest -i Note that the command line syntax changed slightly over older versions of guestfish. You can still use the old syntax: guestfish [--ro] -i disk.img guestfish [--ro] -i libvirt-domain Using this flag is mostly equivalent to using the "inspect-os" command and then using other commands to mount the filesystems that were found. --keys-from-stdin Read key or passphrase parameters from stdin. The default is to try to read passphrases from the user by opening "/dev/tty". --listen Fork into the background and listen for remote commands. See section "REMOTE CONTROL GUESTFISH OVER A SOCKET" below. --live Connect to a live virtual machine. (Experimental, see "ATTACHING TO RUNNING DAEMONS" in guestfs(3)). -m dev[:mountpoint[:options[:fstype]]] --mount dev[:mountpoint[:options[:fstype]]] Mount the named partition or logical volume on the given mountpoint. If the mountpoint is omitted, it defaults to "/". You have to mount something on "/" before most commands will work. If any -m or --mount options are given, the guest is automatically launched. If you don't know what filesystems a disk image contains, you can either run guestfish without this option, then list the partitions, filesystems and LVs available (see "list-partitions", "list- filesystems" and "lvs" commands), or you can use the virt-filesystems(1) program. 
The third (and rarely used) part of the mount parameter is the list of mount options used to mount the underlying filesystem. If this is not given, then the mount options are either the empty string or "ro" (the latter if the --ro flag is used). By specifying the mount options, you override this default choice. Probably the only time you would use this is to enable ACLs and/or extended attributes if the filesystem can support them: -m /dev/sda1:/:acl,user_xattr Using this flag is equivalent to using the "mount-options" command. The fourth part of the parameter is the filesystem driver to use, such as "ext3" or "ntfs". This is rarely needed, but can be useful if multiple drivers are valid for a filesystem (eg: "ext2" and "ext3"), or if libguestfs misidentifies a filesystem. --network Enable QEMU user networking in the guest. -N [filename=]type --new [filename=]type -N help Prepare a fresh disk image formatted as "type". This is an alternative to the -a option: whereas -a adds an existing disk, -N creates a preformatted disk with a filesystem and adds it. See "PREPARED DISK IMAGES" below. -n --no-sync Disable autosync. This is enabled by default. See the discussion of autosync in the guestfs(3) manpage. --no-dest-paths Don't tab-complete paths on the guest filesystem. It is useful to be able to hit the tab key to complete paths on the guest filesystem, but this causes extra "hidden" guestfs calls to be made, so this option is here to allow this feature to be disabled. --pipe-error If writes fail to pipe commands (see "PIPES" below), then the command returns an error. The default (also for historical reasons) is to ignore such errors so that: ><fs> command_with_lots_of_output | head doesn't give an error. --progress-bars Enable progress bars, even when guestfish is used non- interactively. Progress bars are enabled by default when guestfish is used as an interactive shell. --no-progress-bars Disable progress bars. 
--remote[=pid] Send remote commands to $GUESTFISH_PID or "pid". See section "REMOTE CONTROL GUESTFISH OVER A SOCKET" below.

-r --ro This changes the -a, -d and -m options so that disks are added and mounts are done read-only.

The option must always be used if the disk image or virtual machine might be running, and is generally recommended in cases where you don't need write access to the disk.

Note that prepared disk images created with -N are not affected by this option. Also commands like "add" are not affected - you have to specify the "readonly:true" option explicitly if you need it.

--selinux Enable SELinux support for the guest. See "SELINUX" in guestfs(3).

-v --verbose Enable very verbose messages. This is particularly useful if you find a bug.

-V --version Display the guestfish / libguestfs version number and exit.

-w --rw This changes the -a, -d and -m options so that disks are added and mounts are done read-write.

See "OPENING DISKS FOR READ AND WRITE" below.

-x Echo each command before executing it.

#### COMMANDS ON COMMAND LINE

Any additional (non-option) arguments are treated as commands to execute.

Commands to execute should be separated by a colon (":"), where the colon is a separate parameter. Thus:

 guestfish cmd [args...] : cmd [args...] : cmd [args...] ...

If there are no additional arguments, then we enter a shell, either an interactive shell with a prompt (if the input is a terminal) or a non-interactive shell.

In either command line mode or non-interactive shell, the first command that gives an error causes the whole shell to exit. In interactive mode (with a prompt), if a command fails, you can continue to enter commands.

#### USING launch (OR run)

As with guestfs(3), you must first configure your guest by adding disks, then launch it, then mount any disks you need, and finally issue actions/commands. So the general order of the day is:

· add or -a/--add

· launch (aka run)

· mount or -m/--mount

· any other commands

"run" is a synonym for "launch". You must "launch" (or "run") your guest before mounting or performing any other commands.
The only exception is that if any of the -i, -m, --mount, -N or --new options were given then "run" is done automatically, simply because guestfish can't perform the action you asked for without doing this.

#### OPENING DISKS FOR READ AND WRITE

The guestfish, guestmount(1) and virt-rescue(1) options --ro and --rw affect whether the other command line options -a, -c, -d, -i and -m open disk images read-only or for writing.

In libguestfs ≤ 1.10, guestfish, guestmount and virt-rescue defaulted to opening disk images supplied on the command line for write. To open a disk image read-only you have to do -a image --ro.

This matters: if you accidentally open a live VM disk image writable then you will cause irreversible disk corruption.

In a future libguestfs we intend to change the default the other way. Disk images will be opened read-only. You will have to either specify guestfish --rw, guestmount --rw, virt-rescue --rw, or change the configuration file in order to get write access for disk images specified by those other command line options.

This version of guestfish, guestmount and virt-rescue has a --rw option which does nothing (it is already the default). However it is highly recommended that you use this option to indicate that you need write access, and prepare your scripts for the day when this option will be required for write access.

Note: This does not affect commands like "add" and "mount", or any other libguestfs program apart from guestfish and guestmount.

#### QUOTING

You can quote ordinary parameters using either single or double quotes. For example:

 rm '/file name'
 rm '/"'

A few commands require a list of strings to be passed. For these, use a whitespace-separated list, enclosed in quotes. Strings containing whitespace to be passed through must be enclosed in single quotes. A literal single quote must be escaped with a backslash.
 vgcreate VG "/dev/sda1 /dev/sdb1"
 command "/bin/echo 'foo bar'"
 command "/bin/echo \'foo\'"

ESCAPE SEQUENCES IN DOUBLE QUOTED ARGUMENTS

In double-quoted arguments (only) use backslash to insert special characters:

"\a" Alert (bell) character.

"\b" Backspace character.

"\f" Form feed character.

"\n" Newline character.

"\r" Carriage return character.

"\t" Horizontal tab character.

"\v" Vertical tab character.

"\"" A literal double quote character.

"\ooo" A character with octal value ooo. There must be precisely 3 octal digits (unlike C).

"\xhh" A character with hex value hh. There must be precisely 2 hex digits.

In the current implementation "\000" and "\x00" cannot be used in strings.

"\\" A literal backslash character.

#### OPTIONAL ARGUMENTS

Some commands take optional arguments. These arguments appear in this documentation as "[argname:..]". You can use them as in these examples:

Each optional argument can appear at most once. All optional arguments must appear after the required ones.

#### NUMBERS

This section applies to all commands which can take integers as parameters.

SIZE SUFFIX

When the command takes a parameter measured in bytes, you can use one of the following suffixes to specify kilobytes, megabytes and larger sizes:

k or K or KiB The size in kilobytes (multiplied by 1024).

KB The size in SI 1000 byte units.

M or MiB The size in megabytes (multiplied by 1048576).

MB The size in SI 1000000 byte units.

G or GiB The size in gigabytes (multiplied by 2**30).

GB The size in SI 10**9 byte units.

T or TiB The size in terabytes (multiplied by 2**40).

TB The size in SI 10**12 byte units.

P or PiB The size in petabytes (multiplied by 2**50).

PB The size in SI 10**15 byte units.

E or EiB The size in exabytes (multiplied by 2**60).

EB The size in SI 10**18 byte units.

Z or ZiB The size in zettabytes (multiplied by 2**70).

ZB The size in SI 10**21 byte units.

Y or YiB The size in yottabytes (multiplied by 2**80).

YB The size in SI 10**24 byte units.
For example:

 truncate-size /file 1G

would truncate the file to 1 gigabyte.

Be careful because a few commands take sizes in kilobytes or megabytes (eg. the parameter to "memsize" is specified in megabytes already). Adding a suffix will probably not do what you expect.

For specifying the radix (base) use the C convention: 0 to prefix an octal number or "0x" to prefix a hexadecimal number. For example:

 1234     decimal number 1234
 02322    octal number, equivalent to decimal 1234
 0x4d2    hexadecimal number, equivalent to decimal 1234

When using the "chmod" command, you almost always want to specify an octal number for the mode, and you must prefix it with 0 (unlike the Unix chmod(1) program):

 chmod 0777 /public    # OK
 chmod 777 /public     # WRONG! This is mode 777 decimal = 01411 octal.

Commands that return numbers usually print them in decimal, but some commands print numbers in other radices (eg. "umask" prints the mode in octal, preceded by 0).

#### WILDCARDS AND GLOBBING

Neither guestfish nor the underlying guestfs API performs wildcard expansion (globbing) by default. So for example the following will not do what you expect:

 rm-rf /home/*

Assuming you don't have a directory called literally "/home/*" then the above command will return an error.

To perform wildcard expansion, use the "glob" command.

 glob rm-rf /home/*

runs "rm-rf" on each path that matches (ie. potentially running the command many times), equivalent to:

 rm-rf /home/jim
 rm-rf /home/joe
 rm-rf /home/mary

"glob" only works on simple guest paths and not on device names.

If you have several parameters, each containing a wildcard, then glob will perform a Cartesian product.

#### COMMENTS

Any line which starts with a # character is treated as a comment and ignored. The # can optionally be preceded by whitespace, but not by a command. For example:

 # this is a comment
         # this is a comment
 foo # NOT a comment

Blank lines are also ignored.

#### RUNNING COMMANDS LOCALLY

Any line which starts with a !
character is treated as a command sent to the local shell ("/bin/sh" or whatever system(3) uses). For example:

 !mkdir local
 tgz-out /remote local/remote-data.tar.gz

will create a directory "local" on the host, and then export the contents of "/remote" on the mounted filesystem to "local/remote-data.tar.gz". (See "tgz-out").

To change the local directory, use the "lcd" command. "!cd" will have no effect, due to the way that subprocesses work in Unix.

LOCAL COMMANDS WITH INLINE EXECUTION

If a line starts with <! then the shell command is executed (as for !), but subsequently any output (stdout) of the shell command is parsed and executed as guestfish commands.

Thus you can use shell script to construct arbitrary guestfish commands which are then parsed by guestfish.

For example it is tedious to create a sequence of files (eg. "/foo.1" through "/foo.100") using guestfish commands alone. However this is simple if we use a shell script to create the guestfish commands for us:

 <! for n in `seq 1 100`; do echo write /foo.$n $n; done

or with names like "/foo.001":

 <! for n in `seq 1 100`; do printf "write /foo.%03d %d\n" $n $n; done

When using guestfish interactively it can be helpful to just run the shell script first (ie. remove the initial "<" character so it is just an ordinary ! local command), see what guestfish commands it would run, and when you are happy with those prepend the "<" character to run the guestfish commands for real.

#### PIPES

Use "command <space> | command" to pipe the output of the first command (a guestfish command) to the second command (any host command). For example:

 cat /etc/passwd | awk -F: '$3 == 0 { print }'

(where "cat" is the guestfish cat command, but "awk" is the host awk program). The above command would list all accounts in the guest filesystem which have UID 0, ie. root accounts including backdoors.
Other examples:

    hexdump /bin/ls | head
    list-devices | tail -1
    tgz-out / - | tar ztf -

The space before the pipe symbol is required, any space after the pipe symbol is optional. Everything after the pipe symbol is just passed straight to the host shell, so it can contain redirections, globs and anything else that makes sense on the host side.

To use a literal argument which begins with a pipe symbol, you have to quote it, eg:

    echo "|"

#### HOME DIRECTORIES

If a parameter starts with the character "~" then the tilde may be expanded as a home directory path (either "~" for the current user's home directory, or "~user" for another user).

Note that home directory expansion happens for users known on the host, not in the guest filesystem.

To use a literal argument which begins with a tilde, you have to quote it, eg:

    echo "~"

#### ENCRYPTED DISKS

Libguestfs has some support for Linux guests encrypted according to the Linux Unified Key Setup (LUKS) standard, which includes nearly all whole disk encryption systems used by modern Linux guests. Currently only LVM-on-LUKS is supported.

Identify encrypted block devices and partitions using "vfs-type":

    ><fs> vfs-type /dev/sda2
    crypto_LUKS

Then open those devices using "luks-open". This creates a device-mapper device called "/dev/mapper/luksdev".

    ><fs> luks-open /dev/sda2 luksdev
    Enter key or passphrase ("key"): <enter the passphrase>

Finally you have to tell LVM to scan for volume groups on the newly created mapper device:

    vgscan
    vg-activate-all true

The logical volume(s) can now be mounted in the usual way.

Before closing a LUKS device you must unmount any logical volumes on it and deactivate the volume groups by calling "vg-activate false VG" on each one. Then you can close the mapper device:

    vg-activate false /dev/VG
    luks-close /dev/mapper/luksdev

#### WINDOWS PATHS

If a path is prefixed with "win:" then you can use Windows-style drive letters and paths (with some limitations).
The following commands are equivalent:

    file /WINDOWS/system32/config/system.LOG
    file win:\windows\system32\config\system.log
    file WIN:C:\Windows\SYSTEM32\CONFIG\SYSTEM.LOG

The parameter is rewritten "behind the scenes" by looking up the position where the drive is mounted, prepending that to the path, changing all backslash characters to forward slash, then resolving the result using "case-sensitive-path". For example if the E: drive was mounted on "/e" then the parameter might be rewritten like this:

    win:e:\foo\bar => /e/FOO/bar

This only works in argument positions that expect a path.

#### UPLOADING AND DOWNLOADING FILES

For commands such as "upload", "download", "tar-in", "tar-out" and others which upload from or download to a local file, you can use the special filename "-" to mean "from stdin" or "to stdout". For example:

    upload - /foo

reads stdin and creates from that a file "/foo" in the disk image, and:

    tar-out /etc - | tar tf -

writes the tarball to stdout and then pipes that into the external "tar" command (see "PIPES").

When using "-" to read from stdin, the input is read up to the end of stdin. You can also use a special "heredoc"-like syntax to read up to some arbitrary end marker:

    upload -<<END /foo
    input line 1
    input line 2
    input line 3
    END

Any string of characters can be used instead of "END". The end marker must appear on a line of its own, without any preceding or following characters (not even spaces).

Note that the "-<<" syntax only applies to parameters used to upload local files (so-called "FileIn" parameters in the generator).

#### EXIT ON ERROR BEHAVIOUR

By default, guestfish will ignore any errors when in interactive mode (ie. taking commands from a human over a tty), and will exit on the first error in non-interactive mode (scripts, commands given on the command line).

If you prefix a command with a - character, then that command will not cause guestfish to exit, even if that (one) command returns an error.
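As a sketch of how this is used in practice (the disk image and file names below are hypothetical), a non-interactive script can mark just its optional steps with the - prefix:

```shell
# Build a guestfish script in which only the rm may fail;
# every other command still aborts the script on error.
cat > /tmp/cleanup.fish <<'EOF'
-rm /var/log/old.log
mount /dev/sda1 /
EOF

# It would then be run non-interactively, for example:
#   guestfish -a disk.img /tmp/cleanup.fish   (not run here)

# Count the error-tolerant commands (those prefixed with -):
grep -c '^-' /tmp/cleanup.fish
```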
#### REMOTE CONTROL GUESTFISH OVER A SOCKET

Guestfish can be remote-controlled over a socket. This is useful particularly in shell scripts where you want to make several different changes to a filesystem, but you don't want the overhead of starting up a guestfish process each time.

Start a guestfish server process using:

    eval "$(guestfish --listen)"

and then send it commands by doing:

    guestfish --remote cmd [...]

To cause the server to exit, send it the exit command:

    guestfish --remote exit

Note that the server will normally exit if there is an error in a command. You can change this in the usual way. See section "EXIT ON ERROR BEHAVIOUR".

CONTROLLING MULTIPLE GUESTFISH PROCESSES

The "eval" statement sets the environment variable $GUESTFISH_PID, which is how the --remote option knows where to send the commands. You can have several guestfish listener processes running using:

    eval "$(guestfish --listen)"
    pid1=$GUESTFISH_PID
    eval "$(guestfish --listen)"
    pid2=$GUESTFISH_PID
    ...
    guestfish --remote=$pid1 cmd
    guestfish --remote=$pid2 cmd

REMOTE CONTROL AND CSH

When using csh-like shells (csh, tcsh etc) you have to add the --csh option:

    eval "`guestfish --listen --csh`"

REMOTE CONTROL DETAILS

Remote control happens over a Unix domain socket called "/tmp/.guestfish-$UID/socket-$PID", where $UID is the effective user ID of the process, and $PID is the process ID of the server.

Guestfish client and server versions must match exactly.

Older versions of guestfish were vulnerable to CVE-2013-4419 (see "CVE-2013-4419" in guestfs(3)). This is fixed in the current version.
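The socket naming rule above can be reproduced in plain shell; 12345 below is a made-up server PID for illustration:

```shell
# Compute the remote-control socket path for a given guestfish server PID.
uid=$(id -u)        # effective user ID of this process
pid=12345           # hypothetical server PID ($GUESTFISH_PID in real use)
sock="/tmp/.guestfish-$uid/socket-$pid"
echo "$sock"
```

For example, for a user with UID 1000 this prints "/tmp/.guestfish-1000/socket-12345".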
USING REMOTE CONTROL ROBUSTLY FROM SHELL SCRIPTS

From Bash, you can use the following code which creates a guestfish instance, correctly quotes the command line, handles failure to start, and cleans up guestfish when the script exits:

    #!/bin/bash -
    set -e

    guestfish[0]="guestfish"
    guestfish[1]="--listen"
    guestfish[2]="--ro"
    guestfish[3]="-a"
    guestfish[4]="disk.img"

    GUESTFISH_PID=
    eval $("${guestfish[@]}")
    if [ -z "$GUESTFISH_PID" ]; then
        echo "error: guestfish didn't start up, see error messages above"
        exit 1
    fi

    cleanup_guestfish ()
    {
        guestfish --remote -- exit >/dev/null 2>&1 ||:
    }
    trap cleanup_guestfish EXIT ERR

    guestfish --remote -- run

    # ...

REMOTE CONTROL DOES NOT WORK WITH -a ETC. OPTIONS

Options such as -a, --add, -N, --new etc don't interact properly with remote support. They are processed locally, and not sent through to the remote guestfish. In particular this won't do what you expect:

    guestfish --remote --add disk.img

Don't use these options. Use the equivalent commands instead, eg:

    guestfish --remote add-drive disk.img

or:

    ><fs> add disk.img

REMOTE CONTROL RUN COMMAND HANGING

Using the "run" (or "launch") command remotely in a command substitution context hangs, ie. don't do (note the backquotes):

    a=`guestfish --remote run`

Since the "run" command produces no output on stdout, this is not useful anyway.

For further information see https://bugzilla.redhat.com/show_bug.cgi?id=592910.

#### PREPARED DISK IMAGES

Use the -N [filename=]type or --new [filename=]type parameter to select one of a set of preformatted disk images that guestfish can make for you to save typing. This is particularly useful for testing purposes. This option is used instead of the -a option, and like -a can appear multiple times (and can be mixed with -a).

The new disk is called "test1.img" for the first -N, "test2.img" for the second and so on. Existing files in the current directory are overwritten.
You can use a different filename by specifying "filename=" before the type (see examples below).

The type briefly describes how the disk should be sized, partitioned, how filesystem(s) should be created, and how content should be added. Optionally the type can be followed by extra parameters, separated by ":" (colon) characters. For example, -N fs creates a default 100MB, sparsely-allocated disk, containing a single partition, with the partition formatted as ext2. -N fs:ext4:1G is the same, but for an ext4 filesystem on a 1GB disk instead.

Note that the prepared filesystem is not mounted. You would usually have to use the "mount /dev/sda1 /" command or add the -m /dev/sda1 option.

If any -N or --new options are given, the libguestfs appliance is automatically launched.

EXAMPLES

Create a 100MB disk with an ext4-formatted partition, called "test1.img" in the current directory:

    guestfish -N fs:ext4

Create a 32MB disk with a VFAT-formatted partition, and mount it:

    guestfish -N fs:vfat:32M -m /dev/sda1

Create a blank 200MB disk:

    guestfish -N disk:200M

Create a blank 200MB disk called "blankdisk.img" (instead of "test1.img"):

    guestfish -N blankdisk.img=disk:200M

-N disk - create a blank disk

    guestfish -N [filename=]disk[:size]

Create a blank disk, size 100MB (by default).

The default size can be changed by supplying an optional parameter.

The optional parameters are:

    Name       Default value
    size       100M           the size of the disk image

-N part - create a partitioned disk

    guestfish -N [filename=]part[:size[:partition]]

Create a disk with a single partition. By default the size of the disk is 100MB (the available space in the partition will be a tiny bit smaller) and the partition table will be MBR (old DOS-style).

These defaults can be changed by supplying optional parameters.
The optional parameters are:

    Name       Default value
    size       100M           the size of the disk image
    partition  mbr            partition table type

-N fs - create a filesystem

    guestfish -N [filename=]fs[:filesystem[:size[:partition]]]

Create a disk with a single partition, with the partition containing an empty filesystem. This defaults to creating a 100MB disk (the available space in the filesystem will be a tiny bit smaller) with an MBR (old DOS-style) partition table and an ext2 filesystem.

These defaults can be changed by supplying optional parameters.

The optional parameters are:

    Name        Default value
    filesystem  ext2           the type of filesystem to use
    size        100M           the size of the disk image
    partition   mbr            partition table type

-N lv - create a disk with logical volume

    guestfish -N [filename=]lv[:name[:size[:partition]]]

Create a disk with a single partition, set up the partition as an LVM2 physical volume, and place a volume group and logical volume on there. This defaults to creating a 100MB disk with the VG and LV called "/dev/VG/LV". You can change the name of the VG and LV by supplying an alternate name as the first optional parameter.

Note this does not create a filesystem. Use 'lvfs' to do that.

The optional parameters are:

    Name       Default value
    name       /dev/VG/LV     the name of the VG and LV to use
    size       100M           the size of the disk image
    partition  mbr            partition table type

-N lvfs - create a disk with logical volume and filesystem

    guestfish -N [filename=]lvfs[:name[:filesystem[:size[:partition]]]]

Create a disk with a single partition, set up the partition as an LVM2 physical volume, and place a volume group and logical volume on there. Then format the LV with a filesystem.

This defaults to creating a 100MB disk with the VG and LV called "/dev/VG/LV", with an ext2 filesystem.
The optional parameters are:

    Name        Default value
    name        /dev/VG/LV     the name of the VG and LV to use
    filesystem  ext2           the type of filesystem to use
    size        100M           the size of the disk image
    partition   mbr            partition table type

-N bootroot - create a boot and root filesystem

    guestfish -N [filename=]bootroot[:bootfs[:rootfs[:size[:bootsize[:partition]]]]]

Create a disk with two partitions, for boot and root filesystem. Format the two filesystems independently. There are several optional parameters which control the exact layout and filesystem types.

The optional parameters are:

    Name       Default value
    bootfs     ext2           the type of filesystem to use for boot
    rootfs     ext2           the type of filesystem to use for root
    size       100M           the size of the disk image
    bootsize   32M            the size of the boot filesystem
    partition  mbr            partition table type

-N bootrootlv - create a boot and root filesystem using LVM

    guestfish -N [filename=]bootrootlv[:name[:bootfs[:rootfs[:size[:bootsize[:partition]]]]]]

This is the same as "bootroot" but the root filesystem (only) is placed on a logical volume, named by default "/dev/VG/LV". There are several optional parameters which control the exact layout.

The optional parameters are:

    Name       Default value
    name       /dev/VG/LV     the name of the VG and LV for root
    bootfs     ext2           the type of filesystem to use for boot
    rootfs     ext2           the type of filesystem to use for root
    size       100M           the size of the disk image
    bootsize   32M            the size of the boot filesystem
    partition  mbr            partition table type

#### ADDING REMOTE STORAGE

For API-level documentation on this topic, see "guestfs_add_drive_opts" in guestfs(3) and "REMOTE STORAGE" in guestfs(3).

On the command line, you can use the -a option to add network block devices using a URI-style format, for example:

    guestfish -a ssh://root@example.com/disk.img

URIs cannot be used with the "add" command. The equivalent command using the API directly is:

    ><fs> add /disk.img protocol:ssh server:tcp:example.com username:root

The possible -a URI formats are described below.
-a disk.img
-a file:///path/to/disk.img

    Add the local disk image (or device) called "disk.img".

-a ftp://[user@]example.com[:port]/disk.img
-a ftps://[user@]example.com[:port]/disk.img
-a http://[user@]example.com[:port]/disk.img
-a https://[user@]example.com[:port]/disk.img
-a tftp://[user@]example.com[:port]/disk.img

    Add a disk located on a remote FTP, HTTP or TFTP server.

    The equivalent API command would be:

        ><fs> add /disk.img protocol:(ftp|...) server:tcp:example.com

-a gluster://example.com[:port]/volname/image

    Add a disk image located on GlusterFS storage.

    The server is the one running "glusterd", and may be "localhost".

    The equivalent API command would be:

        ><fs> add volname/image protocol:gluster server:tcp:example.com

-a iscsi://example.com[:port]/target-iqn-name[/lun]

    Add a disk located on an iSCSI server.

    The equivalent API command would be:

        ><fs> add target-iqn-name/lun protocol:iscsi server:tcp:example.com

-a nbd://example.com[:port]
-a nbd://example.com[:port]/exportname
-a nbd://?socket=/socket
-a nbd:///exportname?socket=/socket

    Add a disk located on Network Block Device (nbd) storage.

    The /exportname part of the URI specifies an NBD export name, but is usually left empty.

    The optional ?socket parameter can be used to specify a Unix domain socket that we talk to the NBD server over.

    Note that you cannot mix server name (ie. TCP/IP) and socket path.

    The equivalent API command would be (no export name):

        ><fs> add "" protocol:nbd server:[tcp:example.com|unix:/socket]

-a rbd:///pool/disk
-a rbd://example.com[:port]/pool/disk

    Add a disk image located on a Ceph (RBD/librbd) storage volume.

    Although libguestfs and Ceph support multiple servers, only a single server can be specified when using this URI syntax.

    The equivalent API command would be:

        ><fs> add pool/disk protocol:rbd server:tcp:example.com:port

-a sheepdog://[example.com[:port]]/volume/image

    Add a disk image located on a Sheepdog volume.

    The server name is optional.
    Although libguestfs and Sheepdog support multiple servers, at most one server can be specified when using this URI syntax.

    The equivalent API command would be:

        ><fs> add volume protocol:sheepdog [server:tcp:example.com]

-a ssh://[user@]example.com[:port]/disk.img

    Add a disk image located on a remote server, accessed using the Secure Shell (ssh) SFTP protocol. SFTP is supported out of the box by all major SSH servers.

    The equivalent API command would be:

        ><fs> add /disk protocol:ssh server:tcp:example.com [username:user]

#### PROGRESS BARS

Some (not all) long-running commands send progress notification messages as they are running. Guestfish turns these messages into progress bars.

When a command that supports progress bars takes longer than two seconds to run, and if progress bars are enabled, then you will see one appearing below the command:

    ><fs> copy-size /large-file /another-file 2048M
    / 10% [#####-----------------------------------------] 00:30

The spinner on the left hand side moves round once for every progress notification received from the backend. This is a (reasonably) golden assurance that the command is "doing something" even if the progress bar is not moving, because the command is able to send the progress notifications. When the bar reaches 100% and the command finishes, the spinner disappears.

Progress bars are enabled by default when guestfish is used interactively. You can enable them even for non-interactive modes using --progress-bars, and you can disable them completely using --no-progress-bars.

#### PROMPT

You can change or add colours to the default prompt ("><fs>") by setting the "GUESTFISH_PS1" environment variable.

A second string ("GUESTFISH_OUTPUT") is printed after the command has been entered and before the output, allowing you to control the colour of the output.

A third string ("GUESTFISH_INIT") is printed before the welcome message, allowing you to control the colour of that message.
A fourth string ("GUESTFISH_RESTORE") is printed before guestfish exits.

A simple prompt can be set by setting "GUESTFISH_PS1" to an alternate string:

    $ GUESTFISH_PS1='(type a command) '
    $ export GUESTFISH_PS1
    $ guestfish
    [...]
    (type a command) ▂

You can also use special escape sequences, as described in the table below:

\\      A literal backslash character.

\[...\]
        (These should only be used in "GUESTFISH_PS1".) Place non-printing characters (eg. terminal control codes for colours) between "\[...\]". What this does is to tell the readline(3) library that it should treat this subsequence as zero-width, so that command-line redisplay, editing etc works.

\a      A bell character.

\e      An ASCII ESC (escape) character.

\n      A newline.

\r      A carriage return.

\NNN    The ASCII character whose code is the octal value NNN.

\xNN    The ASCII character whose code is the hex value NN.

EXAMPLES OF PROMPTS

Note that these require a terminal that supports ANSI escape codes.

·   GUESTFISH_PS1='\[\e[1;30m\]><fs>\[\e[0;30m\] '

    A bold black version of the ordinary prompt.

·   GUESTFISH_PS1='\[\e[1;32m\]><fs>\[\e[0;31m\] '
    GUESTFISH_OUTPUT='\e[0;30m'
    GUESTFISH_RESTORE="$GUESTFISH_OUTPUT"
    GUESTFISH_INIT='\e[1;34m'

    Blue welcome text, green prompt, red commands, black command output.

#### WINDOWS 8

Windows 8 "fast startup" can prevent guestfish from mounting NTFS partitions. See "WINDOWS HIBERNATION AND WINDOWS 8 FAST STARTUP" in guestfs(3).

#### GUESTFISH COMMANDS

The commands in this section are guestfish convenience commands, in other words, they are not part of the guestfs(3) API.

help

    help
    help cmd

Without any parameter, this provides general help. With a "cmd" parameter, this displays detailed help for that command.

exit
quit

This exits guestfish. You can also use the "^D" key.

alloc
allocate

    alloc filename size

This creates an empty (zeroed) file of the given size, and then adds it so it can be further examined.

For more advanced image creation, see "disk-create".

Size can be specified using standard suffixes, eg. "1M".

To create a sparse file, use "sparse" instead.
To create a prepared disk image, see "PREPARED DISK IMAGES".

copy-in

    copy-in local [local ...] /remotedir

"copy-in" copies local files or directories recursively into the disk image, placing them in the directory called "/remotedir" (which must exist). This guestfish meta-command turns into a sequence of "tar-in" and other commands as necessary.

Multiple local files and directories can be specified, but the last parameter must always be a remote directory. Wildcards cannot be used.

copy-out

    copy-out remote [remote ...] localdir

"copy-out" copies remote files or directories recursively out of the disk image, placing them on the host disk in a local directory called "localdir" (which must exist). This guestfish meta-command turns into a sequence of "download", "tar-out" and other commands as necessary.

Multiple remote files and directories can be specified, but the last parameter must always be a local directory. To download to the current directory, use "." as in:

    copy-out /home .

Wildcards cannot be used in the ordinary command, but you can use them with the help of "glob" like this:

    glob copy-out /home/* .

delete-event

    delete-event name

Delete the event handler which was previously registered as "name". If multiple event handlers were registered with the same name, they are all deleted.

See also the guestfish commands "event" and "list-events".

display

    display filename

Use "display" (a graphical display program) to display an image file. It downloads the file, and runs "display" on it.

To use an alternative program, set the "GUESTFISH_DISPLAY_IMAGE" environment variable. For example to use the GNOME display program:

    export GUESTFISH_DISPLAY_IMAGE=eog

See also display(1).

echo

    echo [params ...]

This echoes the parameters to the terminal.

edit
vi
emacs

    edit filename

This is used to edit a file. It downloads the file, edits it locally using your editor, then uploads the result.

The editor is $EDITOR.
However if you use the alternate commands "vi" or "emacs" you will get those corresponding editors.

event

    event name eventset "shell script ..."

Register a shell script fragment which is executed when an event is raised. See "guestfs_set_event_callback" in guestfs(3) for a discussion of the event API in libguestfs.

The "name" parameter is a name that you give to this event handler. It can be any string (even the empty string) and is simply there so you can delete the handler using the guestfish "delete-event" command.

The "eventset" parameter is a comma-separated list of one or more events, for example "close" or "close,trace". The special value "*" means all events.

The third and final parameter is the shell script fragment (or any external command) that is executed when any of the events in the eventset occurs. It is executed using "$SHELL -c", or if $SHELL is not set then "/bin/sh -c".

The shell script fragment receives callback parameters as arguments $1, $2 etc. The actual event that was called is available in the environment variable $EVENT.

    event "" close "echo closed"
    event messages appliance,library,trace "echo $@"
    event "" progress "echo progress: $3/$4"
    event "" * "echo $EVENT $@"

glob

    glob command args...

Expand wildcards in any paths in the args list, and run "command" repeatedly on each matching path.

See "WILDCARDS AND GLOBBING".

hexedit

    hexedit <filename|device>
    hexedit <filename|device> <max>
    hexedit <filename|device> <start> <max>

Use hexedit (a hex editor) to edit all or part of a binary file or block device. This command works by downloading the file or device, editing it locally, then uploading it.

If the file or device is large, you have to specify which part you wish to edit by using the "start" and/or "max" parameters. "start" and "max" are specified in bytes, with the usual modifiers allowed such as "1M" (1 megabyte).

For example to edit the first few sectors of a disk you might do:

    hexedit /dev/sda 1M

which would allow you to edit anywhere within the first megabyte of the disk.
To edit the superblock of an ext2 filesystem on "/dev/sda1", do:

    hexedit /dev/sda1 0x400 0x400

(assuming the superblock is in the standard location).

This command requires the external hexedit(1) program. You can specify another program to use by setting the "HEXEDITOR" environment variable.

lcd

    lcd directory

Change the local directory, ie. the current directory of guestfish itself.

Note that "!cd" won't do what you might expect.

list-events

    list-events

List the event handlers registered using the guestfish "event" command.

man
manual

    man

Opens the manual page for guestfish.

more
less

    more filename
    less filename

This is used to view a file.

The default viewer is $PAGER. However if you use the alternate command "less" you will get the "less" command specifically.

reopen

    reopen

Close and reopen the libguestfs handle. It is not necessary to use this normally, because the handle is closed properly when guestfish exits. However this is occasionally useful for testing.

setenv

    setenv VAR value

Set the environment variable "VAR" to the string "value".

To print the value of an environment variable use a shell command such as:

    !echo $VAR

sparse

    sparse filename size

This creates an empty sparse file of the given size, and then adds it so it can be further examined.

In all respects it works the same as the "alloc" command, except that the image file is allocated sparsely, which means that disk blocks are not assigned to the file until they are needed. Sparse disk files only use space when written to, but they are slower and there is a danger you could run out of real disk space during a write operation.

For more advanced image creation, see "disk-create".

Size can be specified using standard suffixes, eg. "1M".

supported

    supported

This command returns a list of the optional groups known to the daemon, and indicates which ones are supported by this build of the libguestfs appliance.

time

    time command args...

Run the command as usual, but print the elapsed time afterwards.
This can be useful for benchmarking operations.

unsetenv

    unsetenv VAR

Remove "VAR" from the environment.

#### COMMANDS

acl-delete-def-file

    acl-delete-def-file dir

This function deletes the default POSIX Access Control List (ACL) attached to directory "dir".

acl-get-file

    acl-get-file path acltype

This function returns the POSIX Access Control List (ACL) attached to "path". The ACL is returned in "long text form" (see acl(5)).

The "acltype" parameter may be:

"access"
    Return the ordinary (access) ACL for any file, directory or other filesystem object.

"default"
    Return the default ACL. Normally this only makes sense if "path" is a directory.

acl-set-file

    acl-set-file path acltype acl

This function sets the POSIX Access Control List (ACL) attached to "path".

The "acltype" parameter may be:

"access"
    Set the ordinary (access) ACL for any file, directory or other filesystem object.

"default"
    Set the default ACL. Normally this only makes sense if "path" is a directory.

The "acl" parameter is the new ACL in either "long text form" or "short text form" (see acl(5)). The new ACL completely replaces any previous ACL on the file. The ACL must contain the full Unix permissions (eg. "u::rwx,g::rx,o::rx").

If you are specifying individual users or groups, then the mask field is also required (eg. "m::rwx"), followed by the "u:ID:..." and/or "g:ID:..." field(s). A full ACL string might therefore look like this:

    u::rwx,g::rwx,o::rwx,m::rwx,u:500:rwx,g:500:rwx
    \ Unix permissions / \mask/ \      ACL        /

You should use numeric UIDs and GIDs. To map usernames and groupnames to the correct numeric ID in the context of the guest, use the Augeas functions (see "aug-init").

add-cdrom

This function adds a virtual CD-ROM disk image to the guest. The image is added as read-only drive, so this function is equivalent to calling "add-drive-ro".

This function is deprecated.
In new code, use the "add-drive-ro" call instead.

Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions.

add-domain
domain

This function adds the disk(s) attached to the named libvirt domain "dom". It works by connecting to libvirt, requesting the domain and domain XML from libvirt, parsing it for disks, and calling "add-drive-opts" on each one.

The number of disks added is returned. This operation is atomic: if an error is returned, then no disks are added.

This function does some minimal checks to make sure the libvirt domain is not running (unless "readonly" is true). In a future version we will try to acquire the libvirt lock on each disk.

Disks must be accessible locally. This often means that adding disks from a remote libvirt connection (see http://libvirt.org/remote.php) will fail unless those disks are accessible via the same device path locally too.

The optional "libvirturi" parameter sets the libvirt URI (see http://libvirt.org/uri.php). If this is not set then we connect to the default libvirt URI (or one set through an environment variable, see the libvirt documentation for full details).

The optional "live" flag controls whether this call will try to connect to a running virtual machine "guestfsd" process if it sees a suitable <channel> element in the libvirt XML definition. The default (if the flag is omitted) is never to try. See "ATTACHING TO RUNNING DAEMONS" in guestfs(3).

If the "allowuuid" flag is true (default is false) then a UUID may be passed instead of the domain name. The "dom" string is treated as a UUID first and looked up, and if that lookup fails then we treat "dom" as a name as usual.

The optional "readonlydisk" parameter controls what we do for disks which are marked <readonly/> in the libvirt XML. Possible values are:

"readonlydisk = "error""
    If "readonly" is false:

    The whole call is aborted with an error if any disk with the <readonly/> flag is found.

"readonlydisk = "ignore""
    If "readonly" is true or false:

    Disks with the <readonly/> flag are skipped.
The other optional parameters are passed directly through to "add-drive-opts".

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

This function adds a disk image called "filename" to the handle. "filename" may be a regular host file or a host device.

When this function is called before "launch" (the usual case) then the first time you call this function, the disk appears in the API as "/dev/sda", the second time as "/dev/sdb", and so on.

In libguestfs ≥ 1.20 you can also call this function after launch (with some restrictions). This is called "hotplugging". When hotplugging, you must specify a "label" so that the new disk gets a predictable name.

You don't necessarily need to be root when using libguestfs. However you obviously do need sufficient permissions to access the filename for whatever operations you want to perform (ie. read access if you just want to read the image or write access if you want to modify the image).

This call checks that "filename" exists.

"filename" may be the special string "/dev/null". See "NULL DISKS" in guestfs(3).

The optional arguments are:

"readonly"
    If true then the image is treated as read-only. Writes are still allowed, but they are stored in a temporary snapshot overlay which is discarded at the end. The disk that you add is not modified.

"format"
    This forces the image format. If you omit this (or use "add-drive" or "add-drive-ro") then the format is automatically detected. Possible formats include "raw" and "qcow2".

    Automatic detection of the format opens you up to a potential security hole when dealing with untrusted raw-format images. See CVE-2010-3851 and RHBZ#642934. Specifying the format closes this security hole.

"iface"
    This rarely-used option lets you emulate the behaviour of the deprecated "add-drive-with-if" call (q.v.)

"name"
    The name the drive had in the original guest, e.g. "/dev/sdb". This is used as a hint to the guest inspection process if it is available.

"label"
    Give the disk a label.
    The label should be a unique, short string using only ASCII characters "[a-zA-Z]". As well as its usual name in the API (such as "/dev/sda"), the drive will also be named "/dev/disk/guestfs/label".

    See "DISK LABELS" in guestfs(3).

"protocol"
    The optional protocol argument can be used to select an alternate source protocol.

    "protocol = "file""
        "filename" is interpreted as a local file or device. This is the default if the optional protocol parameter is omitted.

    "protocol = "ftp"|"ftps"|"http"|"https"|"tftp""
        Connect to a remote FTP, HTTP or TFTP server. The "server" parameter must also be supplied - see below.

    "protocol = "gluster""
        Connect to the GlusterFS server. The "server" parameter must also be supplied - see below.

    "protocol = "iscsi""
        Connect to the iSCSI server. The "server" parameter must also be supplied - see below.

    "protocol = "nbd""
        Connect to the Network Block Device server. The "server" parameter must also be supplied - see below.

    "protocol = "rbd""
        Connect to the Ceph (librbd/RBD) server. The "server" parameter must also be supplied - see below. The "username" parameter may be supplied. See below. The "secret" parameter may be supplied. See below.

    "protocol = "sheepdog""
        Connect to the Sheepdog server. The "server" parameter may also be supplied - see below.

    "protocol = "ssh""
        Connect to the Secure Shell (ssh) server. The "server" parameter must be supplied. The "username" parameter may be supplied. See below.

"server"
    For protocols which require access to a remote server, this is a list of server(s).

        Protocol                  Number of servers required
        --------                  --------------------------
        file                      List must be empty or param not used at all
        ftp|ftps|http|https|tftp  Exactly one
        gluster                   Exactly one
        iscsi                     Exactly one
        nbd                       Exactly one
        rbd                       Zero or more
        sheepdog                  Zero or more
        ssh                       Exactly one

    Each list element is a string specifying a server.
    The string must be in one of the following formats:

        hostname
        hostname:port
        tcp:hostname
        tcp:hostname:port
        unix:/path/to/socket

    If the port number is omitted, then the standard port number for the protocol is used (see "/etc/services").

"username"
    For the "ftp", "ftps", "http", "https", "iscsi", "rbd", "ssh" and "tftp" protocols, this specifies the remote username.

    If not given, then the local username is used for "ssh", and no authentication is attempted for ceph. But note this sometimes may give unexpected results, for example if using the libvirt backend and if the libvirt backend is configured to start the qemu appliance as a special user such as "qemu.qemu". If in doubt, specify the remote username you want.

"secret"
    For the "rbd" protocol only, this specifies the 'secret' to use when connecting to the remote device. If not given, then a secret matching the given username will be looked up in the default keychain locations, or if no username is given, then no authentication will be used.

"cachemode"
    Choose whether or not libguestfs will obey sync operations (safe but slow) or not (unsafe but fast). The possible values for this string are:

    "cachemode = "writeback""
        This is the default.

        Write operations in the API do not return until a write(2) call has completed in the host [but note this does not imply that anything gets written to disk].

        Sync operations in the API, including implicit syncs caused by filesystem journalling, will not return until an fdatasync(2) call has completed in the host, indicating that data has been committed to disk.

    "cachemode = "unsafe""
        In this mode, there are no guarantees. Libguestfs may cache anything and ignore sync requests. This is suitable only for scratch or temporary disks.

"discard"
    Enable or disable discard (a.k.a. trim or unmap) support on this drive. If enabled, operations such as "fstrim" will be able to discard / make thin / punch holes in the underlying host file or device.

    "discard = "disable""
        Disable discard support. This is the default.
    "discard = "besteffort""
        Enable discard support if possible, but don't fail if it is not supported.

        Since not all backends and not all underlying systems support discard, this is a good choice if you want to use discard if possible, but don't mind if it doesn't work.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

add-drive-ro
add-drive-ro filename

This function is the equivalent of calling "add-drive-opts" with the optional parameter "readonly:true".

add-drive-ro-with-if
add-drive-ro-with-if filename iface

This is the same as "add-drive-ro" but it allows you to specify the QEMU interface emulation to use at run time.

This function is deprecated. In new code, use the "add-drive" call instead.

Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions.

add-drive-scratch
add-drive-scratch size [name:..] [label:..]

This command adds a temporary scratch drive to the handle. The "size" parameter is the virtual size (in bytes). The scratch drive is blank initially (all reads return zeroes until you start writing to it). The drive is deleted when the handle is closed.

The optional arguments "name" and "label" are passed through to "add-drive".

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

add-drive-with-if
add-drive-with-if filename iface

This is the same as "add-drive" but it allows you to specify the QEMU interface emulation to use at run time.

This function is deprecated. In new code, use the "add-drive" call instead.

Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions.

aug-clear
aug-clear augpath

Set the value associated with "path" to "NULL". This is the same as the augtool(1) "clear" command.

aug-close
aug-close

Close the current Augeas handle and free up any resources used by it. After calling this, you have to call "aug-init" again before you can use any other Augeas functions.

aug-defnode
aug-defnode name expr val

Defines a variable "name" whose value is the result of evaluating "expr".
If "expr" evaluates to an empty nodeset, a node is created, equivalent to calling "aug-set" "expr", "value". "name" will be the nodeset containing that single node.

On success this returns a pair containing the number of nodes in the nodeset, and a boolean flag if a node was created.

aug-defvar
aug-defvar name expr

Defines an Augeas variable "name" whose value is the result of evaluating "expr". If "expr" is NULL, then "name" is undefined.

On success this returns the number of nodes in "expr", or 0 if "expr" evaluates to something which is not a nodeset.

aug-get
aug-get augpath

Look up the value associated with "path". If "path" matches exactly one node, the "value" is returned.

aug-init
aug-init root flags

Create a new Augeas handle for editing configuration files. If there was any previous Augeas handle associated with this guestfs session, then it is closed.

You must call this before using any other "aug-*" commands.

"root" is the filesystem root. "root" must not be NULL, use "/" instead.

The flags are the same as the flags defined in <augeas.h>, the logical or of the following integers:

"AUG_SAVE_BACKUP" = 1
    Keep the original file with a ".augsave" extension.

"AUG_SAVE_NEWFILE" = 2
    Save changes into a file with extension ".augnew", and do not overwrite original. Overrides "AUG_SAVE_BACKUP".

"AUG_TYPE_CHECK" = 4
    Typecheck lenses.

    This option is only useful when debugging Augeas lenses. Use of this option may require additional memory for the libguestfs appliance. You may need to set the "LIBGUESTFS_MEMSIZE" environment variable or call "set-memsize".

"AUG_NO_STDINC" = 8
    Do not use standard load path for modules.

"AUG_SAVE_NOOP" = 16
    Make save a no-op, just record what would have been changed.

"AUG_NO_LOAD" = 32
    Do not load the tree in "aug-init".

To close the handle, you can call "aug-close".

To find out more about Augeas, see http://augeas.net/.
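The "flags" argument to "aug-init" is a plain bitwise OR of the integer values listed above. A quick host-side sketch (Python; the constant values are taken directly from the list above):

```python
# aug-init flag values, as listed in the documentation above (from <augeas.h>).
AUG_SAVE_BACKUP = 1
AUG_SAVE_NEWFILE = 2
AUG_TYPE_CHECK = 4
AUG_NO_STDINC = 8
AUG_SAVE_NOOP = 16
AUG_NO_LOAD = 32

# Example: keep ".augsave" backups and skip the standard module load path.
flags = AUG_SAVE_BACKUP | AUG_NO_STDINC
print(flags)  # 9
```

In guestfish you would pass the combined integer, e.g. "aug-init / 9".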
aug-insert
aug-insert augpath label true|false

Create a new sibling "label" for "path", inserting it into the tree before or after "path" (depending on the boolean flag "before").

"path" must match exactly one existing node in the tree, and "label" must be a label, ie. not contain "/", "*" or end with a bracketed index "[N]".

aug-label
aug-label augpath

The label (name of the last element) of the Augeas path expression "augpath" is returned. "augpath" must match exactly one node, else this function returns an error.

aug-load
aug-load

Load files into the tree.

See "aug_load" in the Augeas documentation for the full gory details.

aug-ls
aug-ls augpath

This is just a shortcut for listing "aug-match" "path/*" and sorting the resulting nodes into alphabetical order.

aug-match
aug-match augpath

Returns a list of paths which match the path expression "path". The returned paths are sufficiently qualified so that they match exactly one node in the current tree.

aug-mv
aug-mv src dest

Move the node "src" to "dest". "src" must match exactly one node. "dest" is overwritten if it exists.

aug-rm
aug-rm augpath

Remove "path" and all of its children.

On success this returns the number of entries which were removed.

aug-save
aug-save

This writes all pending changes to disk.

The flags which were passed to "aug-init" affect exactly how files are saved.

aug-set
aug-set augpath val

Set the value associated with "path" to "val".

In the Augeas API, it is possible to clear a node by setting the value to NULL. Due to an oversight in the libguestfs API you cannot do that with this call. Instead you must use the "aug-clear" call.

aug-setm
aug-setm base sub val

Change multiple Augeas nodes in a single operation. "base" is an expression matching multiple nodes. "sub" is a path expression relative to "base". All nodes matching "base" are found, and then for each node, "sub" is changed to "val". "sub" may also be "NULL" in which case the "base" nodes are modified.

This returns the number of nodes modified.

available
available 'groups ...'
This command is used to check the availability of some groups of functionality in the appliance, which not all builds of the libguestfs appliance will be able to provide.

The libguestfs groups, and the functions that those groups correspond to, are listed in "AVAILABILITY" in guestfs(3). You can also fetch this list at runtime by calling "available-all-groups".

The argument "groups" is a list of group names, eg: "["inotify", "augeas"]" would check for the availability of the Linux inotify functions and Augeas (configuration file editing) functions.

The command returns no error if all requested groups are available.

It fails with an error if one or more of the requested groups is unavailable in the appliance.

If an unknown group name is included in the list of groups then an error is always returned.

Notes:

·   "feature-available" is the same as this call, but with a slightly simpler to use API: that call returns a boolean true/false instead of throwing an error.

·   You must call "launch" before calling this function. The reason is that we don't know what groups are supported by the appliance/daemon until it is running and can be queried.

·   If a group of functions is available, this does not necessarily mean that they will work. You still have to check for errors when calling individual API functions even if they are available.

·   It is usually the job of distro packagers to build complete functionality into the libguestfs appliance. Upstream libguestfs, if built from source with all requirements satisfied, will support everything.

·   This call was added in version 1.0.80. In previous versions of libguestfs all you could do would be to speculatively execute a "version".

available-all-groups
available-all-groups

This command returns a list of all optional groups that this daemon knows about. Note this returns both supported and unsupported groups.
To find out which ones the daemon can actually support you have to call "available" / "feature-available" on each member of the returned list.

See also "available" and "AVAILABILITY" in guestfs(3).

base64-in
base64-in (base64file|-) filename

This command uploads base64-encoded data from "base64file" to "filename".

base64-out
base64-out filename (base64file|-)

This command downloads the contents of "filename", writing it out to local file "base64file" encoded as base64.

blkdiscard
blkdiscard device

This discards all blocks on the block device "device", giving the free space back to the host.

This operation requires support in libguestfs, the host filesystem, qemu and the host kernel. If this support isn't present it may give an error or even appear to run but do nothing. You must also set the "discard" attribute on the underlying drive (see "add-drive-opts").

blkdiscardzeroes
blkdiscardzeroes device

This call returns true if blocks on "device" that have been discarded by a call to "blkdiscard" are returned as blocks of zero bytes when read back.

If it returns false, then it may be that discarded blocks are read as stale or random data.

blkid
blkid device

This command returns block device attributes for "device". The following fields are usually present in the returned hash. Other fields may also be present.

"UUID"
    The uuid of this device.

"LABEL"
    The label of this device.

"VERSION"
    The version of blkid command.

"TYPE"
    The filesystem type or RAID of this device.

"USAGE"
    The usage of this device, for example "filesystem" or "raid".

blockdev-flushbufs
blockdev-flushbufs device

This tells the kernel to flush internal buffers associated with "device".

This uses the blockdev(8) command.

blockdev-getbsz
blockdev-getbsz device

This returns the block size of a device.

Note: this is different from both size in blocks and filesystem block size. Also this setting is not really used by anything. You should probably not use it for anything. Filesystems have their own idea about what block size to choose.

This uses the blockdev(8) command.

blockdev-getro
blockdev-getro device

Returns a boolean indicating if the block device is read-only (true if read-only, false if not).

This uses the blockdev(8) command.
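The "base64-in" / "base64-out" pair described earlier in this section move data in standard base64 encoding, the same encoding Python's stdlib produces. A minimal host-side sketch of the encode/decode roundtrip (an analogy only, not the libguestfs API):

```python
import base64

data = b"hello, libguestfs"

# What base64-out writes to base64file:
encoded = base64.b64encode(data)
print(encoded.decode())        # aGVsbG8sIGxpYmd1ZXN0ZnM=

# What base64-in decodes and uploads as filename:
decoded = base64.b64decode(encoded)
print(decoded == data)         # True
```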
blockdev-getsize64
blockdev-getsize64 device

This returns the size of the device in bytes.

This uses the blockdev(8) command.

blockdev-getss
blockdev-getss device

This returns the size of sectors on a block device. Usually 512, but can be larger for modern devices.

(Note, this is not the size in sectors, use "blockdev-getsz" for that).

This uses the blockdev(8) command.

blockdev-getsz
blockdev-getsz device

This returns the size of the device in units of 512-byte sectors (even if the sectorsize isn't 512 bytes ... weird).

See also "blockdev-getss" for the real sector size of the device, and "blockdev-getsize64" for the more useful size in bytes.

This uses the blockdev(8) command.

blockdev-rereadpt
blockdev-rereadpt device

Reread the partition table on "device".

This uses the blockdev(8) command.

blockdev-setbsz
blockdev-setbsz device blocksize

This call does nothing and has never done anything because of a bug in blockdev. Do not use it.

If you need to set the filesystem block size, use the "blocksize" option of "mkfs".

This function is deprecated. In new code, use the "mkfs" call instead.

Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions.

blockdev-setro
blockdev-setro device

Sets the block device named "device" to read-only.

This uses the blockdev(8) command.

blockdev-setrw
blockdev-setrw device

Sets the block device named "device" to read-write.

This uses the blockdev(8) command.

btrfs-device-add
btrfs-device-add 'devices ...' fs

Add the list of device(s) in "devices" to the btrfs filesystem mounted at "fs". If "devices" is an empty list, this does nothing.

btrfs-device-delete
btrfs-device-delete 'devices ...' fs

Remove the "devices" from the btrfs filesystem mounted at "fs". If "devices" is an empty list, this does nothing.

btrfs-filesystem-balance
btrfs-filesystem-balance fs

Balance the chunks in the btrfs filesystem mounted at "fs" across the underlying devices.
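The three "blockdev-get*" size calls above are related by fixed arithmetic: "blockdev-getsz" always reports 512-byte units, regardless of the real sector size from "blockdev-getss". A host-side sketch (Python; the 1 GiB device with 4096-byte sectors is hypothetical):

```python
def getsz(size_bytes: int) -> int:
    # blockdev-getsz reports the device size in 512-byte units,
    # even when the hardware sector size differs.
    return size_bytes // 512

# A hypothetical 1 GiB device with 4096-byte sectors:
size64 = 1024 ** 3       # what blockdev-getsize64 returns (bytes)
sector_size = 4096       # what blockdev-getss returns (real sector size)

print(getsz(size64))             # 2097152 (512-byte units)
print(size64 // sector_size)     # 262144  (real sectors -- a different number)
```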
btrfs-filesystem-resize
btrfs-filesystem-resize mountpoint [size:N]

This command resizes a btrfs filesystem.

Note that unlike other resize calls, the filesystem has to be mounted and the parameter is the mountpoint not the device (this is a requirement of btrfs itself).

The optional parameters are:

"size"
    The new size (in bytes) of the filesystem. If omitted, the filesystem is resized to the maximum size.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

btrfs-filesystem-sync
btrfs-filesystem-sync fs

Force sync on the btrfs filesystem mounted at "fs".

btrfs-fsck
btrfs-fsck device [superblock:N] [repair:true|false]

Used to check a btrfs filesystem, "device" is the device file where the filesystem is stored.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

btrfs-set-seeding
btrfs-set-seeding device true|false

Enable or disable the seeding feature of a device that contains a btrfs filesystem.

btrfs-subvolume-create
btrfs-subvolume-create dest

Create a btrfs subvolume. The "dest" argument is the destination directory and the name of the snapshot, in the form "/path/to/dest/name".

btrfs-subvolume-delete
btrfs-subvolume-delete subvolume

Delete the named btrfs subvolume.

btrfs-subvolume-list
btrfs-subvolume-list fs

List the btrfs snapshots and subvolumes of the btrfs filesystem which is mounted at "fs".

btrfs-subvolume-set-default
btrfs-subvolume-set-default id fs

Set the subvolume of the btrfs filesystem "fs" which will be mounted by default. See "btrfs-subvolume-list" to get a list of subvolumes.

btrfs-subvolume-snapshot
btrfs-subvolume-snapshot source dest

Create a writable snapshot of the btrfs subvolume "source". The "dest" argument is the destination directory and the name of the snapshot, in the form "/path/to/dest/name".

canonical-device-name
canonical-device-name device

This utility function is useful when displaying device names to the user.
It takes a number of irregular device names and returns them in a consistent format:

"/dev/hdX"
"/dev/vdX"
    These are returned as "/dev/sdX". Note this works for device names and partition names. This is approximately the reverse of the algorithm described in "BLOCK DEVICE NAMING" in guestfs(3).

"/dev/mapper/VG-LV"
"/dev/dm-N"
    Converted to "/dev/VG/LV" form using "lvm-canonical-lvm-name".

Other strings are returned unmodified.

cap-get-file
cap-get-file path

This function returns the Linux capabilities attached to "path". The capabilities set is returned in text form (see cap_to_text(3)).

If no capabilities are attached to a file, an empty string is returned.

cap-set-file
cap-set-file path cap

This function sets the Linux capabilities attached to "path". The capabilities set "cap" should be passed in text form (see cap_from_text(3)).

case-sensitive-path
case-sensitive-path path

This can be used to resolve case insensitive paths on a filesystem which is case sensitive. The use case is to resolve paths which you have read from Windows configuration files or the Windows Registry, to the true path.

The command handles a peculiarity of the Linux ntfs-3g filesystem driver (and probably others), which is that although the underlying filesystem is case-insensitive, the driver exports the filesystem to Linux as case-sensitive.

One consequence of this is that special directories such as "c:\windows" may appear as "/WINDOWS" or "/windows" (or other things) depending on the precise details of how they were created. In Windows itself this would not be a problem.

Bug or feature? You decide: http://www.tuxera.com/community/ntfs-3g-faq/#posixfilenames1

"case-sensitive-path" attempts to resolve the true case of each element in the path. It will return a resolved path if either the full path or its parent directory exists. If the parent directory exists but the full path does not, the case of the parent directory will be correctly resolved, and the remainder appended unmodified.
For example, if the file "/Windows/System32/netkvm.sys" exists:

"case-sensitive-path" ("/windows/system32/netkvm.sys")
    "Windows/System32/netkvm.sys"

"case-sensitive-path" ("/windows/system32/NoSuchFile")
    "Windows/System32/NoSuchFile"

"case-sensitive-path" ("/windows/system33/netkvm.sys")
    ERROR

Note: Because of the above behaviour, "case-sensitive-path" cannot be used to check for the existence of a file.

Note: This function does not handle drive names, backslashes etc.

cat
cat path

Return the contents of the file named "path".

Because, in C, this function returns a "char *", there is no way to differentiate between a "\0" character in a file and end of string. To handle binary files, use the "read-file" or "download" functions.

checksum
checksum csumtype path

This call computes the MD5, SHAx or CRC checksum of the file named "path".

The type of checksum to compute is given by the "csumtype" parameter which must have one of the following values:

"crc"
    Compute the cyclic redundancy check (CRC) specified by POSIX for the "cksum" command.

"md5"
    Compute the MD5 hash (using the "md5sum" program).

"sha1"
    Compute the SHA1 hash (using the "sha1sum" program).

"sha224"
    Compute the SHA224 hash (using the "sha224sum" program).

"sha256"
    Compute the SHA256 hash (using the "sha256sum" program).

"sha384"
    Compute the SHA384 hash (using the "sha384sum" program).

"sha512"
    Compute the SHA512 hash (using the "sha512sum" program).

The checksum is returned as a printable string.

To get the checksum for a device, use "checksum-device".

To get the checksums for many files, use "checksums-out".

checksum-device
checksum-device csumtype device

This call computes the MD5, SHAx or CRC checksum of the contents of the device named "device". For the types of checksums supported see the "checksum" command.

checksums-out
checksums-out csumtype directory (sumsfile|-)

This command computes the checksums of all regular files in "directory" and then emits a list of those checksums to the local output file "sumsfile".
This can be used for verifying the integrity of a virtual machine. However to be properly secure you should pay attention to the output of the checksum command (it uses the ones from GNU coreutils). In particular when the filename is not printable, coreutils uses a special backslash syntax. For more information, see the GNU coreutils info file.

chmod
chmod mode path

Change the mode (permissions) of "path" to "mode". Only numeric modes are supported.

Note: When using this command from guestfish, "mode" by default would be decimal, unless you prefix it with 0 to get octal, ie. use 0700 not 700.

The mode actually set is affected by the umask.

chown
chown owner group path

Change the file owner to "owner" and group to "group".

Only numeric uid and gid are supported. If you want to use names, you will need to locate and parse the password file yourself (Augeas support makes this relatively easy).

command
command 'arguments ...'

This call runs a command from the guest filesystem. The filesystem must be mounted, and must contain a compatible operating system (ie. something Linux, with the same or compatible processor architecture).

The single parameter is an argv-style list of arguments. The first element is the name of the program to run. Subsequent elements are parameters. The list must be non-empty (ie. must contain a program name). Note that the command runs directly, and is not invoked via the shell (see "sh").

The return value is anything printed to stdout by the command.

If the command returns a non-zero exit status, then this function returns an error message. The error message string is the content of stderr from the command.

The $PATH environment variable will contain at least "/usr/bin" and "/bin". If you require a program from another location, you should provide the full path in the first parameter.

Shared libraries and data files required by the program must be available on filesystems which are mounted in the correct places.
It is the caller's responsibility to ensure all filesystems that are needed are mounted at the right locations.

Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3).

command-lines
command-lines 'arguments ...'

This is the same as "command", but splits the result into a list of lines.

See also: "sh-lines"

Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3).

compress-device-out
compress-device-out ctype device (zdevice|-) [level:N]

This command compresses "device" and writes it out to the local file "zdevice".

The "ctype" and optional "level" parameters have the same meaning as in "compress-out".

Use "-" instead of a filename to read/write from stdin/stdout.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

compress-out
compress-out ctype file (zfile|-) [level:N]

This command compresses "file" and writes it out to the local file "zfile".

The compression program used is controlled by the "ctype" parameter. Currently this includes: "compress", "gzip", "bzip2", "xz" or "lzop". Some compression types may not be supported by particular builds of libguestfs, in which case you will get an error containing the substring "not supported".

The optional "level" parameter controls compression level. The meaning and default for this parameter depends on the compression program being used.

Use "-" instead of a filename to read/write from stdin/stdout.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

config
config hvparam hvvalue

This can be used to add arbitrary hypervisor parameters of the form -param value. Actually it's not quite arbitrary - we prevent you from setting some parameters which would interfere with parameters that we use.

The first character of "hvparam" string must be a "-" (dash).

"hvvalue" can be NULL.
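For the "compress-out" command above, a "ctype" of "gzip" produces ordinary gzip output. A host-side sketch with Python's stdlib (an analogy for the output format and the "level" parameter, not the libguestfs API):

```python
import gzip

data = b"x" * 10000                              # highly compressible input

# Roughly what `compress-out gzip file zfile level:9` produces for `file`:
packed = gzip.compress(data, compresslevel=9)

print(len(packed) < len(data))                   # True: output is smaller here
print(gzip.decompress(packed) == data)           # True: roundtrip is lossless
```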
copy-attributes
copy-attributes src dest [all:true|false] [mode:true|false] [xattributes:true|false] [ownership:true|false]

Copy the attributes of a path (which can be a file or a directory) to another path.

By default "no" attribute is copied, so make sure to specify any (or "all" to copy everything).

The optional arguments specify which attributes can be copied:

"mode"
    Copy part of the file mode from "source" to "destination". Only the UNIX permissions and the sticky/setuid/setgid bits can be copied.

"xattributes"
    Copy the Linux extended attributes (xattrs) from "source" to "destination". This flag does nothing if the linuxxattrs feature is not available (see "feature-available").

"ownership"
    Copy the owner uid and the group gid of "source" to "destination".

"all"
    Copy all the attributes from "source" to "destination". Enabling it enables all the other flags, if they are not specified already.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

copy-device-to-device
copy-device-to-device src dest [srcoffset:N] [destoffset:N] [size:N] [sparse:true|false]

The four calls "copy-device-to-device", "copy-device-to-file", "copy-file-to-device", and "copy-file-to-file" let you copy from a source (device|file) to a destination (device|file).

Partial copies can be made since you can specify optionally the source offset, destination offset and size to copy. These values are all specified in bytes. If not given, the offsets both default to zero, and the size defaults to copying as much as possible until we hit the end of the source.

The source and destination may be the same object. However overlapping regions may not be copied correctly.

If the destination is a file, it is created if required. If the destination file is not large enough, it is extended.

If the "sparse" flag is true then the call avoids writing blocks that contain only zeroes, which can help in some situations where the backing disk is thin-provisioned.
Note that unless the target is already zeroed, using this option will result in incorrect copying.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

copy-device-to-file
copy-device-to-file src dest [srcoffset:N] [destoffset:N] [size:N] [sparse:true|false]

See "copy-device-to-device" for a general overview of this call.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

copy-file-to-device
copy-file-to-device src dest [srcoffset:N] [destoffset:N] [size:N] [sparse:true|false]

See "copy-device-to-device" for a general overview of this call.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

copy-file-to-file
copy-file-to-file src dest [srcoffset:N] [destoffset:N] [size:N] [sparse:true|false]

See "copy-device-to-device" for a general overview of this call.

This is not the function you want for copying files. This is for copying blocks within existing files. See "cp", "cp-a" and "mv" for general file copying and moving functions.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

copy-size
copy-size src dest size

This command copies exactly "size" bytes from one source device or file "src" to another destination device or file "dest".

Note this will fail if the source is too short or if the destination is not large enough.

This function is deprecated. In new code, use the "copy-device-to-device" call instead.

Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions.

cp
cp src dest

This copies a file from "src" to "dest" where "dest" is either a destination filename or destination directory.

cp-a
cp-a src dest

This copies a file or directory from "src" to "dest" recursively using the "cp -a" command.

cp-r
cp-r src dest

This copies a file or directory from "src" to "dest" recursively using the "cp -rP" command.

Most users should use "cp-a" instead.
This command is useful when you don't want to preserve permissions, because the target filesystem does not support it (primarily when writing to DOS FAT filesystems).

dd
dd src dest

This command copies from one source device or file "src" to another destination device or file "dest". Normally you would use this to copy to or from a device or partition, for example to duplicate a filesystem.

If the destination is a device, it must be as large or larger than the source file or device, otherwise the copy will fail. This command cannot do partial copies (see "copy-device-to-device").

This function is deprecated. In new code, use the "copy-device-to-device" call instead.

Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions.

device-index
device-index device

This function takes a device name (eg. "/dev/sdb") and returns the index of the device in the list of devices.

Index numbers start from 0. The named device must exist, for example as a string returned from "list-devices".

See also "list-devices", "part-to-dev".

df
df

This command runs the "df" command to report disk space used.

This command is mostly useful for interactive sessions. It is not intended that you try to parse the output string. Use "statvfs" from programs.

df-h
df-h

This command runs the "df -h" command to report disk space used in human-readable format.

This command is mostly useful for interactive sessions. It is not intended that you try to parse the output string. Use "statvfs" from programs.

disk-create
disk-create filename format size [backingfile:..] [backingformat:..] [preallocation:..] [compat:..] [clustersize:N]

Create a blank disk image called "filename" (a host file) with format "format" (usually "raw" or "qcow2"). The size is "size" bytes.

If used with the optional "backingfile" parameter, then a snapshot is created on top of the backing file.
In this case, "size" must be passed as "-1". The size of the snapshot is the same as the size of the backing file, which is discovered automatically. You are encouraged to also pass "backingformat" to describe the format of "backingfile".

If "filename" refers to a block device, then the device is formatted. The "size" is ignored since block devices have an intrinsic size.

The other optional parameters are:

"preallocation"
    If format is "raw", then this can be either "sparse" or "full" to create a sparse or fully allocated file respectively. The default is "sparse".

    If format is "qcow2", then this can be either "off" or "metadata". Preallocating metadata can be faster when doing lots of writes, but uses more space. The default is "off".

"compat"
    "qcow2" only: Pass the string 1.1 to use the advanced qcow2 format supported by qemu ≥ 1.1.

"clustersize"
    "qcow2" only: Change the qcow2 cluster size. The default is 65536 (bytes) and this setting may be any power of two between 512 and 2097152.

Note that this call does not add the new disk to the handle. You may need to call "add-drive-opts" separately.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

disk-format
disk-format filename

Detect and return the format of the disk image called "filename". "filename" can also be a host device, etc. If the format of the image could not be detected, then "unknown" is returned.

Note that detecting the disk format can be insecure under some circumstances. See "CVE-2010-3851" in guestfs(3).

See also: "DISK IMAGE FORMATS" in guestfs(3)

disk-has-backing-file
disk-has-backing-file filename

Detect and return whether the disk image "filename" has a backing file.

Note that detecting disk features can be insecure under some circumstances. See "CVE-2010-3851" in guestfs(3).

disk-virtual-size
disk-virtual-size filename

Detect and return the virtual size in bytes of the disk image called "filename".
Note that detecting disk features can be insecure under some circumstances. See "CVE-2010-3851" in guestfs(3).

dmesg
dmesg

This returns the kernel messages ("dmesg" output) from the guest kernel. This is sometimes useful for extended debugging of problems.

Another way to get the same information is to enable verbose messages with "set-verbose" or by setting the environment variable "LIBGUESTFS_DEBUG=1" before running the program.

download
download remotefilename (filename|-)

Download file "remotefilename" and save it as "filename" on the local machine.

"filename" can also be a named pipe.

See also "upload", "cat".

Use "-" instead of a filename to read/write from stdin/stdout.

download-offset
download-offset remotefilename (filename|-) offset size

Download file "remotefilename" and save it as "filename" on the local machine.

"remotefilename" is read for "size" bytes starting at "offset" (this region must be within the file or device).

Note that there is no limit on the amount of data that can be downloaded with this call, unlike with "pread", and this call always reads the full amount unless an error occurs.

See also "download", "pread".

Use "-" instead of a filename to read/write from stdin/stdout.

drop-caches
drop-caches whattodrop

This instructs the guest kernel to drop its page cache, and/or dentries and inode caches. The parameter "whattodrop" tells the kernel what precisely to drop, see http://linux-mm.org/Drop_Caches

Setting "whattodrop" to 3 should drop everything.

This automatically calls sync(2) before the operation, so that the maximum guest memory is freed.

du
du path

This command runs the "du -s" command to estimate file space usage for "path".

"path" can be a file or a directory. If "path" is a directory then the estimate includes the contents of the directory and all subdirectories (recursively).

The result is the estimated size in kilobytes (ie. units of 1024 bytes).
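The "download-offset" call above reads an exact byte region which must lie within the source. The semantics can be sketched host-side with an ordinary seek-and-read (a Python analogy, not the libguestfs API; the in-memory source stands in for "remotefilename"):

```python
import io

def download_offset(src: io.BufferedIOBase, offset: int, size: int) -> bytes:
    # Read exactly `size` bytes starting at `offset`; the region must
    # lie within the source, otherwise this raises an error.
    src.seek(offset)
    data = src.read(size)
    if len(data) != size:
        raise ValueError("region extends beyond end of source")
    return data

remote = io.BytesIO(b"0123456789abcdef")
print(download_offset(remote, 4, 6).decode())  # 456789
```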
e2fsck e2fsck device [correct:true|false] [forceall:true|false] This runs the ext2/ext3 filesystem checker on "device". It can take the following optional arguments: "correct" Automatically repair the file system. This option will cause e2fsck to automatically fix any filesystem problems that can be safely fixed without human intervention. This option may not be specified at the same time as the "forceall" option. "forceall" Assume an answer of 'yes' to all questions; allows e2fsck to be used non-interactively. This option may not be specified at the same time as the "correct" option. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". e2fsck-f e2fsck-f device This runs "e2fsck -p -f device", ie. runs the ext2/ext3 filesystem checker on "device", noninteractively (-p), even if the filesystem appears to be clean (-f). This function is deprecated. In new code, use the "e2fsck" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. echo-daemon echo-daemon 'words ...' This command concatenates the list of "words" passed with single spaces between them and returns the resulting string. You can use this command to test the connection through to the daemon. See also "ping-daemon". egrep egrep regex path This calls the external "egrep" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. egrepi egrepi regex path This calls the external "egrep -i" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. 
See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. equal equal file1 file2 This compares the two files "file1" and "file2" and returns true if their content is exactly equal, or false otherwise. The external cmp(1) program is used for the comparison. exists exists path This returns "true" if and only if there is a file, directory (or anything) with the given "path" name. See also "is-file", "is-dir", "stat". extlinux extlinux directory Install the SYSLINUX bootloader on the device mounted at "directory". Unlike "syslinux" which requires a FAT filesystem, this can be used on an ext2/3/4 or btrfs filesystem. The "directory" parameter can be either a mountpoint, or a directory within the mountpoint. You also have to mark the partition as "active" ("part-set-bootable") and a Master Boot Record must be installed (eg. using "pwrite-device") on the first sector of the whole disk. The SYSLINUX package comes with some suitable Master Boot Records. See the extlinux(1) man page for further information. Additional configuration can be supplied to SYSLINUX by placing a file called "extlinux.conf" on the filesystem under "directory". For further information about the contents of this file, see extlinux(1). See also "syslinux". fallocate fallocate path len This command preallocates a file (containing zero bytes) named "path" of size "len" bytes. If the file exists already, it is overwritten. Do not confuse this with the guestfish-specific "alloc" command which allocates a file in the host and attaches it as a device. This function is deprecated. In new code, use the "fallocate64" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. 
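The "equal" command above shells out to cmp(1) inside the appliance. Outside guestfish, the same byte-for-byte check can be sketched with the standard library (illustrative only, not how the daemon implements it):

```python
import filecmp

def files_equal(file1, file2):
    """Byte-for-byte comparison, like the "equal" command above.
    shallow=False forces an actual content comparison rather than
    a stat()-based shortcut."""
    return filecmp.cmp(file1, file2, shallow=False)
```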
fallocate64 fallocate64 path len This command preallocates a file (containing zero bytes) named "path" of size "len" bytes. If the file exists already, it is overwritten. Note that this call allocates disk blocks for the file. To create a sparse file use "truncate-size" instead. The deprecated call "fallocate" does the same, but owing to an oversight it only allowed 30 bit lengths to be specified, effectively limiting the maximum size of files created through that call to 1GB. Do not confuse this with the guestfish-specific "alloc" and "sparse" commands which create a file in the host and attach it as a device. feature-available feature-available 'groups ...' This is the same as "available", but unlike that call it returns a simple true/false boolean result, instead of throwing an exception if a feature is not found. For other documentation see "available". fgrep fgrep pattern path This calls the external "fgrep" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. fgrepi fgrepi pattern path This calls the external "fgrep -i" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. file file path This call uses the standard file(1) command to determine the type or contents of the file. This call will also transparently look inside various types of compressed file. 
The exact command which runs is "file -zb path". Note in particular that the filename is not prepended to the output (the -b option). The output depends on the output of the underlying file(1) command and it can change in future in ways beyond our control. In other words, the output is not guaranteed by the ABI. See also: file(1), "vfs-type", "lstat", "is-file", "is-blockdev" (etc), "is-zero". file-architecture file-architecture filename This detects the architecture of the binary "filename", and returns it if known. Currently defined architectures are: "i386" This string is returned for all 32 bit i386, i486, i586, i686 binaries irrespective of the precise processor requirements of the binary. "x86_64" 64 bit x86-64. "sparc" 32 bit SPARC. "sparc64" 64 bit SPARC V9 and above. "ia64" Intel Itanium. "ppc" 32 bit Power PC. "ppc64" 64 bit Power PC. Libguestfs may return other architecture strings in future. The function works on at least the following types of files: · many types of Un*x and Linux binary · many types of Un*x and Linux shared library · Windows Win32 and Win64 binaries · Windows Win32 and Win64 DLLs Win32 binaries and DLLs return "i386". Win64 binaries and DLLs return "x86_64". · Linux kernel modules · Linux new-style initrd images · some non-x86 Linux vmlinuz kernels What it can't do currently: · static libraries (libfoo.a) · Linux old-style initrd as compressed ext2 filesystem (RHEL 3) · x86 Linux vmlinuz kernels x86 vmlinuz images (bzImage format) consist of a mix of 16-, 32- and compressed code, and are horribly hard to unpack. If you want to find the architecture of a kernel, use the architecture of the associated initrd or kernel module(s) instead. filesize filesize file This command returns the size of "file" in bytes. To get other stats about a file, use "stat", "lstat", "is-dir", "is- file" etc. To get the size of block devices, use "blockdev-getsize64". 
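Conceptually, "file-architecture" works by examining the binary's header. As a loose sketch only (libguestfs's real implementation is far more thorough and handles the many file types listed above), here is how an architecture string could be read from the e_machine field of an ELF binary; the helper and its table are assumptions for illustration:

```python
import struct

# Standard ELF e_machine codes mapped to the architecture strings
# listed above.  Illustrative subset only.
ELF_MACHINES = {
    2: "sparc",      # EM_SPARC
    3: "i386",       # EM_386
    20: "ppc",       # EM_PPC
    21: "ppc64",     # EM_PPC64
    43: "sparc64",   # EM_SPARCV9
    50: "ia64",      # EM_IA_64
    62: "x86_64",    # EM_X86_64
}

def elf_architecture(header: bytes) -> str:
    """Hypothetical helper: read e_machine from an ELF header.
    e_machine is a 16-bit field at offset 18 in both ELF32 and
    ELF64.  Little-endian is assumed here; a full implementation
    would honour the EI_DATA byte at offset 5."""
    if header[:4] != b"\x7fELF":
        return "unknown"
    (machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINES.get(machine, "unknown")
```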
filesystem-available
filesystem-available filesystem

Check whether libguestfs supports the named filesystem. The argument "filesystem" is a filesystem name, such as "ext3". You must call "launch" before using this command.

This is mainly useful as a negative test. If this returns true, it doesn't mean that a particular filesystem can be created or mounted, since filesystems can fail for other reasons, such as the filesystem being a later version, having incompatible features, or lacking the right mkfs.<fs> tool.

See also "available", "feature-available", "AVAILABILITY" in guestfs(3).

fill
fill c len path

This command creates a new file called "path". The initial content of the file is "len" octets of "c", where "c" must be a number in the range "[0..255]".

To fill a file with zero bytes (sparsely), it is much more efficient to use "truncate-size". To create a file with a pattern of repeating bytes use "fill-pattern".

fill-dir
fill-dir dir nr

This function, useful for testing filesystems, creates "nr" empty files in the directory "dir" with names 00000000 through "nr-1" (ie. each file name is 8 digits long padded with zeroes).

fill-pattern
fill-pattern pattern len path

This function is like "fill" except that it creates a new file of length "len" containing the repeating pattern of bytes in "pattern". The pattern is truncated if necessary to ensure the length of the file is exactly "len" bytes.

find
find directory

This command lists out all files and directories, recursively, starting at "directory". It is essentially equivalent to running the shell command "find directory -print" but some post-processing happens on the output, described below.

This returns a list of strings without any prefix. Thus if the directory structure was:

  /tmp/a
  /tmp/b
  /tmp/c/d

then the returned list from "find /tmp" would be 4 elements:

  a
  b
  c
  c/d

If "directory" is not a directory, then this command returns an error. The returned list is sorted.
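The relative, sorted output of "find" described above can be modelled on the host like this (a sketch only -- the real command runs find(1) in the appliance and post-processes its output):

```python
import os

def find_list(directory):
    """Mimic the "find" command above: list all files and
    directories under directory, recursively, with names given
    relative to it, directories included as separate items, and
    the result sorted."""
    results = []
    for root, dirs, files in os.walk(directory):
        for name in dirs + files:
            full = os.path.join(root, name)
            results.append(os.path.relpath(full, directory))
    return sorted(results)
```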
find0
find0 directory (files|-)

This command lists out all files and directories, recursively, starting at "directory", placing the resulting list in the external file called "files".

This command works the same way as "find" with the following exceptions:

· The resulting list is written to an external file.

· Items (filenames) in the result are separated by "\0" characters. See find(1) option -print0.

· The result list is not sorted.

Use "-" instead of a filename to read/write from stdin/stdout.

findfs-label
findfs-label label

This command searches the filesystems and returns the one which has the given label. An error is returned if no such filesystem can be found.

To find the label of a filesystem, use "vfs-label".

findfs-uuid
findfs-uuid uuid

This command searches the filesystems and returns the one which has the given UUID. An error is returned if no such filesystem can be found.

To find the UUID of a filesystem, use "vfs-uuid".

fsck
fsck fstype device

This runs the filesystem checker (fsck) on "device" which should have filesystem type "fstype".

The returned integer is the status. See fsck(8) for the list of status codes from "fsck".

Notes:

· Multiple status codes can be summed together.

· A non-zero return code can mean "success", for example if errors have been corrected on the filesystem.

· Checking or repairing NTFS volumes is not supported (by linux-ntfs).

This command is entirely equivalent to running "fsck -a -t fstype device".

fstrim
fstrim mountpoint [offset:N] [length:N] [minimumfreeextent:N]

Trim the free space in the filesystem mounted on "mountpoint". The filesystem must be mounted read-write.

The filesystem contents are not affected, but any free space in the filesystem is "trimmed", that is, given back to the host device, thus making disk images more sparse, allowing unused space in qcow2 files to be reused, etc.

This operation requires support in libguestfs, the mounted filesystem, the host filesystem, qemu and the host kernel.
If this support isn't present it may give an error or even appear to run but do nothing. See also "zero-free-space". That is a slightly different operation that turns free space in the filesystem into zeroes. It is valid to call "fstrim" either instead of, or after calling "zero-free-space". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". get-append get-append Return the additional kernel options which are added to the guest kernel command line. If "NULL" then no options are added. get-attach-method get-attach-method Return the current backend. See "set-backend" and "BACKEND" in guestfs(3). This function is deprecated. In new code, use the "get-backend" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. get-autosync get-autosync Get the autosync flag. get-backend get-backend Return the current backend. This handle property was previously called the "attach method". See "set-backend" and "BACKEND" in guestfs(3). get-backend-settings get-backend-settings Return the current backend settings. See "BACKEND" in guestfs(3), "BACKEND SETTINGS" in guestfs(3). get-cachedir get-cachedir Get the directory used by the handle to store the appliance cache. get-direct get-direct Return the direct appliance mode flag. get-e2attrs get-e2attrs file This returns the file attributes associated with "file". The attributes are a set of bits associated with each inode which affect the behaviour of the file. The attributes are returned as a string of letters (described below). The string may be empty, indicating that no file attributes are set for this file. These attributes are only present when the file is located on an ext2/3/4 filesystem. Using this call on other filesystem types will result in an error. The characters (file attributes) in the returned string are currently: 'A' When the file is accessed, its atime is not modified. 
'a' The file is append-only. 'c' The file is compressed on-disk. 'D' (Directories only.) Changes to this directory are written synchronously to disk. 'd' The file is not a candidate for backup (see dump(8)). 'E' The file has compression errors. 'e' The file is using extents. 'h' The file is storing its blocks in units of the filesystem blocksize instead of sectors. 'I' (Directories only.) The directory is using hashed trees. 'i' The file is immutable. It cannot be modified, deleted or renamed. No link can be created to this file. 'j' The file is data-journaled. 's' When the file is deleted, all its blocks will be zeroed. 'S' Changes to this file are written synchronously to disk. 'T' (Directories only.) This is a hint to the block allocator that subdirectories contained in this directory should be spread across blocks. If not present, the block allocator will try to group subdirectories together. 't' For a file, this disables tail-merging. (Not used by upstream implementations of ext2.) 'u' When the file is deleted, its blocks will be saved, allowing the file to be undeleted. 'X' The raw contents of the compressed file may be accessed. 'Z' The compressed file is dirty. More file attributes may be added to this list later. Not all file attributes may be set for all kinds of files. For detailed information, consult the chattr(1) man page. See also "set-e2attrs". Don't confuse these attributes with extended attributes (see "getxattr"). get-e2generation get-e2generation file This returns the ext2 file generation of a file. The generation (which used to be called the "version") is a number associated with an inode. This is most commonly used by NFS servers. The generation is only present when the file is located on an ext2/3/4 filesystem. Using this call on other filesystem types will result in an error. See "set-e2generation". get-e2label get-e2label device This returns the ext2/3/4 filesystem label of the filesystem on "device". This function is deprecated. 
In new code, use the "vfs-label" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. get-e2uuid get-e2uuid device This returns the ext2/3/4 filesystem UUID of the filesystem on "device". This function is deprecated. In new code, use the "vfs-uuid" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. get-hv get-hv Return the current hypervisor binary. This is always non-NULL. If it wasn't set already, then this will return the default qemu binary name. get-libvirt-requested-credential-challenge get-libvirt-requested-credential-challenge index Get the challenge (provided by libvirt) for the "index"'th requested credential. If libvirt did not provide a challenge, this returns the empty string "". See "LIBVIRT AUTHENTICATION" in guestfs(3) for documentation and example code. get-libvirt-requested-credential-defresult get-libvirt-requested-credential-defresult index Get the default result (provided by libvirt) for the "index"'th requested credential. If libvirt did not provide a default result, this returns the empty string "". See "LIBVIRT AUTHENTICATION" in guestfs(3) for documentation and example code. get-libvirt-requested-credential-prompt get-libvirt-requested-credential-prompt index Get the prompt (provided by libvirt) for the "index"'th requested credential. If libvirt did not provide a prompt, this returns the empty string "". See "LIBVIRT AUTHENTICATION" in guestfs(3) for documentation and example code. get-libvirt-requested-credentials get-libvirt-requested-credentials This should only be called during the event callback for events of type "GUESTFS_EVENT_LIBVIRT_AUTH". Return the list of credentials requested by libvirt. 
Possible values are a subset of the strings provided when you called "set-libvirt- supported-credentials". See "LIBVIRT AUTHENTICATION" in guestfs(3) for documentation and example code. get-memsize get-memsize This gets the memory size in megabytes allocated to the hypervisor. If "set-memsize" was not called on this handle, and if "LIBGUESTFS_MEMSIZE" was not set, then this returns the compiled-in default value for memsize. For more information on the architecture of libguestfs, see guestfs(3). get-network get-network This returns the enable network flag. get-path get-path Return the current search path. This is always non-NULL. If it wasn't set already, then this will return the default path. get-pgroup get-pgroup This returns the process group flag. get-pid pid get-pid Return the process ID of the hypervisor. If there is no hypervisor running, then this will return an error. This is an internal call used for debugging and testing. get-program get-program Get the program name. See "set-program". get-qemu get-qemu Return the current hypervisor binary (usually qemu). This is always non-NULL. If it wasn't set already, then this will return the default qemu binary name. This function is deprecated. In new code, use the "get-hv" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. get-recovery-proc get-recovery-proc Return the recovery process enabled flag. get-selinux get-selinux This returns the current setting of the selinux flag which is passed to the appliance at boot time. See "set-selinux". For more information on the architecture of libguestfs, see guestfs(3). get-smp get-smp This returns the number of virtual CPUs assigned to the appliance. get-tmpdir get-tmpdir Get the directory used by the handle to store temporary files. get-trace get-trace Return the command trace flag. get-umask get-umask Return the current umask. 
By default the umask is 022 unless it has been set by calling "umask". get-verbose get-verbose This returns the verbose messages flag. getcon getcon This gets the SELinux security context of the daemon. See the documentation about SELINUX in guestfs(3), and "setcon" getxattr getxattr path name Get a single extended attribute from file "path" named "name". This call follows symlinks. If you want to lookup an extended attribute for the symlink itself, use "lgetxattr". Normally it is better to get all extended attributes from a file in one go by calling "getxattrs". However some Linux filesystem implementations are buggy and do not provide a way to list out attributes. For these filesystems (notably ntfs-3g) you have to know the names of the extended attributes you want in advance and call this function. Extended attribute values are blobs of binary data. If there is no extended attribute named "name", this returns an error. See also: "getxattrs", "lgetxattr", attr(5). getxattrs getxattrs path This call lists the extended attributes of the file or directory "path". At the system call level, this is a combination of the listxattr(2) and getxattr(2) calls. See also: "lgetxattrs", attr(5). glob-expand glob-expand pattern This command searches for all the pathnames matching "pattern" according to the wildcard expansion rules used by the shell. If no paths match, then this returns an empty list (note: not an error). It is just a wrapper around the C glob(3) function with flags "GLOB_MARK|GLOB_BRACE". See that manual page for more details. Notice that there is no equivalent command for expanding a device name (eg. "/dev/sd*"). Use "list-devices", "list-partitions" etc functions instead. grep grep-opts grep regex path [extended:true|false] [fixed:true|false] [insensitive:true|false] [compressed:true|false] This calls the external "grep" program and returns the matching lines. The optional flags are: "extended" Use extended regular expressions. 
This is the same as using the -E flag.

"fixed"
Match fixed strings (don't use regular expressions). This is the same as using the -F flag.

"insensitive"
Match case-insensitively. This is the same as using the -i flag.

"compressed"
Use "zgrep" instead of "grep". This allows the input to be compress- or gzip-compressed.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS".

Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3).

grepi
grepi regex path

This calls the external "grep -i" program and returns the matching lines.

Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3).

This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions.

grub-install
grub-install root device

This command installs GRUB 1 (the Grand Unified Bootloader) on "device", with the root directory being "root".

Notes:

· There is currently no way in the API to install grub2, which is used by most modern Linux guests. It is possible to run the grub2 command from the guest, although see the caveats in "RUNNING COMMANDS" in guestfs(3).

· This uses "grub-install" from the host. Unfortunately grub is not always compatible with itself, so this only works in rather narrow circumstances. Careful testing with each guest version is advisable.

· If grub-install reports the error "No suitable drive was found in the generated device map." it may be that you need to create a "/boot/grub/device.map" file first that contains the mapping between grub device names and Linux device names. It is usually sufficient to create a file containing:

  (hd0) /dev/vda

replacing "/dev/vda" with the name of the installation device.
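The optional "fixed" and "insensitive" flags of "grep" above map naturally onto the behaviour of any regex engine. A Python sketch of the matching semantics (not the implementation -- the real command shells out to grep -F / grep -i):

```python
import re

def grep_lines(regex, text, fixed=False, insensitive=False):
    """Return matching lines, loosely mirroring the optional
    "fixed" and "insensitive" flags of the grep command above."""
    if fixed:
        regex = re.escape(regex)   # like grep -F: match literally
    flags = re.IGNORECASE if insensitive else 0
    pat = re.compile(regex, flags)
    return [line for line in text.splitlines() if pat.search(line)]
```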
head head path This command returns up to the first 10 lines of a file as a list of strings. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). head-n head-n nrlines path If the parameter "nrlines" is a positive number, this returns the first "nrlines" lines of the file "path". If the parameter "nrlines" is a negative number, this returns lines from the file "path", excluding the last "nrlines" lines. If the parameter "nrlines" is zero, this returns an empty list. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). hexdump hexdump path This runs "hexdump -C" on the given "path". The result is the human- readable, canonical hex dump of the file. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). hivex-close hivex-close Close the current hivex handle. This is a wrapper around the hivex(3) call of the same name. hivex-commit hivex-commit filename Commit (write) changes to the hive. If the optional "filename" parameter is null, then the changes are written back to the same hive that was opened. If this is not null then they are written to the alternate filename given and the original hive is left untouched. This is a wrapper around the hivex(3) call of the same name. hivex-node-add-child hivex-node-add-child parent name Add a child node to "parent" named "name". This is a wrapper around the hivex(3) call of the same name. hivex-node-children hivex-node-children nodeh Return the list of nodes which are subkeys of "nodeh". This is a wrapper around the hivex(3) call of the same name. hivex-node-delete-child hivex-node-delete-child nodeh Delete "nodeh", recursively if necessary. This is a wrapper around the hivex(3) call of the same name. 
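The three cases of "head-n" above (positive, negative, and zero "nrlines") reduce to simple list slicing, sketched here for clarity:

```python
def head_n(lines, nrlines):
    """Model the head-n rules above: positive N returns the first
    N lines, negative N returns everything except the last |N|
    lines, and zero returns an empty list."""
    if nrlines == 0:
        return []
    if nrlines > 0:
        return lines[:nrlines]
    return lines[:nrlines]  # negative slice drops the last |N| lines
```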
hivex-node-get-child hivex-node-get-child nodeh name Return the child of "nodeh" with the name "name", if it exists. This can return 0 meaning the name was not found. This is a wrapper around the hivex(3) call of the same name. hivex-node-get-value hivex-node-get-value nodeh key Return the value attached to "nodeh" which has the name "key", if it exists. This can return 0 meaning the key was not found. This is a wrapper around the hivex(3) call of the same name. hivex-node-name hivex-node-name nodeh Return the name of "nodeh". This is a wrapper around the hivex(3) call of the same name. hivex-node-parent hivex-node-parent nodeh Return the parent node of "nodeh". This is a wrapper around the hivex(3) call of the same name. hivex-node-set-value hivex-node-set-value nodeh key t val Set or replace a single value under the node "nodeh". The "key" is the name, "t" is the type, and "val" is the data. This is a wrapper around the hivex(3) call of the same name. hivex-node-values hivex-node-values nodeh Return the array of (key, datatype, data) tuples attached to "nodeh". This is a wrapper around the hivex(3) call of the same name. hivex-open hivex-open filename [verbose:true|false] [debug:true|false] [write:true|false] Open the Windows Registry hive file named "filename". If there was any previous hivex handle associated with this guestfs session, then it is closed. This is a wrapper around the hivex(3) call of the same name. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". hivex-root hivex-root Return the root node of the hive. This is a wrapper around the hivex(3) call of the same name. hivex-value-key hivex-value-key valueh Return the key (name) field of a (key, datatype, data) tuple. This is a wrapper around the hivex(3) call of the same name. hivex-value-type hivex-value-type valueh Return the data type field from a (key, datatype, data) tuple. This is a wrapper around the hivex(3) call of the same name. 
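The type field "t" passed to "hivex-node-set-value" above is a Windows registry value type, which hivex passes through unchanged. The numeric codes come from the registry format itself, not from the guestfish documentation; this reference table is provided as an assumed illustration:

```python
# Standard Windows registry value types, as defined by the registry
# format; hivex stores and returns these codes verbatim.
REG_TYPES = {
    0: "REG_NONE",
    1: "REG_SZ",             # UTF-16LE string
    2: "REG_EXPAND_SZ",
    3: "REG_BINARY",
    4: "REG_DWORD",          # 32-bit little-endian integer
    5: "REG_DWORD_BIG_ENDIAN",
    6: "REG_LINK",
    7: "REG_MULTI_SZ",
}

def type_name(t):
    """Hypothetical helper: human-readable name for a value type."""
    return REG_TYPES.get(t, "unknown")
```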
hivex-value-utf8 hivex-value-utf8 valueh This calls "hivex-value-value" (which returns the data field from a hivex value tuple). It then assumes that the field is a UTF-16LE string and converts the result to UTF-8 (or if this is not possible, it returns an error). This is useful for reading strings out of the Windows registry. However it is not foolproof because the registry is not strongly-typed and fields can contain arbitrary or unexpected data. hivex-value-value hivex-value-value valueh Return the data field of a (key, datatype, data) tuple. This is a wrapper around the hivex(3) call of the same name. See also: "hivex-value-utf8". initrd-cat initrd-cat initrdpath filename This command unpacks the file "filename" from the initrd file called "initrdpath". The filename must be given without the initial "/" character. For example, in guestfish you could use the following command to examine the boot script (usually called "/init") contained in a Linux initrd or initramfs image: initrd-cat /boot/initrd-<version>.img init See also "initrd-list". Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). initrd-list initrd-list path This command lists out files contained in an initrd. The files are listed without any initial "/" character. The files are listed in the order they appear (not necessarily alphabetical). Directory names are listed as separate items. Old Linux kernels (2.4 and earlier) used a compressed ext2 filesystem as initrd. We only support the newer initramfs format (compressed cpio files). inotify-add-watch inotify-add-watch path mask Watch "path" for the events listed in "mask". Note that if "path" is a directory then events within that directory are watched, but this does not happen recursively (in subdirectories). Note for non-C or non-Linux callers: the inotify events are defined by the Linux kernel ABI and are listed in "/usr/include/sys/inotify.h". 
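What "hivex-value-utf8" above does can be sketched in a few lines: treat the raw value data as UTF-16LE (the registry's string encoding), strip the trailing NUL terminator, and hand back a normal string. This is only a conceptual model of the conversion, not the hivex implementation:

```python
def value_utf8(data: bytes) -> str:
    """Decode raw registry value data as UTF-16LE and strip the
    trailing NUL, conceptually like hivex-value-utf8 above.
    Malformed (odd-length) input raises UnicodeDecodeError, which
    mirrors the command returning an error."""
    return data.decode("utf-16-le").rstrip("\x00")
```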
inotify-close inotify-close This closes the inotify handle which was previously opened by inotify_init. It removes all watches, throws away any pending events, and deallocates all resources. inotify-files inotify-files This function is a helpful wrapper around "inotify-read" which just returns a list of pathnames of objects that were touched. The returned pathnames are sorted and deduplicated. inotify-init inotify-init maxevents This command creates a new inotify handle. The inotify subsystem can be used to notify events which happen to objects in the guest filesystem. "maxevents" is the maximum number of events which will be queued up between calls to "inotify-read" or "inotify-files". If this is passed as 0, then the kernel (or previously set) default is used. For Linux 2.6.29 the default was 16384 events. Beyond this limit, the kernel throws away events, but records the fact that it threw them away by setting a flag "IN_Q_OVERFLOW" in the returned structure list (see "inotify-read"). Before any events are generated, you have to add some watches to the internal watch list. See: "inotify-add-watch" and "inotify-rm-watch". Queued up events should be read periodically by calling "inotify-read" (or "inotify-files" which is just a helpful wrapper around "inotify- read"). If you don't read the events out often enough then you risk the internal queue overflowing. The handle should be closed after use by calling "inotify-close". This also removes any watches automatically. See also inotify(7) for an overview of the inotify interface as exposed by the Linux kernel, which is roughly what we expose via libguestfs. Note that there is one global inotify handle per libguestfs instance. inotify-read inotify-read Return the complete queue of events that have happened since the previous read call. If no events have happened, this returns an empty list. 
Note: In order to make sure that all events have been read, you must call this function repeatedly until it returns an empty list. The reason is that the call will read events up to the maximum appliance- to-host message size and leave remaining events in the queue. inotify-rm-watch inotify-rm-watch wd Remove a previously defined inotify watch. See "inotify-add-watch". inspect-get-arch inspect-get-arch root This returns the architecture of the inspected operating system. The possible return values are listed under "file-architecture". If the architecture could not be determined, then the string "unknown" is returned. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-distro inspect-get-distro root This returns the distro (distribution) of the inspected operating system. Currently defined distros are: "archlinux" Arch Linux. "buildroot" Buildroot-derived distro, but not one we specifically recognize. "centos" CentOS. "cirros" Cirros. "debian" Debian. "fedora" Fedora. "freedos" FreeDOS. "gentoo" Gentoo. "linuxmint" Linux Mint. "mageia" Mageia. "mandriva" Mandriva. "meego" MeeGo. "openbsd" OpenBSD. "opensuse" OpenSUSE. "pardus" Pardus. "redhat-based" Some Red Hat-derived distro. "rhel" Red Hat Enterprise Linux. "scientificlinux" Scientific Linux. "slackware" Slackware. "sles" SuSE Linux Enterprise Server or Desktop. "suse-based" Some openSuSE-derived distro. "ttylinux" ttylinux. "ubuntu" Ubuntu. "unknown" The distro could not be determined. "windows" Windows does not have distributions. This string is returned if the OS type is Windows. Future versions of libguestfs may return other strings here. The caller should be prepared to handle any string. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-drive-mappings inspect-get-drive-mappings root This call is useful for Windows which uses a primitive system of assigning drive letters (like "C:") to partitions. 
This inspection API examines the Windows Registry to find out how disks/partitions are mapped to drive letters, and returns a hash table as in the example below: C => /dev/vda2 E => /dev/vdb1 F => /dev/vdc1 Note that keys are drive letters. For Windows, the key is case insensitive and just contains the drive letter, without the customary colon separator character. In future we may support other operating systems that also used drive letters, but the keys for those might not be case insensitive and might be longer than 1 character. For example in OS-9, hard drives were named "h0", "h1" etc. For Windows guests, currently only hard drive mappings are returned. Removable disks (eg. DVD-ROMs) are ignored. For guests that do not use drive mappings, or if the drive mappings could not be determined, this returns an empty hash table. Please read "INSPECTION" in guestfs(3) for more details. See also "inspect-get-mountpoints", "inspect-get-filesystems". inspect-get-filesystems inspect-get-filesystems root This returns a list of all the filesystems that we think are associated with this operating system. This includes the root filesystem, other ordinary filesystems, and non-mounted devices like swap partitions. In the case of a multi-boot virtual machine, it is possible for a filesystem to be shared between operating systems. Please read "INSPECTION" in guestfs(3) for more details. See also "inspect-get-mountpoints". inspect-get-format inspect-get-format root This returns the format of the inspected operating system. You can use it to detect install images, live CDs and similar. Currently defined formats are: "installed" This is an installed operating system. "installer" The disk image being inspected is not an installed operating system, but a bootable install disk, live CD, or similar. "unknown" The format of this disk image is not known. Future versions of libguestfs may return other strings here. The caller should be prepared to handle any string. 
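Returning to "inspect-get-drive-mappings": since the keys are case-insensitive drive letters without the customary colon, callers typically normalise user input before looking it up. A minimal Python sketch using the example map from the text:

```python
def device_for_letter(mappings, letter):
    # Keys in the hash returned by inspect-get-drive-mappings are bare,
    # case-insensitive drive letters ("C", not "c:"), so strip any
    # trailing colon and upper-case before the lookup.
    return mappings.get(letter.rstrip(":").upper())

mappings = {"C": "/dev/vda2", "E": "/dev/vdb1", "F": "/dev/vdc1"}
print(device_for_letter(mappings, "c:"))  # /dev/vda2
print(device_for_letter(mappings, "Z"))   # None
```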
Please read "INSPECTION" in guestfs(3) for more details. inspect-get-hostname inspect-get-hostname root This function returns the hostname of the operating system as found by inspection of the guest's configuration files. If the hostname could not be determined, then the string "unknown" is returned. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-icon inspect-get-icon root [favicon:true|false] [highquality:true|false] This function returns an icon corresponding to the inspected operating system. The icon is returned as a buffer containing a PNG image (re-encoded to PNG if necessary). If it was not possible to get an icon this function returns a zero-length (non-NULL) buffer. Callers must check for this case. Libguestfs will start by looking for a file called "/etc/favicon.png" or "C:\etc\favicon.png" and if it has the correct format, the contents of this file will be returned. You can disable favicons by passing the optional "favicon" boolean as false (default is true). If finding the favicon fails, then we look in other places in the guest for a suitable icon. If the optional "highquality" boolean is true then only high quality icons are returned, which means only icons of high resolution with an alpha channel. The default (false) is to return any icon we can, even if it is of substandard quality. Notes: · Unlike most other inspection API calls, the guest's disks must be mounted up before you call this, since it needs to read information from the guest filesystem during the call. · Security: The icon data comes from the untrusted guest, and should be treated with caution. PNG files have been known to contain exploits. Ensure that libpng (or other relevant libraries) are fully up to date before trying to process or display the icon. · The PNG image returned can be any size. It might not be square. Libguestfs tries to return the largest, highest quality icon available. The application must scale the icon to the required size. 
· Extracting icons from Windows guests requires the external "wrestool" program from the "icoutils" package, and several programs ("bmptopnm", "pnmtopng", "pamcut") from the "netpbm" package. These must be installed separately. · Operating system icons are usually trademarks. Seek legal advice before using trademarks in applications. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". inspect-get-major-version inspect-get-major-version root This returns the major version number of the inspected operating system. Windows uses a consistent versioning scheme which is not reflected in the popular public names used by the operating system. Notably the operating system known as "Windows 7" is really version 6.1 (ie. major = 6, minor = 1). You can find out the real versions corresponding to releases of Windows by consulting Wikipedia or MSDN. If the version could not be determined, then 0 is returned. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-minor-version inspect-get-minor-version root This returns the minor version number of the inspected operating system. If the version could not be determined, then 0 is returned. Please read "INSPECTION" in guestfs(3) for more details. See also "inspect-get-major-version". inspect-get-mountpoints inspect-get-mountpoints root This returns a hash of where we think the filesystems associated with this operating system should be mounted. Callers should note that this is at best an educated guess made by reading configuration files such as "/etc/fstab". In particular note that this may return filesystems which are non-existent or not mountable and callers should be prepared to handle or ignore failures if they try to mount them. Each element in the returned hashtable has a key which is the path of the mountpoint (eg. "/boot") and a value which is the filesystem that would be mounted there (eg. "/dev/sda1"). Non-mounted devices such as swap devices are not returned in this list. 
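The Windows caveat under "inspect-get-major-version" above (marketing names do not track real version numbers; "Windows 7" is really 6.1) suggests translating version pairs defensively. A hedged sketch that encodes only the pairing stated in the text and refuses to guess others:

```python
def windows_name_hint(major, minor):
    # Only the 6.1 == "Windows 7" pairing comes from the text above;
    # anything else is reported as unknown rather than guessed.
    if (major, minor) == (6, 1):
        return "Windows 7 (NT 6.1)"
    return "unknown"

print(windows_name_hint(6, 1))  # Windows 7 (NT 6.1)
print(windows_name_hint(0, 0))  # unknown
```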
For operating systems like Windows which still use drive letters, this call will only return an entry for the first drive "mounted on" "/". For information about the mapping of drive letters to partitions, see "inspect-get-drive-mappings". Please read "INSPECTION" in guestfs(3) for more details. See also "inspect-get-filesystems". inspect-get-package-format inspect-get-package-format root This function and "inspect-get-package-management" return the package format and package management tool used by the inspected operating system. For example for Fedora these functions would return "rpm" (package format) and "yum" (package management). This returns the string "unknown" if we could not determine the package format or if the operating system does not have a real packaging system (eg. Windows). Possible strings include: "rpm", "deb", "ebuild", "pisi", "pacman", "pkgsrc". Future versions of libguestfs may return other strings. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-package-management inspect-get-package-management root "inspect-get-package-format" and this function return the package format and package management tool used by the inspected operating system. For example for Fedora these functions would return "rpm" (package format) and "yum" (package management). This returns the string "unknown" if we could not determine the package management tool or if the operating system does not have a real packaging system (eg. Windows). Possible strings include: "yum", "up2date", "apt" (for all Debian derivatives), "portage", "pisi", "pacman", "urpmi", "zypper". Future versions of libguestfs may return other strings. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-product-name inspect-get-product-name root This returns the product name of the inspected operating system. The product name is generally some freeform string which can be displayed to the user, but should not be parsed by programs. 
If the product name could not be determined, then the string "unknown" is returned. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-product-variant inspect-get-product-variant root This returns the product variant of the inspected operating system. For Windows guests, this returns the contents of the Registry key "HKLM\Software\Microsoft\Windows NT\CurrentVersion" "InstallationType" which is usually a string such as "Client" or "Server" (other values are possible). This can be used to distinguish consumer and enterprise versions of Windows that have the same version number (for example, Windows 7 and Windows 2008 Server are both version 6.1, but the former is "Client" and the latter is "Server"). For enterprise Linux guests, in future we intend this to return the product variant such as "Desktop", "Server" and so on. But this is not implemented at present. If the product variant could not be determined, then the string "unknown" is returned. Please read "INSPECTION" in guestfs(3) for more details. See also "inspect-get-product-name", "inspect-get-major-version". inspect-get-roots inspect-get-roots This function is a convenient way to get the list of root devices, as returned from a previous call to "inspect-os", but without redoing the whole inspection process. This returns an empty list if either no root devices were found or the caller has not called "inspect-os". Please read "INSPECTION" in guestfs(3) for more details. inspect-get-type inspect-get-type root This returns the type of the inspected operating system. Currently defined types are: "linux" Any Linux-based operating system. "windows" Any Microsoft Windows operating system. "freebsd" FreeBSD. "netbsd" NetBSD. "openbsd" OpenBSD. "hurd" GNU/Hurd. "dos" MS-DOS, FreeDOS and others. "unknown" The operating system type could not be determined. Future versions of libguestfs may return other strings here. The caller should be prepared to handle any string. 
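The warning that callers must be prepared to handle any string applies to all of these enumerating calls. One defensive pattern, sketched in Python using the type list documented for "inspect-get-type":

```python
# The documented set of types for inspect-get-type; future libguestfs
# versions may return strings outside this set.
KNOWN_TYPES = {"linux", "windows", "freebsd", "netbsd", "openbsd", "hurd", "dos"}

def classify_os_type(os_type):
    # Treat any unrecognised future string the same way as "unknown",
    # rather than failing on it.
    return os_type if os_type in KNOWN_TYPES else "unknown"

print(classify_os_type("linux"))  # linux
print(classify_os_type("plan9"))  # unknown
```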
Please read "INSPECTION" in guestfs(3) for more details. inspect-get-windows-current-control-set inspect-get-windows-current-control-set root This returns the Windows CurrentControlSet of the inspected guest. The CurrentControlSet is a registry key name such as "ControlSet001". This call assumes that the guest is Windows and that the Registry could be examined by inspection. If this is not the case then an error is returned. Please read "INSPECTION" in guestfs(3) for more details. inspect-get-windows-systemroot inspect-get-windows-systemroot root This returns the Windows systemroot of the inspected guest. The systemroot is a directory path such as "/WINDOWS". This call assumes that the guest is Windows and that the systemroot could be determined by inspection. If this is not the case then an error is returned. Please read "INSPECTION" in guestfs(3) for more details. inspect-is-live inspect-is-live root If "inspect-get-format" returns "installer" (this is an install disk), then this returns true if a live image was detected on the disk. Please read "INSPECTION" in guestfs(3) for more details. inspect-is-multipart inspect-is-multipart root If "inspect-get-format" returns "installer" (this is an install disk), then this returns true if the disk is part of a set. Please read "INSPECTION" in guestfs(3) for more details. inspect-is-netinst inspect-is-netinst root If "inspect-get-format" returns "installer" (this is an install disk), then this returns true if the disk is a network installer, ie. not a self-contained install CD but one which is likely to require network access to complete the install. Please read "INSPECTION" in guestfs(3) for more details. inspect-list-applications inspect-list-applications root Return the list of applications installed in the operating system. Note: This call works differently from other parts of the inspection API. You have to call "inspect-os", then "inspect-get-mountpoints", then mount up the disks, before calling this. 
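The call sequence just described (first "inspect-os", then "inspect-get-mountpoints", then mounting the disks) raises the question of mount order. A convention commonly recommended to libguestfs callers is to mount shorter paths first, so that "/" is mounted before "/boot". A sketch of that ordering step, with invented sample data:

```python
def mount_order(mountpoints):
    # Sort the mountpoint hash by path length, shortest first, so that
    # "/" is mounted before "/boot", and "/boot" before "/boot/efi".
    return sorted(mountpoints.items(), key=lambda kv: len(kv[0]))

sample = {"/boot": "/dev/sda1", "/": "/dev/vg/lv_root", "/home": "/dev/vg/lv_home"}
for mountpoint, device in mount_order(sample):
    print(mountpoint, "->", device)
```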
Listing applications is a significantly more difficult operation which requires access to the full filesystem. Also note that unlike the other "inspect-get-*" calls which are just returning data cached in the libguestfs handle, this call actually reads parts of the mounted filesystems during the call. This returns an empty list if the inspection code was not able to determine the list of applications. The application structure contains the following fields: "app_name" The name of the application. For Red Hat-derived and Debian-derived Linux guests, this is the package name. "app_display_name" The display name of the application, sometimes localized to the install language of the guest operating system. If unavailable this is returned as an empty string "". Callers needing to display something can use "app_name" instead. "app_epoch" For package managers which use epochs, this contains the epoch of the package (an integer). If unavailable, this is returned as 0. "app_version" The version string of the application or package. If unavailable this is returned as an empty string "". "app_release" The release string of the application or package, for package managers that use this. If unavailable this is returned as an empty string "". "app_install_path" The installation path of the application (on operating systems such as Windows which use installation paths). This path is in the format used by the guest operating system, it is not a libguestfs path. If unavailable this is returned as an empty string "". "app_trans_path" The install path translated into a libguestfs path. If unavailable this is returned as an empty string "". "app_publisher" The name of the publisher of the application, for package managers that use this. If unavailable this is returned as an empty string "". "app_url" The URL (eg. upstream URL) of the application. If unavailable this is returned as an empty string "". 
"app_source_package" For packaging systems which support this, the name of the source package. If unavailable this is returned as an empty string "". "app_summary" A short (usually one line) description of the application or package. If unavailable this is returned as an empty string "". "app_description" A longer description of the application or package. If unavailable this is returned as an empty string "". Please read "INSPECTION" in guestfs(3) for more details. This function is deprecated. In new code, use the "inspect-list-applications2" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. inspect-list-applications2 inspect-list-applications2 root Return the list of applications installed in the operating system. Note: This call works differently from other parts of the inspection API. You have to call "inspect-os", then "inspect-get-mountpoints", then mount up the disks, before calling this. Listing applications is a significantly more difficult operation which requires access to the full filesystem. Also note that unlike the other "inspect-get-*" calls which are just returning data cached in the libguestfs handle, this call actually reads parts of the mounted filesystems during the call. This returns an empty list if the inspection code was not able to determine the list of applications. The application structure contains the following fields: "app2_name" The name of the application. For Red Hat-derived and Debian-derived Linux guests, this is the package name. "app2_display_name" The display name of the application, sometimes localized to the install language of the guest operating system. If unavailable this is returned as an empty string "". Callers needing to display something can use "app2_name" instead. "app2_epoch" For package managers which use epochs, this contains the epoch of the package (an integer). 
If unavailable, this is returned as 0. "app2_version" The version string of the application or package. If unavailable this is returned as an empty string "". "app2_release" The release string of the application or package, for package managers that use this. If unavailable this is returned as an empty string "". "app2_arch" The architecture string of the application or package, for package managers that use this. If unavailable this is returned as an empty string "". "app2_install_path" The installation path of the application (on operating systems such as Windows which use installation paths). This path is in the format used by the guest operating system, it is not a libguestfs path. If unavailable this is returned as an empty string "". "app2_trans_path" The install path translated into a libguestfs path. If unavailable this is returned as an empty string "". "app2_publisher" The name of the publisher of the application, for package managers that use this. If unavailable this is returned as an empty string "". "app2_url" The URL (eg. upstream URL) of the application. If unavailable this is returned as an empty string "". "app2_source_package" For packaging systems which support this, the name of the source package. If unavailable this is returned as an empty string "". "app2_summary" A short (usually one line) description of the application or package. If unavailable this is returned as an empty string "". "app2_description" A longer description of the application or package. If unavailable this is returned as an empty string "". Please read "INSPECTION" in guestfs(3) for more details. inspect-os inspect-os This function uses other libguestfs functions and certain heuristics to inspect the disk(s) (usually disks belonging to a virtual machine), looking for operating systems. The list returned is empty if no operating systems were found. 
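Returning to the application fields listed above: since "app2_display_name" may come back as an empty string, the text advises falling back to "app2_name". A one-line sketch (the dict keys mirror the documented field names; the sample data is invented):

```python
def display_name(app):
    # Prefer the localized display name; fall back to the package
    # name when the display name is returned as the empty string "".
    return app.get("app2_display_name") or app.get("app2_name", "")

print(display_name({"app2_display_name": "", "app2_name": "bash"}))  # bash
print(display_name({"app2_display_name": "GNU Bash", "app2_name": "bash"}))  # GNU Bash
```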
If one operating system was found, then this returns a list with a single element, which is the name of the root filesystem of this operating system. It is also possible for this function to return a list containing more than one element, indicating a dual-boot or multi- boot virtual machine, with each element being the root filesystem of one of the operating systems. You can pass the root string(s) returned to other "inspect-get-*" functions in order to query further information about each operating system, such as the name and version. This function uses other libguestfs features such as "mount-ro" and "umount-all" in order to mount and unmount filesystems and look at the contents. This should be called with no disks currently mounted. The function may also use Augeas, so any existing Augeas handle will be closed. This function cannot decrypt encrypted disks. The caller must do that first (supplying the necessary keys) if the disk is encrypted. Please read "INSPECTION" in guestfs(3) for more details. See also "list-filesystems". is-blockdev is-blockdev-opts is-blockdev path [followsymlinks:true|false] This returns "true" if and only if there is a block device with the given "path" name. If the optional flag "followsymlinks" is true, then a symlink (or chain of symlinks) that ends with a block device also causes the function to return true. See also "stat". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". is-chardev is-chardev-opts is-chardev path [followsymlinks:true|false] This returns "true" if and only if there is a character device with the given "path" name. If the optional flag "followsymlinks" is true, then a symlink (or chain of symlinks) that ends with a chardev also causes the function to return true. See also "stat". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". is-config is-config This returns true iff this handle is being configured (in the "CONFIG" state). 
For more information on states, see guestfs(3). is-dir is-dir-opts is-dir path [followsymlinks:true|false] This returns "true" if and only if there is a directory with the given "path" name. Note that it returns false for other objects like files. If the optional flag "followsymlinks" is true, then a symlink (or chain of symlinks) that ends with a directory also causes the function to return true. See also "stat". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". is-fifo is-fifo-opts is-fifo path [followsymlinks:true|false] This returns "true" if and only if there is a FIFO (named pipe) with the given "path" name. If the optional flag "followsymlinks" is true, then a symlink (or chain of symlinks) that ends with a FIFO also causes the function to return true. See also "stat". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". is-file is-file-opts is-file path [followsymlinks:true|false] This returns "true" if and only if there is a regular file with the given "path" name. Note that it returns false for other objects like directories. If the optional flag "followsymlinks" is true, then a symlink (or chain of symlinks) that ends with a file also causes the function to return true. See also "stat". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". is-lv is-lv device This command tests whether "device" is a logical volume, and returns true iff this is the case. is-socket is-socket-opts is-socket path [followsymlinks:true|false] This returns "true" if and only if there is a Unix domain socket with the given "path" name. If the optional flag "followsymlinks" is true, then a symlink (or chain of symlinks) that ends with a socket also causes the function to return true. See also "stat". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". is-symlink is-symlink path This returns "true" if and only if there is a symbolic link with the given "path" name. See also "stat". 
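The "followsymlinks" semantics shared by the is-* family above can be illustrated on the host filesystem. This is a host-side Python sketch of the documented behaviour for "is-file", not a call into libguestfs:

```python
import os
import stat
import tempfile

def is_file(path, followsymlinks=False):
    # Mirrors the documented semantics: without followsymlinks the
    # object itself must be a regular file; with it, a symlink (or
    # chain of symlinks) ending at a regular file also counts.
    try:
        st = os.stat(path) if followsymlinks else os.lstat(path)
    except OSError:
        return False
    return stat.S_ISREG(st.st_mode)

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "data.txt")
    link = os.path.join(d, "link")
    open(target, "w").close()
    os.symlink(target, link)
    print(is_file(link))                       # False: the link itself
    print(is_file(link, followsymlinks=True))  # True: chain ends at a file
```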
is-whole-device is-whole-device device This returns "true" if and only if "device" refers to a whole block device. That is, not a partition or a logical device. is-zero is-zero path This returns true iff the file exists and the file is empty or it contains all zero bytes. is-zero-device is-zero-device device This returns true iff the device exists and contains all zero bytes. Note that for large devices this can take a long time to run. isoinfo isoinfo isofile This is the same as "isoinfo-device" except that it works for an ISO file located inside some other mounted filesystem. Note that in the common case where you have added an ISO file as a libguestfs device, you would not call this. Instead you would call "isoinfo-device". isoinfo-device isoinfo-device device "device" is an ISO device. This returns a struct of information read from the primary volume descriptor (the ISO equivalent of the superblock) of the device. Usually it is more efficient to use the isoinfo(1) command with the -d option on the host to analyze ISO files, instead of going through libguestfs. For information on the primary volume descriptor fields, see http://wiki.osdev.org/ISO_9660#The_Primary_Volume_Descriptor journal-close journal-close Close the journal handle. journal-get journal-get Read the current journal entry. This returns all the fields in the journal as a set of "(attrname, attrval)" pairs. The "attrname" is the field name (a string). The "attrval" is the field value (a binary blob, often but not always a string). Please note that "attrval" is a byte array, not a \0-terminated C string. The length of data may be truncated to the data threshold (see: "journal-set-data-threshold", "journal-get-data-threshold"). If you set the data threshold to unlimited (0) then this call can read a journal entry of any size, ie. it is not limited by the libguestfs protocol. journal-get-data-threshold journal-get-data-threshold Get the current data threshold for reading journal entries. 
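Because "journal-get" returns each "attrval" as raw bytes rather than a C string, callers usually decode defensively. A Python sketch over invented sample pairs (not output from a real journal):

```python
def render_fields(fields):
    # fields: (attrname, attrval) pairs as described for journal-get;
    # attrval is a byte array that is often, but not always, text.
    out = {}
    for name, val in fields:
        try:
            out[name] = val.decode("utf-8")
        except UnicodeDecodeError:
            out[name] = val.hex()  # keep binary values readable
    return out

sample = [("MESSAGE", b"service started"), ("COREDUMP", b"\xff\xfe\x00")]
print(render_fields(sample))
```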
This is a hint to the journal that it may truncate data fields to this size when reading them (note also that it may not truncate them). If this returns 0, then the threshold is unlimited. See also "journal-set-data-threshold". journal-next journal-next Move to the next journal entry. You have to call this at least once after opening the handle before you are able to read data. The returned boolean tells you if there are any more journal records to read. "true" means you can read the next record (eg. using "journal-get-data"), and "false" means you have reached the end of the journal. journal-open journal-open directory Open the systemd journal located in "directory". Any previously opened journal handle is closed. The contents of the journal can be read using "journal-next" and "journal-get". After you have finished using the journal, you should close the handle by calling "journal-close". journal-set-data-threshold journal-set-data-threshold threshold Set the data threshold for reading journal entries. This is a hint to the journal that it may truncate data fields to this size when reading them (note also that it may not truncate them). If you set this to 0, then the threshold is unlimited. See also "journal-get-data-threshold". journal-skip journal-skip skip Skip forwards ("skip ≥ 0") or backwards ("skip < 0") in the journal. The number of entries actually skipped is returned (note "rskip ≥ 0"). If this is not the same as the absolute value of the skip parameter ("|skip|") you passed in then it means you have reached the end or the start of the journal. kill-subprocess kill-subprocess This kills the hypervisor. Do not call this. See: "shutdown" instead. This function is deprecated. In new code, use the "shutdown" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. launch run launch You should call this after configuring the handle (eg. 
adding drives) but before performing any actions. Do not call "launch" twice on the same handle. Although it will not give an error (for historical reasons), the precise behaviour when you do this is not well defined. Handles are very cheap to create, so create a new one for each launch. lchown lchown owner group path Change the file owner to "owner" and group to "group". This is like "chown" but if "path" is a symlink then the link itself is changed, not the target. Only numeric uid and gid are supported. If you want to use names, you will need to locate and parse the password file yourself (Augeas support makes this relatively easy). ldmtool-create-all ldmtool-create-all This function scans all block devices looking for Windows dynamic disk volumes and partitions, and creates devices for any that were found. Call "list-ldm-volumes" and "list-ldm-partitions" to return all devices. Note that you don't normally need to call this explicitly, since it is done automatically at "launch" time. However you might want to call this function if you have hotplugged disks or have just created a Windows dynamic disk. ldmtool-diskgroup-disks ldmtool-diskgroup-disks diskgroup Return the disks in a Windows dynamic disk group. The "diskgroup" parameter should be the GUID of a disk group, one element from the list returned by "ldmtool-scan". ldmtool-diskgroup-name ldmtool-diskgroup-name diskgroup Return the name of a Windows dynamic disk group. The "diskgroup" parameter should be the GUID of a disk group, one element from the list returned by "ldmtool-scan". ldmtool-diskgroup-volumes ldmtool-diskgroup-volumes diskgroup Return the volumes in a Windows dynamic disk group. The "diskgroup" parameter should be the GUID of a disk group, one element from the list returned by "ldmtool-scan". ldmtool-remove-all ldmtool-remove-all This is essentially the opposite of "ldmtool-create-all". 
It removes the device mapper mappings for all Windows dynamic disk volumes. ldmtool-scan ldmtool-scan This function scans for Windows dynamic disks. It returns a list of identifiers (GUIDs) for all disk groups that were found. These identifiers can be passed to other "ldmtool-*" functions. This function scans all block devices. To scan a subset of block devices, call "ldmtool-scan-devices" instead. ldmtool-scan-devices ldmtool-scan-devices 'devices ...' This function scans for Windows dynamic disks. It returns a list of identifiers (GUIDs) for all disk groups that were found. These identifiers can be passed to other "ldmtool-*" functions. The parameter "devices" is a list of block devices which are scanned. If this list is empty, all block devices are scanned. ldmtool-volume-hint ldmtool-volume-hint diskgroup volume Return the hint field of the volume named "volume" in the disk group with GUID "diskgroup". This may not be defined, in which case the empty string is returned. The hint field is often, though not always, the name of a Windows drive, eg. "E:". ldmtool-volume-partitions ldmtool-volume-partitions diskgroup volume Return the list of partitions in the volume named "volume" in the disk group with GUID "diskgroup". ldmtool-volume-type ldmtool-volume-type diskgroup volume Return the type of the volume named "volume" in the disk group with GUID "diskgroup". Possible volume types that can be returned here include: "simple", "spanned", "striped", "mirrored", "raid5". Other types may also be returned. lgetxattr lgetxattr path name Get a single extended attribute from file "path" named "name". If "path" is a symlink, then this call returns an extended attribute from the symlink. Normally it is better to get all extended attributes from a file in one go by calling "getxattrs". However some Linux filesystem implementations are buggy and do not provide a way to list out attributes. 
For these filesystems (notably ntfs-3g) you have to know the names of the extended attributes you want in advance and call this function. Extended attribute values are blobs of binary data. If there is no extended attribute named "name", this returns an error. See also: "lgetxattrs", "getxattr", attr(5). lgetxattrs lgetxattrs path This is the same as "getxattrs", but if "path" is a symbolic link, then it returns the extended attributes of the link itself. list-9p list-9p List all 9p filesystems attached to the guest. A list of mount tags is returned. list-devices list-devices List all the block devices. The full block device names are returned, eg. "/dev/sda". See also "list-filesystems". list-disk-labels list-disk-labels If you add drives using the optional "label" parameter of "add-drive- opts", you can use this call to map between disk labels, and raw block device and partition names (like "/dev/sda" and "/dev/sda1"). This returns a hashtable, where keys are the disk labels (without the "/dev/disk/guestfs" prefix), and the values are the full raw block device and partition names (eg. "/dev/sda" and "/dev/sda1"). list-dm-devices list-dm-devices List all device mapper devices. The returned list contains "/dev/mapper/*" devices, eg. ones created by a previous call to "luks-open". Device mapper devices which correspond to logical volumes are not returned in this list. Call "lvs" if you want to list logical volumes. list-filesystems list-filesystems This inspection command looks for filesystems on partitions, block devices and logical volumes, returning a list of "mountables" containing filesystems and their type. The return value is a hash, where the keys are the devices containing filesystems, and the values are the filesystem types. For example: "/dev/sda1" => "ntfs" "/dev/sda2" => "ext2" "/dev/vg_guest/lv_root" => "ext4" "/dev/vg_guest/lv_swap" => "swap" The key is not necessarily a block device. 
It may also be an opaque 'mountable' string which can be passed to "mount". The value can have the special value "unknown", meaning the content of the device is undetermined or empty. "swap" means a Linux swap partition. This command runs other libguestfs commands, which might include "mount" and "umount", and therefore you should use this soon after launch and only when nothing is mounted. Not all of the filesystems returned will be mountable. In particular, swap partitions are returned in the list. Also this command does not check that each filesystem found is valid and mountable, and some filesystems might be mountable but require special options. Filesystems may not all belong to a single logical operating system (use "inspect-os" to look for OSes). list-ldm-partitions list-ldm-partitions This function returns all Windows dynamic disk partitions that were found at launch time. It returns a list of device names. list-ldm-volumes list-ldm-volumes This function returns all Windows dynamic disk volumes that were found at launch time. It returns a list of device names. list-md-devices list-md-devices List all Linux md devices. list-partitions list-partitions List all the partitions detected on all block devices. The full partition device names are returned, eg. "/dev/sda1". This does not return logical volumes. For that you will need to call "lvs". See also "list-filesystems". ll ll directory List the files in "directory" (relative to the root directory, there is no cwd) in the format of 'ls -la'. This command is mostly useful for interactive sessions. It is not intended that you try to parse the output string. llz llz directory List the files in "directory" in the format of 'ls -laZ'. This command is mostly useful for interactive sessions. It is not intended that you try to parse the output string. ln ln target linkname This command creates a hard link using the "ln" command. ln-f ln-f target linkname This command creates a hard link using the "ln -f" command. 
The -f option removes the link ("linkname") if it exists already. ln-s ln-s target linkname This command creates a symbolic link using the "ln -s" command. ln-sf ln-sf target linkname This command creates a symbolic link using the "ln -sf" command. The -f option removes the link ("linkname") if it exists already. lremovexattr lremovexattr xattr path This is the same as "removexattr", but if "path" is a symbolic link, then it removes an extended attribute of the link itself. ls ls directory List the files in "directory" (relative to the root directory, there is no cwd). The '.' and '..' entries are not returned, but hidden files are shown. ls0 ls0 dir (filenames|-) This specialized command is used to get a listing of the filenames in the directory "dir". The list of filenames is written to the local file "filenames" (on the host). In the output file, the filenames are separated by "\0" (ASCII NUL) characters. "." and ".." are not returned. The filenames are not sorted. Use "-" instead of a filename to read/write from stdin/stdout. lsetxattr lsetxattr xattr val vallen path This is the same as "setxattr", but if "path" is a symbolic link, then it sets an extended attribute of the link itself. lstat lstat path Returns file information for the given "path". This is the same as "stat" except that if "path" is a symbolic link, then the link is stat-ed, not the file it refers to. This is the same as the lstat(2) system call. lstatlist lstatlist path 'names ...' This call allows you to perform the "lstat" operation on multiple files, where all files are in the directory "path". "names" is the list of files from this directory. On return you get a list of stat structs, with a one-to-one correspondence to the "names" list. If any name did not exist or could not be lstat'd, then the "ino" field of that structure is set to "-1". This call is intended for programs that want to efficiently list directory contents without making many round-trips.
See also "lxattrlist" for a similarly efficient call for getting extended attributes. luks-add-key luks-add-key device keyslot This command adds a new key on LUKS device "device". "key" is any existing key, and is used to access the device. "newkey" is the new key to add. "keyslot" is the key slot that will be replaced. Note that if "keyslot" already contains a key, then this command will fail. You have to use "luks-kill-slot" first to remove that key. This command has one or more key or passphrase parameters. Guestfish will prompt for these separately. luks-close luks-close device This closes a LUKS device that was created earlier by "luks-open" or "luks-open-ro". The "device" parameter must be the name of the LUKS mapping device (ie. "/dev/mapper/mapname") and not the name of the underlying block device. luks-format luks-format device keyslot This command erases existing data on "device" and formats the device as a LUKS encrypted device. "key" is the initial key, which is added to key slot "slot". (LUKS supports 8 key slots, numbered 0-7). This command has one or more key or passphrase parameters. Guestfish will prompt for these separately. luks-format-cipher luks-format-cipher device keyslot cipher This command is the same as "luks-format" but it also allows you to set the "cipher" used. This command has one or more key or passphrase parameters. Guestfish will prompt for these separately. luks-kill-slot luks-kill-slot device keyslot This command deletes the key in key slot "keyslot" from the encrypted LUKS device "device". "key" must be one of the other keys. This command has one or more key or passphrase parameters. Guestfish will prompt for these separately. luks-open luks-open device mapname This command opens a block device which has been encrypted according to the Linux Unified Key Setup (LUKS) standard. "device" is the encrypted block device or partition. The caller must supply one of the keys associated with the LUKS block device, in the "key" parameter. 
This creates a new block device called "/dev/mapper/mapname". Reads and writes to this block device are decrypted from and encrypted to the underlying "device" respectively. If this block device contains LVM volume groups, then calling "vgscan" followed by "vg-activate-all" will make them visible. Use "list-dm-devices" to list all device mapper devices. This command has one or more key or passphrase parameters. Guestfish will prompt for these separately. luks-open-ro luks-open-ro device mapname This is the same as "luks-open" except that a read-only mapping is created. This command has one or more key or passphrase parameters. Guestfish will prompt for these separately. lvcreate lvcreate logvol volgroup mbytes This creates an LVM logical volume called "logvol" on the volume group "volgroup", with "mbytes" megabytes. lvcreate-free lvcreate-free logvol volgroup percent Create an LVM logical volume called "/dev/volgroup/logvol", using approximately "percent"% of the free space remaining in the volume group. Most usefully, when "percent" is 100 this will create the largest possible LV. lvm-canonical-lv-name lvm-canonical-lv-name lvname This converts alternative naming schemes for LVs that you might find to the canonical name. For example, "/dev/mapper/VG-LV" is converted to "/dev/VG/LV". This command returns an error if the "lvname" parameter does not refer to a logical volume. See also "is-lv", "canonical-device-name". lvm-clear-filter lvm-clear-filter This undoes the effect of "lvm-set-filter". LVM will be able to see every block device. This command also clears the LVM cache and performs a volume group scan. lvm-remove-all lvm-remove-all This command removes all LVM logical volumes, volume groups and physical volumes. lvm-set-filter lvm-set-filter 'devices ...' This sets the LVM device filter so that LVM will only be able to "see" the block devices in the list "devices", and will ignore all other attached block devices.
Where disk image(s) contain duplicate PVs or VGs, this command is useful to get LVM to ignore the duplicates, otherwise LVM can get confused. Note also there are two types of duplication possible: either cloned PVs/VGs which have identical UUIDs; or VGs that are not cloned but just happen to have the same name. In normal operation you cannot create this situation, but you can do it outside LVM, eg. by cloning disk images or by bit twiddling inside the LVM metadata. This command also clears the LVM cache and performs a volume group scan. You can filter whole block devices or individual partitions. You cannot use this if any VG is currently in use (eg. contains a mounted filesystem), even if you are not filtering out that VG. lvremove lvremove device Remove an LVM logical volume "device", where "device" is the path to the LV, such as "/dev/VG/LV". You can also remove all LVs in a volume group by specifying the VG name, "/dev/VG". lvrename lvrename logvol newlogvol Rename a logical volume "logvol" with the new name "newlogvol". lvresize lvresize device mbytes This resizes (expands or shrinks) an existing LVM logical volume to "mbytes". When reducing, data in the reduced part is lost. lvresize-free lvresize-free lv percent This expands an existing logical volume "lv" so that it fills "percent"% of the remaining free space in the volume group. Commonly you would call this with percent = 100 which expands the logical volume as much as possible, using all remaining free space in the volume group. lvs lvs List all the logical volumes detected. This is the equivalent of the lvs(8) command. This returns a list of the logical volume device names (eg. "/dev/VolGroup00/LogVol00"). See also "lvs-full", "list-filesystems". lvs-full lvs-full List all the logical volumes detected. This is the equivalent of the lvs(8) command. The "full" version includes all fields. lvuuid lvuuid device This command returns the UUID of the LVM LV "device". lxattrlist lxattrlist path 'names ...'
This call allows you to get the extended attributes of multiple files, where all files are in the directory "path". "names" is the list of files from this directory. On return you get a flat list of xattr structs which must be interpreted sequentially. The first xattr struct always has a zero-length "attrname". "attrval" in this struct is zero-length to indicate there was an error doing "lgetxattr" for this file, or is a C string which is a decimal number (the number of following attributes for this file, which could be "0"). Then after the first xattr struct are the zero or more attributes for the first named file. This repeats for the second and subsequent files. This call is intended for programs that want to efficiently list directory contents without making many round-trips. See also "lstatlist" for a similarly efficient call for getting standard stats. max-disks max-disks Return the maximum number of disks that may be added to a handle (eg. by "add-drive-opts" and similar calls). This function was added in libguestfs 1.19.7. In previous versions of libguestfs the limit was 25. See "MAXIMUM NUMBER OF DISKS" in guestfs(3) for additional information on this topic. md-create md-create name 'devices ...' [missingbitmap:N] [nrdevices:N] [spare:N] [chunk:N] [level:..] Create a Linux md (RAID) device named "name" on the devices in the list "devices". The optional parameters are: "missingbitmap" A bitmap of missing devices. If a bit is set it means that a missing device is added to the array. The least significant bit corresponds to the first device in the array. As examples:

    If "devices = ["/dev/sda"]" and "missingbitmap = 0x1" then the
    resulting array would be "[<missing>, "/dev/sda"]".

    If "devices = ["/dev/sda"]" and "missingbitmap = 0x2" then the
    resulting array would be "["/dev/sda", <missing>]".

This defaults to 0 (no missing devices). The length of "devices" + the number of bits set in "missingbitmap" must equal "nrdevices" + "spare".
"nrdevices"
    The number of active RAID devices. If not set, this defaults to the length of "devices" plus the number of bits set in "missingbitmap".

"spare"
    The number of spare devices. If not set, this defaults to 0.

"chunk"
    The chunk size in bytes.

"level"
    The RAID level, which can be one of: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, raid10, 10. Some of these are synonymous, and more levels may be added in future. If not set, this defaults to "raid1".

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". md-detail md-detail md This command exposes the output of 'mdadm -DY <md>'. The following fields are usually present in the returned hash. Other fields may also be present.

"level"     The raid level of the MD device.
"devices"   The number of underlying devices in the MD device.
"metadata"  The metadata version used.
"uuid"      The UUID of the MD device.
"name"      The name of the MD device.

md-stat md-stat md This call returns a list of the underlying devices which make up the single software RAID array device "md". To get a list of software RAID devices, call "list-md-devices". Each structure returned corresponds to one device along with additional status information:

"mdstat_device"  The name of the underlying device.
"mdstat_index"   The index of this device within the array.
"mdstat_flags"   Flags associated with this device. This is a string containing (in no specific order) zero or more of the following flags:
    "W"  write-mostly
    "F"  device is faulty
    "S"  device is a RAID spare
    "R"  replacement

md-stop md-stop md This command deactivates the MD array named "md". The device is stopped, but it is not destroyed or zeroed. mkdir mkdir path Create a directory named "path". mkdir-mode mkdir-mode path mode This command creates a directory, setting the initial permissions of the directory to "mode". For common Linux filesystems, the actual mode which is set will be "mode & ~umask & 01777".
Non-native-Linux filesystems may interpret the mode in other ways. See also "mkdir", "umask" mkdir-p mkdir-p path Create a directory named "path", creating any parent directories as necessary. This is like the "mkdir -p" shell command. mkdtemp mkdtemp tmpl This command creates a temporary directory. The "tmpl" parameter should be a full pathname for the temporary directory name with the final six characters being "XXXXXX". For example: "/tmp/myprogXXXXXX" or "/Temp/myprogXXXXXX", the second one being suitable for Windows filesystems. The name of the temporary directory that was created is returned. The temporary directory is created with mode 0700 and is owned by root. The caller is responsible for deleting the temporary directory and its contents after use. See also: mkdtemp(3) mke2fs mke2fs device [blockscount:N] [blocksize:N] [fragsize:N] [blockspergroup:N] [numberofgroups:N] [bytesperinode:N] [inodesize:N] [journalsize:N] [numberofinodes:N] [stridesize:N] [stripewidth:N] [maxonlineresize:N] [reservedblockspercentage:N] [mmpupdateinterval:N] [journaldevice:..] [label:..] [lastmounteddir:..] [creatoros:..] [fstype:..] [usagetype:..] [uuid:..] [forcecreate:true|false] [writesbandgrouponly:true|false] [lazyitableinit:true|false] [lazyjournalinit:true|false] [testfs:true|false] [discard:true|false] [quotatype:true|false] [extent:true|false] [filetype:true|false] [flexbg:true|false] [hasjournal:true|false] [journaldev:true|false] [largefile:true|false] [quota:true|false] [resizeinode:true|false] [sparsesuper:true|false] [uninitbg:true|false] "mke2fs" is used to create an ext2, ext3, or ext4 filesystem on "device". The optional "blockscount" is the size of the filesystem in blocks. If omitted it defaults to the size of "device". Note if the filesystem is too small to contain a journal, "mke2fs" will silently create an ext2 filesystem instead. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". 
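As a sketch of how these optional arguments are supplied, the following hypothetical guestfish session formats a partition as ext4 with a 4096-byte block size and a label. The image name "disk.img" and the device "/dev/sda1" are assumptions for illustration only:

```
add disk.img
run
mke2fs /dev/sda1 fstype:ext4 blocksize:4096 label:rootfs
```

Any of the other optional arguments listed above could be appended in the same "name:value" form.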
mke2fs-J mke2fs-J fstype blocksize device journal This creates an ext2/3/4 filesystem on "device" with an external journal on "journal". It is equivalent to the command: mke2fs -t fstype -b blocksize -J device=<journal> <device> See also "mke2journal". This function is deprecated. In new code, use the "mke2fs" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mke2fs-JL mke2fs-JL fstype blocksize device label This creates an ext2/3/4 filesystem on "device" with an external journal on the journal labeled "label". See also "mke2journal-L". This function is deprecated. In new code, use the "mke2fs" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mke2fs-JU mke2fs-JU fstype blocksize device uuid This creates an ext2/3/4 filesystem on "device" with an external journal on the journal with UUID "uuid". See also "mke2journal-U". This function is deprecated. In new code, use the "mke2fs" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mke2journal mke2journal blocksize device This creates an ext2 external journal on "device". It is equivalent to the command: mke2fs -O journal_dev -b blocksize device This function is deprecated. In new code, use the "mke2fs" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mke2journal-L mke2journal-L blocksize label device This creates an ext2 external journal on "device" with label "label". This function is deprecated. In new code, use the "mke2fs" call instead. 
Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mke2journal-U mke2journal-U blocksize uuid device This creates an ext2 external journal on "device" with UUID "uuid". This function is deprecated. In new code, use the "mke2fs" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mkfifo mkfifo mode path This call creates a FIFO (named pipe) called "path" with mode "mode". It is just a convenient wrapper around "mknod". The mode actually set is affected by the umask. mkfs mkfs-opts mkfs fstype device [blocksize:N] [features:..] [inode:N] [sectorsize:N] This function creates a filesystem on "device". The filesystem type is "fstype", for example "ext3". The optional arguments are:

"blocksize"
    The filesystem block size. Supported block sizes depend on the filesystem type, but typically they are 1024, 2048 or 4096 for Linux ext2/3 filesystems. For VFAT and NTFS the "blocksize" parameter is treated as the requested cluster size. For UFS block sizes, please see mkfs.ufs(8).

"features"
    This passes the -O parameter to the external mkfs program. For certain filesystem types, this allows extra filesystem features to be selected. See mke2fs(8) and mkfs.ufs(8) for more details. You cannot use this optional parameter with the "gfs" or "gfs2" filesystem type.

"inode"
    This passes the -I parameter to the external mke2fs(8) program, which sets the inode size (only for ext2/3/4 filesystems at present).

"sectorsize"
    This passes the -S parameter to the external mkfs.ufs(8) program, which sets the sector size for UFS filesystems.

This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". mkfs-b mkfs-b fstype blocksize device This call is similar to "mkfs", but it allows you to control the block size of the resulting filesystem.
Supported block sizes depend on the filesystem type, but typically they are 1024, 2048 or 4096 only. For VFAT and NTFS the "blocksize" parameter is treated as the requested cluster size. This function is deprecated. In new code, use the "mkfs" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mkfs-btrfs mkfs-btrfs 'devices ...' [allocstart:N] [bytecount:N] [datatype:..] [leafsize:N] [label:..] [metadata:..] [nodesize:N] [sectorsize:N] Create a btrfs filesystem, allowing all configurables to be set. For more information on the optional arguments, see mkfs.btrfs(8). Since btrfs filesystems can span multiple devices, this takes a non-empty list of devices. To create general filesystems, use "mkfs". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". mklost-and-found mklost-and-found mountpoint Make the "lost+found" directory, normally in the root directory of an ext2/3/4 filesystem. "mountpoint" is the directory under which we try to create the "lost+found" directory. mkmountpoint mkmountpoint exemptpath "mkmountpoint" and "rmmountpoint" are specialized calls that can be used to create extra mountpoints before mounting the first filesystem. These calls are only necessary in some very limited circumstances, mainly the case where you want to mount a mix of unrelated and/or read-only filesystems together. For example, live CDs often contain a "Russian doll" nest of filesystems, an ISO outer layer, with a squashfs image inside, with an ext2/3 image inside that. You can unpack this as follows in guestfish:

    add-ro Fedora-11-i686-Live.iso
    run
    mkmountpoint /cd
    mkmountpoint /sqsh
    mkmountpoint /ext3fs
    mount /dev/sda /cd
    mount-loop /cd/LiveOS/squashfs.img /sqsh
    mount-loop /sqsh/LiveOS/ext3fs.img /ext3fs

The inner filesystem is now unpacked under the /ext3fs mountpoint. "mkmountpoint" is not compatible with "umount-all".
You may get unexpected errors if you try to mix these calls. It is safest to manually unmount filesystems and remove mountpoints after use. "umount-all" unmounts filesystems by sorting the paths longest first, so for this to work for manual mountpoints, you must ensure that the innermost mountpoints have the longest pathnames, as in the example code above. For more details see https://bugzilla.redhat.com/show_bug.cgi?id=599503 Autosync [see "set-autosync", this is set by default on handles] can cause "umount-all" to be called when the handle is closed which can also trigger these issues. mknod mknod mode devmajor devminor path This call creates block or character special devices, or named pipes (FIFOs). The "mode" parameter should be the mode, using the standard constants. "devmajor" and "devminor" are the device major and minor numbers, only used when creating block and character special devices. Note that, just like mknod(2), the mode must be bitwise OR'd with S_IFBLK, S_IFCHR, S_IFIFO or S_IFSOCK (otherwise this call just creates a regular file). These constants are available in the standard Linux header files, or you can use "mknod-b", "mknod-c" or "mkfifo" which are wrappers around this command which bitwise OR in the appropriate constant for you. The mode actually set is affected by the umask. mknod-b mknod-b mode devmajor devminor path This call creates a block device node called "path" with mode "mode" and device major/minor "devmajor" and "devminor". It is just a convenient wrapper around "mknod". The mode actually set is affected by the umask. mknod-c mknod-c mode devmajor devminor path This call creates a char device node called "path" with mode "mode" and device major/minor "devmajor" and "devminor". It is just a convenient wrapper around "mknod". The mode actually set is affected by the umask. mkswap mkswap-opts mkswap device [label:..] [uuid:..] Create a Linux swap partition on "device". 
The option arguments "label" and "uuid" allow you to set the label and/or UUID of the new swap partition. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". mkswap-L mkswap-L label device Create a swap partition on "device" with label "label". Note that you cannot attach a swap label to a block device (eg. "/dev/sda"), just to a partition. This appears to be a limitation of the kernel or swap tools. This function is deprecated. In new code, use the "mkswap" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mkswap-U mkswap-U uuid device Create a swap partition on "device" with UUID "uuid". This function is deprecated. In new code, use the "mkswap" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. mkswap-file mkswap-file path Create a swap file. This command just writes a swap file signature to an existing file. To create the file itself, use something like "fallocate". mktemp mktemp tmpl [suffix:..] This command creates a temporary file. The "tmpl" parameter should be a full pathname for the temporary file name with the final six characters being "XXXXXX". For example: "/tmp/myprogXXXXXX" or "/Temp/myprogXXXXXX", the second one being suitable for Windows filesystems. The name of the temporary file that was created is returned. The temporary file is created with mode 0600 and is owned by root. The caller is responsible for deleting the temporary file after use. If the optional "suffix" parameter is given, then the suffix (eg. ".txt") is appended to the temporary name. See also: "mkdtemp". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". modprobe modprobe modulename This loads a kernel module in the appliance.
The kernel module must have been whitelisted when libguestfs was built (see "appliance/kmod.whitelist.in" in the source). mount mount mountable mountpoint Mount a guest disk at a position in the filesystem. Block devices are named "/dev/sda", "/dev/sdb" and so on, as they were added to the guest. If those block devices contain partitions, they will have the usual names (eg. "/dev/sda1"). Also LVM "/dev/VG/LV"-style names can be used, or 'mountable' strings returned by "list-filesystems" or "inspect-get-mountpoints". The rules are the same as for mount(2): A filesystem must first be mounted on "/" before others can be mounted. Other filesystems can only be mounted on directories which already exist. The mounted filesystem is writable, if we have sufficient permissions on the underlying device. Before libguestfs 1.13.16, this call implicitly added the options "sync" and "noatime". The "sync" option greatly slowed writes and caused many problems for users. If your program might need to work with older versions of libguestfs, use "mount-options" instead (using an empty string for the first parameter if you don't want any options). mount-9p mount-9p mounttag mountpoint [options:..] Mount the virtio-9p filesystem with the tag "mounttag" on the directory "mountpoint". If required, "trans=virtio" will be automatically added to the options. Any other options required can be passed in the optional "options" parameter. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". mount-local mount-local localmountpoint [readonly:true|false] [options:..] [cachetimeout:N] [debugcalls:true|false] This call exports the libguestfs-accessible filesystem to a local mountpoint (directory) called "localmountpoint". Ordinary reads and writes to files and directories under "localmountpoint" are redirected through libguestfs. If the optional "readonly" flag is set to true, then writes to the filesystem return error "EROFS". 
"options" is a comma-separated list of mount options. See guestmount(1) for some useful options. "cachetimeout" sets the timeout (in seconds) for cached directory entries. The default is 60 seconds. See guestmount(1) for further information. If "debugcalls" is set to true, then additional debugging information is generated for every FUSE call. When "mount-local" returns, the filesystem is ready, but is not processing requests (access to it will block). You have to call "mount-local-run" to run the main loop. See "MOUNT LOCAL" in guestfs(3) for full documentation. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". mount-local-run mount-local-run Run the main loop which translates kernel calls to libguestfs calls. This should only be called after "mount-local" returns successfully. The call will not return until the filesystem is unmounted. Note you must not make concurrent libguestfs calls on the same handle from another thread. You may call this from a different thread than the one which called "mount-local", subject to the usual rules for threads and libguestfs (see "MULTIPLE HANDLES AND MULTIPLE THREADS" in guestfs(3)). See "MOUNT LOCAL" in guestfs(3) for full documentation. mount-loop mount-loop file mountpoint This command lets you mount "file" (a filesystem image in a file) on a mount point. It is entirely equivalent to the command "mount -o loop file mountpoint". mount-options mount-options options mountable mountpoint This is the same as the "mount" command, but it allows you to set the mount options as for the mount(8) -o flag. If the "options" parameter is an empty string, then no options are passed (all options default to whatever the filesystem uses). mount-ro mount-ro mountable mountpoint This is the same as the "mount" command, but it mounts the filesystem with the read-only (-o ro) flag. 
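The ordering rules described under "mount" apply equally to "mount-ro" and the other mount variants: the root filesystem must be mounted on "/" first, and later filesystems can only be mounted on directories that already exist. A hypothetical session (the device names are assumptions for illustration):

```
run
mount /dev/VG/lv_root /
mount-ro /dev/sda1 /boot
```

Mounting "/boot" read-only here only works because "/" has already been mounted and provides the "/boot" directory.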
mount-vfs mount-vfs options vfstype mountable mountpoint This is the same as the "mount" command, but it allows you to set both the mount options and the vfstype as for the mount(8) -o and -t flags. mountpoints mountpoints This call is similar to "mounts". That call returns a list of devices. This one returns a hash table (map) of device name to directory where the device is mounted. mounts mounts This returns the list of currently mounted filesystems. It returns the list of devices (eg. "/dev/sda1", "/dev/VG/LV"). Some internal mounts are not shown. See also: "mountpoints". mv mv src dest This moves a file from "src" to "dest" where "dest" is either a destination filename or destination directory. See also: "rename". nr-devices nr-devices This returns the number of whole block devices that were added. This is the same as the number of devices that would be returned if you called "list-devices". To find out the maximum number of devices that could be added, call "max-disks". ntfs-3g-probe ntfs-3g-probe true|false device This command runs the ntfs-3g.probe(8) command which probes an NTFS "device" for mountability. (Not all NTFS volumes can be mounted read-write, and some cannot be mounted at all). "rw" is a boolean flag. Set it to true if you want to test if the volume can be mounted read-write. Set it to false if you want to test if the volume can be mounted read-only. The return value is an integer which is 0 if the operation would succeed, or some non-zero value documented in the ntfs-3g.probe(8) manual page. ntfsclone-in ntfsclone-in (backupfile|-) device Restore the "backupfile" (from a previous call to "ntfsclone-out") to "device", overwriting any existing contents of this device. Use "-" instead of a filename to read/write from stdin/stdout.
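Because "backupfile" may be given as "-", a backup can be restored from a pipeline without an intermediate file. A hypothetical host-side invocation (the file and device names are assumptions for illustration), decompressing a gzipped backup and streaming it into the guest device:

```
gzip -d < winpart.backup.gz | guestfish -a disk.img run : ntfsclone-in - /dev/sda1
```

Here guestfish reads the backup stream on stdin; the colon separates the "run" command from the "ntfsclone-in" command on the guestfish command line.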
ntfsclone-out ntfsclone-out device (backupfile|-) [metadataonly:true|false] [rescue:true|false] [ignorefscheck:true|false] [preservetimestamps:true|false] [force:true|false] Stream the NTFS filesystem "device" to the local file "backupfile". The format used for the backup file is a special format used by the ntfsclone(8) tool. If the optional "metadataonly" flag is true, then only the metadata is saved, losing all the user data (this is useful for diagnosing some filesystem problems). The optional "rescue", "ignorefscheck", "preservetimestamps" and "force" flags have precise meanings detailed in the ntfsclone(8) man page. Use "ntfsclone-in" to restore the file back to a libguestfs device. Use "-" instead of a filename to read/write from stdin/stdout. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". ntfsfix ntfsfix device [clearbadsectors:true|false] This command repairs some fundamental NTFS inconsistencies, resets the NTFS journal file, and schedules an NTFS consistency check for the first boot into Windows. This is not an equivalent of Windows "chkdsk". It does not scan the filesystem for inconsistencies. The optional "clearbadsectors" flag clears the list of bad sectors. This is useful after cloning a disk with bad sectors to a new disk. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". ntfsresize ntfsresize-opts ntfsresize device [size:N] [force:true|false] This command resizes an NTFS filesystem, expanding or shrinking it to the size of the underlying device. The optional parameters are: "size" The new size (in bytes) of the filesystem. If omitted, the filesystem is resized to fit the container (eg. partition). "force" If this option is true, then force the resize of the filesystem even if the filesystem is marked as requiring a consistency check. After the resize operation, the filesystem is always marked as requiring a consistency check (for safety). 
You have to boot into Windows to perform this check and clear this condition. If you don't set the "force" option then it is not possible to call "ntfsresize" multiple times on a single filesystem without booting into Windows between each resize. See also ntfsresize(8). This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". ntfsresize-size ntfsresize-size device size This command is the same as "ntfsresize" except that it allows you to specify the new size (in bytes) explicitly. This function is deprecated. In new code, use the "ntfsresize" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. parse-environment parse-environment Parse the program's environment and set flags in the handle accordingly. For example if "LIBGUESTFS_DEBUG=1" then the 'verbose' flag is set in the handle. Most programs do not need to call this. It is done implicitly when you call "create". See "ENVIRONMENT VARIABLES" in guestfs(3) for a list of environment variables that can affect libguestfs handles. See also "guestfs_create_flags" in guestfs(3), and "parse-environment-list". parse-environment-list parse-environment-list 'environment ...' Parse the list of strings in the argument "environment" and set flags in the handle accordingly. For example if "LIBGUESTFS_DEBUG=1" is a string in the list, then the 'verbose' flag is set in the handle. This is the same as "parse-environment" except that it parses an explicit list of strings instead of the program's environment. part-add part-add device prlogex startsect endsect This command adds a partition to "device". If there is no partition table on the device, call "part-init" first. The "prlogex" parameter is the type of partition. Normally you should pass "p" or "primary" here, but MBR partition tables also support "l" (or "logical") and "e" (or "extended") partition types. 
"startsect" and "endsect" are the start and end of the partition in sectors. "endsect" may be negative, which means it counts backwards from the end of the disk ("-1" is the last sector). Creating a partition which covers the whole disk is not so easy. Use "part-disk" to do that. part-del part-del device partnum This command deletes the partition numbered "partnum" on "device". Note that in the case of MBR partitioning, deleting an extended partition also deletes any logical partitions it contains. part-disk part-disk device parttype This command is simply a combination of "part-init" followed by "part- add" to create a single primary partition covering the whole disk. "parttype" is the partition table type, usually "mbr" or "gpt", but other possible values are described in "part-init". part-get-bootable part-get-bootable device partnum This command returns true if the partition "partnum" on "device" has the bootable flag set. See also "part-set-bootable". part-get-gpt-type part-get-gpt-type device partnum Return the type GUID of numbered GPT partition "partnum". For MBR partitions, return an appropriate GUID corresponding to the MBR type. Behaviour is undefined for other partition types. part-get-mbr-id part-get-mbr-id device partnum Returns the MBR type byte (also known as the ID byte) from the numbered partition "partnum". Note that only MBR (old DOS-style) partitions have type bytes. You will get undefined results for other partition table types (see "part- get-parttype"). part-get-name part-get-name device partnum This gets the partition name on partition numbered "partnum" on device "device". Note that partitions are numbered from 1. The partition name can only be read on certain types of partition table. This works on "gpt" but not on "mbr" partitions. part-get-parttype part-get-parttype device This command examines the partition table on "device" and returns the partition table type (format) being used. 
Common return values include: "msdos" (a DOS/Windows style MBR partition table), "gpt" (a GPT/EFI-style partition table). Other values are possible, although unusual. See "part-init" for a full list. part-init part-init device parttype This creates an empty partition table on "device" of one of the partition types listed below. Usually "parttype" should be either "msdos" or "gpt" (for large disks). Initially there are no partitions. Following this, you should call "part-add" for each partition required. Possible values for "parttype" are: efi gpt Intel EFI / GPT partition table. This is recommended for >= 2 TB partitions that will be accessed from Linux and Intel-based Mac OS X. It also has limited backwards compatibility with the "mbr" format. mbr msdos The standard PC "Master Boot Record" (MBR) format used by MS-DOS and Windows. This partition type will only work for device sizes up to 2 TB. For large disks we recommend using "gpt". Other partition table types that may work but are not supported include: aix AIX disk labels. amiga rdb Amiga "Rigid Disk Block" format. bsd BSD disk labels. dasd DASD, used on IBM mainframes. dvh MIPS/SGI volumes. mac Old Mac partition format. Modern Macs use "gpt". pc98 NEC PC-98 format, common in Japan apparently. sun Sun disk labels. part-list part-list device This command parses the partition table on "device" and returns the list of partitions found. The fields in the returned structure are: part_num Partition number, counting from 1. part_start Start of the partition in bytes. To get sectors you have to divide by the device's sector size, see "blockdev-getss". part_end End of the partition in bytes. part_size Size of the partition in bytes. part-set-bootable part-set-bootable device partnum true|false This sets the bootable flag on partition numbered "partnum" on device "device". Note that partitions are numbered from 1. 
The bootable flag is used by some operating systems (notably Windows) to determine which partition to boot from. It is by no means universally recognized. part-set-gpt-type part-set-gpt-type device partnum guid Set the type GUID of numbered GPT partition "partnum" to "guid". Return an error if the partition table of "device" isn't GPT, or if "guid" is not a valid GUID. See http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs for a useful list of type GUIDs. part-set-mbr-id part-set-mbr-id device partnum idbyte Sets the MBR type byte (also known as the ID byte) of the numbered partition "partnum" to "idbyte". Note that the type bytes quoted in most documentation are in fact hexadecimal numbers, but usually documented without any leading "0x" which might be confusing. Note that only MBR (old DOS-style) partitions have type bytes. You will get undefined results for other partition table types (see "part-get-parttype"). part-set-name part-set-name device partnum name This sets the partition name on partition numbered "partnum" on device "device". Note that partitions are numbered from 1. The partition name can only be set on certain types of partition table. This works on "gpt" but not on "mbr" partitions. part-to-dev part-to-dev partition This function takes a partition name (eg. "/dev/sdb1") and removes the partition number, returning the device name (eg. "/dev/sdb"). The named partition must exist, for example as a string returned from "list-partitions". See also "part-to-partnum", "device-index". part-to-partnum part-to-partnum partition This function takes a partition name (eg. "/dev/sdb1") and returns the partition number (eg. 1). The named partition must exist, for example as a string returned from "list-partitions". See also "part-to-dev". ping-daemon ping-daemon This is a test probe into the guestfs daemon running inside the hypervisor. 
Calling this function checks that the daemon responds to the ping message, without affecting the daemon or attached block device(s) in any other way. pread pread path count offset This command lets you read part of a file. It reads "count" bytes of the file, starting at "offset", from file "path". This may read fewer bytes than requested. For further details see the pread(2) system call. See also "pwrite", "pread-device". Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). pread-device pread-device device count offset This command lets you read part of a block device. It reads "count" bytes of "device", starting at "offset". This may read fewer bytes than requested. For further details see the pread(2) system call. See also "pread". Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). pvchange-uuid pvchange-uuid device Generate a new random UUID for the physical volume "device". pvchange-uuid-all pvchange-uuid-all Generate new random UUIDs for all physical volumes. pvcreate pvcreate device This creates an LVM physical volume on the named "device", where "device" should usually be a partition name such as "/dev/sda1". pvremove pvremove device This wipes a physical volume "device" so that LVM will no longer recognise it. The implementation uses the "pvremove" command which refuses to wipe physical volumes that contain any volume groups, so you have to remove those first. pvresize pvresize device This resizes (expands or shrinks) an existing LVM physical volume to match the new size of the underlying device. pvresize-size pvresize-size device size This command is the same as "pvresize" except that it allows you to specify the new size (in bytes) explicitly. pvs pvs List all the physical volumes detected. This is the equivalent of the pvs(8) command. This returns a list of just the device names that contain PVs (eg. 
"/dev/sda2"). See also "pvs-full". pvs-full pvs-full List all the physical volumes detected. This is the equivalent of the pvs(8) command. The "full" version includes all fields. pvuuid pvuuid device This command returns the UUID of the LVM PV "device". pwrite pwrite path content offset This command writes to part of a file. It writes the data buffer "content" to the file "path" starting at offset "offset". This command implements the pwrite(2) system call, and like that system call it may not write the full data requested. The return value is the number of bytes that were actually written to the file. This could even be 0, although short writes are unlikely for regular files in ordinary circumstances. See also "pread", "pwrite-device". Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). pwrite-device pwrite-device device content offset This command writes to part of a device. It writes the data buffer "content" to "device" starting at offset "offset". This command implements the pwrite(2) system call, and like that system call it may not write the full data requested (although short writes to disk devices and partitions are probably impossible with standard Linux kernels). See also "pwrite". Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). read-file read-file path This calls returns the contents of the file "path" as a buffer. Unlike "cat", this function can correctly handle files that contain embedded ASCII NUL characters. read-lines read-lines path Return the contents of the file named "path". The file contents are returned as a list of lines. Trailing "LF" and "CRLF" character sequences are not returned. Note that this function cannot correctly handle binary files (specifically, files containing "" character which is treated as end of string). 
For those you need to use the "read-file" function and split the buffer into lines yourself. readdir readdir dir This returns the list of directory entries in directory "dir". All entries in the directory are returned, including "." and "..". The entries are not sorted, but returned in the same order as the underlying filesystem. Also this call returns basic file type information about each file. The "ftyp" field will contain one of the following characters: 'b' Block special 'c' Char special 'd' Directory 'f' FIFO (named pipe) 'l' Symbolic link 'r' Regular file 's' Socket 'u' Unknown file type '?' The readdir(3) call returned a "d_type" field with an unexpected value. This function is primarily intended for use by programs. To get a simple list of names, use "ls". To get a printable directory for human consumption, use "ll". Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). readlink readlink path This command reads the target of a symbolic link. readlinklist readlinklist path 'names ...' This call allows you to do a "readlink" operation on multiple files, where all files are in the directory "path". "names" is the list of files from this directory. On return you get a list of strings, with a one-to-one correspondence to the "names" list. Each string is the value of the symbolic link. If the readlink(2) operation fails on any name, then the corresponding result string is the empty string "". However the whole operation is completed even if there were readlink(2) errors, and so you can call this function with names where you don't know if they are symbolic links already (albeit slightly less efficiently). This call is intended for programs that want to efficiently list a directory's contents without making many round-trips. realpath realpath path Return the canonicalized absolute pathname of "path". The returned path has no ".", ".." or symbolic link path elements. 
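The "." and ".." elimination described for "realpath" can be sketched in pure Python (an illustration only, not the libguestfs implementation; the real call also resolves symbolic links inside the guest filesystem, which this sketch deliberately omits, and the function name `canonicalize` is invented here):

```python
def canonicalize(path):
    """Collapse "." and ".." elements of an absolute path.

    Symlink resolution, which the real realpath also performs,
    is omitted from this sketch.
    """
    stack = []
    for elem in path.split("/"):
        if elem in ("", "."):
            continue            # skip empty and "." components
        elif elem == "..":
            if stack:
                stack.pop()     # ".." removes the previous component
        else:
            stack.append(elem)
    return "/" + "/".join(stack)

print(canonicalize("/usr/./local/../bin"))   # -> /usr/bin
```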
remount remount mountpoint [rw:true|false] This call allows you to change the "rw" (readonly/read-write) flag on an already mounted filesystem at "mountpoint", converting a readonly filesystem to be read-write, or vice-versa. Note that at the moment you must supply the "optional" "rw" parameter. In future we may allow other flags to be adjusted. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". remove-drive remove-drive label This function is conceptually the opposite of "add-drive-opts". It removes the drive that was previously added with label "label". Note that in order to remove drives, you have to add them with labels (see the optional "label" argument to "add-drive-opts"). If you didn't use a label, then they cannot be removed. You can call this function before or after launching the handle. If called after launch, if the backend supports it, we try to hot unplug the drive: see "HOTPLUGGING" in guestfs(3). The disk must not be in use (eg. mounted) when you do this. We try to detect if the disk is in use and stop you from doing this. removexattr removexattr xattr path This call removes the extended attribute named "xattr" of the file "path". See also: "lremovexattr", attr(5). rename rename oldpath newpath Rename a file to a new place on the same filesystem. This is the same as the Linux rename(2) system call. In most cases you are better to use "mv" instead. resize2fs resize2fs device This resizes an ext2, ext3 or ext4 filesystem to match the size of the underlying device. See also "RESIZE2FS ERRORS" in guestfs(3). resize2fs-M resize2fs-M device This command is the same as "resize2fs", but the filesystem is resized to its minimum size. This works like the -M option to the "resize2fs" command. To get the resulting size of the filesystem you should call "tune2fs-l" and read the "Block size" and "Block count" values. These two numbers, multiplied together, give the resulting size of the minimal filesystem in bytes. 
See also "RESIZE2FS ERRORS" in guestfs(3). resize2fs-size resize2fs-size device size This command is the same as "resize2fs" except that it allows you to specify the new size (in bytes) explicitly. See also "RESIZE2FS ERRORS" in guestfs(3). rm rm path Remove the single file "path". rm-f rm-f path Remove the file "path". If the file doesn't exist, that error is ignored. (Other errors, eg. I/O errors or bad paths, are not ignored) This call cannot remove directories. Use "rmdir" to remove an empty directory, or "rm-rf" to remove directories recursively. rm-rf rm-rf path Remove the file or directory "path", recursively removing the contents if its a directory. This is like the "rm -rf" shell command. rmdir rmdir path Remove the single directory "path". rmmountpoint rmmountpoint exemptpath This calls removes a mountpoint that was previously created with "mkmountpoint". See "mkmountpoint" for full details. rsync rsync src dest [archive:true|false] [deletedest:true|false] This call may be used to copy or synchronize two directories under the same libguestfs handle. This uses the rsync(1) program which uses a fast algorithm that avoids copying files unnecessarily. "src" and "dest" are the source and destination directories. Files are copied from "src" to "dest". The optional arguments are: "archive" Turns on archive mode. This is the same as passing the --archive flag to "rsync". "deletedest" Delete files at the destination that do not exist at the source. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". rsync-in rsync-in remote dest [archive:true|false] [deletedest:true|false] This call may be used to copy or synchronize the filesystem on the host or on a remote computer with the filesystem within libguestfs. This uses the rsync(1) program which uses a fast algorithm that avoids copying files unnecessarily. This call only works if the network is enabled. See "set-network" or the --network option to various tools like guestfish(1). 
Files are copied from the remote server and directory specified by "remote" to the destination directory "dest". The format of the remote server string is defined by rsync(1). Note that there is no way to supply a password or passphrase so the target must be set up not to require one. The optional arguments are the same as those of "rsync". This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". rsync-out rsync-out src remote [archive:true|false] [deletedest:true|false] This call may be used to copy or synchronize the filesystem within libguestfs with a filesystem on the host or on a remote computer. This uses the rsync(1) program which uses a fast algorithm that avoids copying files unnecessarily. This call only works if the network is enabled. See "set-network" or the --network option to various tools like guestfish(1). Files are copied from the source directory "src" to the remote server and directory specified by "remote". The format of the remote server string is defined by rsync(1). Note that there is no way to supply a password or passphrase so the target must be set up not to require one. The optional arguments are the same as those of "rsync". Globbing does not happen on the "src" parameter. In programs which use the API directly you have to expand wildcards yourself (see "glob-expand"). In guestfish you can use the "glob" command (see "glob" in guestfish(1)), for example: ><fs> glob rsync-out /* rsync://remote/ This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". scrub-device scrub-device device This command writes patterns over "device" to make data retrieval more difficult. It is an interface to the scrub(1) program. See that manual page for more details. scrub-file scrub-file file This command writes patterns over a file to make data retrieval more difficult. The file is removed after scrubbing. It is an interface to the scrub(1) program. See that manual page for more details. 
scrub-freespace scrub-freespace dir This command creates the directory "dir" and then fills it with files until the filesystem is full, scrubs the files as for "scrub-file", and deletes them. The intention is to scrub any free space on the partition containing "dir". It is an interface to the scrub(1) program. See that manual page for more details. set-append append set-append append This function is used to add additional options to the guest kernel command line. The default is "NULL" unless overridden by setting the "LIBGUESTFS_APPEND" environment variable. Setting "append" to "NULL" means no additional options are passed (libguestfs always adds a few of its own). set-attach-method attach-method set-attach-method backend Set the method that libguestfs uses to connect to the backend guestfsd daemon. See "BACKEND" in guestfs(3). This function is deprecated. In new code, use the "set-backend" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. set-autosync autosync set-autosync true|false If "autosync" is true, this enables autosync. Libguestfs will make a best effort attempt to make filesystems consistent and synchronized when the handle is closed (also if the program exits without closing handles). This is enabled by default (since libguestfs 1.5.24, previously it was disabled by default). set-backend backend set-backend backend Set the method that libguestfs uses to connect to the backend guestfsd daemon. This handle property was previously called the "attach method". See "BACKEND" in guestfs(3). set-backend-settings set-backend-settings 'settings ...' Set a list of zero or more settings which are passed through to the current backend. Each setting is a string which is interpreted in a backend-specific way, or ignored if not understood by the backend. 
The default value is an empty list, unless the environment variable "LIBGUESTFS_BACKEND_SETTINGS" was set when the handle was created. This environment variable contains a colon-separated list of settings. See "BACKEND" in guestfs(3), "BACKEND SETTINGS" in guestfs(3). set-cachedir cachedir set-cachedir cachedir Set the directory used by the handle to store the appliance cache, when using a supermin appliance. The appliance is cached and shared between all handles which have the same effective user ID. The environment variables "LIBGUESTFS_CACHEDIR" and "TMPDIR" control the default value: If "LIBGUESTFS_CACHEDIR" is set, then that is the default. Else if "TMPDIR" is set, then that is the default. Else "/var/tmp" is the default. set-direct direct set-direct true|false If the direct appliance mode flag is enabled, then stdin and stdout are passed directly through to the appliance once it is launched. One consequence of this is that log messages aren't caught by the library and handled by "set-log-message-callback", but go straight to stdout. You probably don't want to use this unless you know what you are doing. The default is disabled. set-e2attrs set-e2attrs file attrs [clear:true|false] This sets or clears the file attributes "attrs" associated with the inode "file". "attrs" is a string of characters representing file attributes. See "get-e2attrs" for a list of possible attributes. Not all attributes can be changed. If optional boolean "clear" is not present or false, then the "attrs" listed are set in the inode. If "clear" is true, then the "attrs" listed are cleared in the inode. In both cases, other attributes not present in the "attrs" string are left unchanged. These attributes are only present when the file is located on an ext2/3/4 filesystem. Using this call on other filesystem types will result in an error. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". 
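The set/clear semantics of "set-e2attrs" (attributes listed in "attrs" are set or cleared; attributes not listed are left unchanged) can be modelled with a small Python sketch. This is an illustration of the described behaviour only, not libguestfs code, and the function name is invented:

```python
def apply_e2attrs(current, attrs, clear=False):
    """Apply set-e2attrs semantics to an attribute string.

    current: attributes currently on the inode, e.g. "ai"
    attrs:   attributes to set (clear=False) or clear (clear=True)
    Attributes not named in attrs are left unchanged.
    """
    result = set(current)
    if clear:
        result -= set(attrs)   # clear only the listed attributes
    else:
        result |= set(attrs)   # set only the listed attributes
    return "".join(sorted(result))

print(apply_e2attrs("ai", "s"))               # set 's'   -> "ais"
print(apply_e2attrs("ais", "a", clear=True))  # clear 'a' -> "is"
```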
set-e2generation set-e2generation file generation This sets the ext2 file generation of a file. See "get-e2generation". set-e2label set-e2label device label This sets the ext2/3/4 filesystem label of the filesystem on "device" to "label". Filesystem labels are limited to 16 characters. You can use either "tune2fs-l" or "get-e2label" to return the existing label on a filesystem. This function is deprecated. In new code, use the "set-label" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. set-e2uuid set-e2uuid device uuid This sets the ext2/3/4 filesystem UUID of the filesystem on "device" to "uuid". The format of the UUID and alternatives such as "clear", "random" and "time" are described in the tune2fs(8) manpage. You can use "vfs-uuid" to return the existing UUID of a filesystem. This function is deprecated. In new code, use the "set-uuid" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. set-hv hv set-hv hv Set the hypervisor binary that we will use. The hypervisor depends on the backend, but is usually the location of the qemu/KVM hypervisor. For the uml backend, it is the location of the "linux" or "vmlinux" binary. The default is chosen when the library was compiled by the configure script. You can also override this by setting the "LIBGUESTFS_HV" environment variable. Note that you should call this function as early as possible after creating the handle. This is because some pre-launch operations depend on testing qemu features (by running "qemu -help"). If the qemu binary changes, we don't retest features, and so you might see inconsistent results. Using the environment variable "LIBGUESTFS_HV" is safest of all since that picks the qemu binary at the same time as the handle is created. 
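The "set-e2uuid" call above accepts either a literal UUID or the tune2fs(8) keywords "clear", "random" and "time". A hedged Python sketch of the argument check a caller might perform before issuing the command (the helper name is invented; this is not part of the libguestfs API):

```python
import uuid

def valid_e2uuid(arg):
    """Return True if arg is acceptable to set-e2uuid:
    a literal UUID, or one of the tune2fs keywords."""
    if arg in ("clear", "random", "time"):
        return True
    try:
        uuid.UUID(arg)       # raises ValueError on malformed UUIDs
        return True
    except ValueError:
        return False

print(valid_e2uuid("random"))                                # True
print(valid_e2uuid("01234567-89ab-cdef-0123-456789abcdef"))  # True
print(valid_e2uuid("not-a-uuid"))                            # False
```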
set-label set-label mountable label Set the filesystem label on "mountable" to "label". Only some filesystem types support labels, and libguestfs supports setting labels on only a subset of these. ext2, ext3, ext4 Labels are limited to 16 bytes. NTFS Labels are limited to 128 unicode characters. XFS The label is limited to 12 bytes. The filesystem must not be mounted when trying to set the label. btrfs The label is limited to 256 bytes and some characters are not allowed. Setting the label on a btrfs subvolume will set the label on its parent filesystem. The filesystem must not be mounted when trying to set the label. To read the label on a filesystem, call "vfs-label". set-libvirt-requested-credential set-libvirt-requested-credential index cred After requesting the "index"'th credential from the user, call this function to pass the answer back to libvirt. See "LIBVIRT AUTHENTICATION" in guestfs(3) for documentation and example code. set-libvirt-supported-credentials set-libvirt-supported-credentials 'creds ...' Call this function before setting an event handler for "GUESTFS_EVENT_LIBVIRT_AUTH", to supply the list of credential types that the program knows how to process. The "creds" list must be a non-empty list of strings. Possible strings are: "username" "authname" "language" "cnonce" "passphrase" "echoprompt" "noechoprompt" "realm" "external" See libvirt documentation for the meaning of these credential types. See "LIBVIRT AUTHENTICATION" in guestfs(3) for documentation and example code. set-memsize memsize set-memsize memsize This sets the memory size in megabytes allocated to the hypervisor. This only has any effect if called before "launch". You can also change this by setting the environment variable "LIBGUESTFS_MEMSIZE" before the handle is created. For more information on the architecture of libguestfs, see guestfs(3). set-network network set-network true|false If "network" is true, then the network is enabled in the libguestfs appliance. 
The default is false. This affects whether commands are able to access the network (see "RUNNING COMMANDS" in guestfs(3)). You must call this before calling "launch", otherwise it has no effect. set-path path set-path searchpath Set the path that libguestfs searches for kernel and initrd.img. The default is "$libdir/guestfs" unless overridden by setting the "LIBGUESTFS_PATH" environment variable. Setting "path" to "NULL" restores the default path. set-pgroup pgroup set-pgroup true|false If "pgroup" is true, child processes are placed into their own process group. The practical upshot of this is that signals like "SIGINT" (from users pressing "^C") won't be received by the child process. The default for this flag is false, because usually you want "^C" to kill the subprocess. Guestfish sets this flag to true when used interactively, so that "^C" can cancel long-running commands gracefully (see "user-cancel"). set-program program set-program program Set the program name. This is an informative string which the main program may optionally set in the handle. When the handle is created, the program name in the handle is set to the basename from "argv[0]". If that was not possible, it is set to the empty string (but never "NULL"). set-qemu qemu set-qemu hv Set the hypervisor binary (usually qemu) that we will use. The default is chosen when the library was compiled by the configure script. You can also override this by setting the "LIBGUESTFS_HV" environment variable. Setting "hv" to "NULL" restores the default qemu binary. Note that you should call this function as early as possible after creating the handle. This is because some pre-launch operations depend on testing qemu features (by running "qemu -help"). If the qemu binary changes, we don't retest features, and so you might see inconsistent results. Using the environment variable "LIBGUESTFS_HV" is safest of all since that picks the qemu binary at the same time as the handle is created. This function is deprecated. 
In new code, use the "set-hv" call Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. set-recovery-proc recovery-proc set-recovery-proc true|false If this is called with the parameter "false" then "launch" does not create a recovery process. The purpose of the recovery process is to stop runaway hypervisor processes in the case where the main program aborts abruptly. This only has any effect if called before "launch", and the default is true. About the only time when you would want to disable this is if the main process will fork itself into the background ("daemonize" itself). In this case the recovery process thinks that the main program has disappeared and so kills the hypervisor, which is not very helpful. set-selinux selinux set-selinux true|false This sets the selinux flag that is passed to the appliance at boot time. The default is "selinux=0" (disabled). Note that if SELinux is enabled, it is always in Permissive mode ("enforcing=0"). set-smp smp set-smp smp Change the number of virtual CPUs assigned to the appliance. The default is 1. Increasing this may improve performance, though often it has no effect. This function must be called before "launch". set-tmpdir tmpdir set-tmpdir tmpdir Set the directory used by the handle to store temporary files. The environment variables "LIBGUESTFS_TMPDIR" and "TMPDIR" control the default value: If "LIBGUESTFS_TMPDIR" is set, then that is the default. Else if "TMPDIR" is set, then that is the default. Else "/tmp" is the default. set-trace trace set-trace true|false If the command trace flag is set to 1, then libguestfs calls, parameters and return values are traced. If you want to trace C API calls into libguestfs (and other libraries) then possibly a better way is to use the external ltrace(1) command. Command traces are disabled unless the environment variable "LIBGUESTFS_TRACE" is defined and set to 1. 
Trace messages are normally sent to "stderr", unless you register a callback to send them somewhere else (see "set-event-callback"). set-uuid set-uuid device uuid Set the filesystem UUID on "device" to "uuid". Only some filesystem types support setting UUIDs. To read the UUID on a filesystem, call "vfs-uuid". set-verbose verbose set-verbose true|false If "verbose" is true, this turns on verbose messages. Verbose messages are disabled unless the environment variable "LIBGUESTFS_DEBUG" is defined and set to 1. Verbose messages are normally sent to "stderr", unless you register a callback to send them somewhere else (see "set-event-callback"). setcon setcon context This sets the SELinux security context of the daemon to the string "context". See the documentation about SELINUX in guestfs(3). setxattr setxattr xattr val vallen path This call sets the extended attribute named "xattr" of the file "path" to the value "val" (of length "vallen"). The value is arbitrary 8 bit data. sfdisk sfdisk device cyls heads sectors 'lines ...' This is a direct interface to the sfdisk(8) program for creating partitions on block devices. "device" should be a block device, for example "/dev/sda". "cyls", "heads" and "sectors" are the number of cylinders, heads and sectors on the device, which are passed directly to sfdisk as the -C, -H and -S parameters. If you pass 0 for any of these, then the corresponding parameter is omitted. Usually for 'large' disks, you can just pass 0 for these, but for small (floppy-sized) disks, sfdisk (or rather, the kernel) cannot work out the right geometry and you will need to tell it. "lines" is a list of lines that we feed to "sfdisk". For more information refer to the sfdisk(8) manpage. To create a single partition occupying the whole disk, you would pass "lines" as a single element list, with the single element being the string "," (comma). This function is deprecated. 
In new code, use the "part-add" call Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. sfdiskM sfdiskM device 'lines ...' This is a simplified interface to the "sfdisk" command, where partition sizes are specified in megabytes only (rounded to the nearest cylinder) and you don't need to specify the cyls, heads and sectors parameters which were rarely if ever used anyway. This function is deprecated. In new code, use the "part-add" call Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. sfdisk-N sfdisk-N device partnum cyls heads sectors line This runs sfdisk(8) option to modify just the single partition "n" (note: "n" counts from 1). For other parameters, see "sfdisk". You should usually pass 0 for the This function is deprecated. In new code, use the "part-add" call Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. sfdisk-disk-geometry sfdisk-disk-geometry device This displays the disk geometry of "device" read from the partition table. Especially in the case where the underlying block device has been resized, this can be different from the kernel's idea of the geometry (see "sfdisk-kernel-geometry"). The result is in human-readable format, and not designed to be parsed. sfdisk-kernel-geometry sfdisk-kernel-geometry device This displays the kernel's idea of the geometry of "device". The result is in human-readable format, and not designed to be parsed. sfdisk-l sfdisk-l device This displays the partition table on "device", in the human-readable output of the sfdisk(8) command. It is not intended to be parsed. This function is deprecated. 
In new code, use the "part-list" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. sh sh command This call runs a command from the guest filesystem via the guest's "/bin/sh". This is like "command", but passes the command to: /bin/sh -c "command" Depending on the guest's shell, this usually results in wildcards being expanded, shell expressions being interpolated and so on. All the provisos about "command" apply to this call. sh-lines sh-lines command This is the same as "sh", but splits the result into a list of lines. shutdown shutdown This is the opposite of "launch". It performs an orderly shutdown of the backend process(es). If the autosync flag is set (which is the default) then the disk image is synchronized. If the subprocess exits with an error then this function will return an error, which should not be ignored (it may indicate that the disk image could not be written out properly). It is safe to call this multiple times. Extra calls are ignored. This call does not close or free up the handle. You still need to call "close" afterwards. "close" will call this if you don't do it explicitly, but note that any errors are ignored in that case. sleep sleep secs Sleep for "secs" seconds. stat stat path Returns file information for the given "path". This is the same as the stat(2) system call. statvfs statvfs path Returns file system statistics for any mounted file system. "path" should be a file or directory in the mounted file system (typically it is the mount point itself, but it doesn't need to be). This is the same as the statvfs(2) system call. strings strings path This runs the strings(1) command on a file and returns the list of printable strings found. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3).
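Since "stat" and "statvfs" are documented as direct equivalents of the stat(2) and statvfs(2) system calls, their fields can be explored locally without guestfish. A minimal Python sketch (run on the host root filesystem; guestfish returns the same fields for a path inside the guest):

```python
import os

# statvfs(2) on the host "/"; guestfish's statvfs returns the same
# structure for a mounted guest filesystem.
st = os.statvfs("/")

# As with statvfs(2), block counts are in units of the fragment
# size f_frsize, so byte totals are products of the two.
total_bytes = st.f_frsize * st.f_blocks
free_bytes = st.f_frsize * st.f_bfree

assert total_bytes >= free_bytes >= 0
assert st.f_bsize > 0
```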
strings-e strings-e encoding path This is like the "strings" command, but allows you to specify the encoding of strings that are looked for in the source file "path". Allowed encodings are: s Single 7-bit-byte characters like ASCII and the ASCII-compatible parts of ISO-8859-X (this is what "strings" uses). S Single 8-bit-byte characters. b 16-bit big endian strings such as those encoded in UTF-16BE or UCS-2BE. l (lower case letter L) 16-bit little endian such as UTF-16LE and UCS-2LE. This is useful for examining binaries in Windows guests. B 32-bit big endian such as UCS-4BE. L 32-bit little endian such as UCS-4LE. The returned strings are transcoded to UTF-8. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). swapoff-device swapoff-device device This command disables the libguestfs appliance swap device or partition named "device". See "swapon-device". swapoff-file swapoff-file file This command disables the libguestfs appliance swap on file. swapoff-label swapoff-label label This command disables the libguestfs appliance swap on labeled swap partition. swapoff-uuid swapoff-uuid uuid This command disables the libguestfs appliance swap partition with the given UUID. swapon-device swapon-device device This command enables the libguestfs appliance to use the swap device or partition named "device". The increased memory is made available for all commands, for example those run using "command" or "sh". Note that you should not swap to existing guest swap partitions unless you know what you are doing. They may contain hibernation information, or other information that the guest doesn't want you to trash. You also risk leaking information about the host to the guest this way. Instead, attach a new host device to the guest and swap on that. swapon-file swapon-file file This command enables swap to a file. See "swapon-device" for other notes. 
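The encoding letters accepted by "strings-e" map onto familiar codec names. A hedged Python sketch (no libguestfs needed, sample text invented for illustration) of what the "l" and "b" encodings correspond to, and of the transcoding to UTF-8 mentioned above:

```python
# "l" = 16-bit little endian (UTF-16LE / UCS-2LE), common inside
# Windows binaries.
le_bytes = "C:\\Windows".encode("utf-16le")
assert le_bytes.decode("utf-16le") == "C:\\Windows"

# "b" = 16-bit big endian (UTF-16BE / UCS-2BE); same text, opposite
# byte order, so the raw bytes differ.
be_bytes = "C:\\Windows".encode("utf-16be")
assert be_bytes != le_bytes

# guestfish transcodes matched strings back to UTF-8 before
# returning them.
assert be_bytes.decode("utf-16be").encode("utf-8") == b"C:\\Windows"
```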
swapon-label swapon-label label This command enables swap to a labeled swap partition. See "swapon-device" for other notes. swapon-uuid swapon-uuid uuid This command enables swap to a swap partition with the given UUID. See "swapon-device" for other notes. sync sync This syncs the disk, so that any writes are flushed through to the underlying disk image. You should always call this if you have modified a disk image, before closing the handle. syslinux syslinux device [directory:..] Install the SYSLINUX bootloader on "device". The device parameter must be either a whole disk formatted as a FAT filesystem, or a partition formatted as a FAT filesystem. In the latter case, the partition should be marked as "active" ("part-set-bootable") and a Master Boot Record must be installed (eg. using "pwrite-device") on the first sector of the whole disk. The SYSLINUX package comes with some suitable Master Boot Records. See the syslinux(1) man page for further information. The optional arguments are: "directory" Install SYSLINUX in the named subdirectory, instead of in the root directory of the FAT filesystem. Additional configuration can be supplied to SYSLINUX by placing a file called "syslinux.cfg" on the FAT filesystem, either in the root directory, or under "directory" if that optional argument is being used. For further information about the contents of this file, see syslinux(1). This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". tail tail path This command returns up to the last 10 lines of a file as a list of strings. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). tail-n tail-n nrlines path If the parameter "nrlines" is a positive number, this returns the last "nrlines" lines of the file "path". If the parameter "nrlines" is a negative number, this returns lines from the file "path", starting with the "-nrlines"th line.
If the parameter "nrlines" is zero, this returns an empty list. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). tar-in tar-in-opts tar-in (tarfile|-) directory [compress:..] This command uploads and unpacks local file "tarfile" into "directory". The optional "compress" flag controls compression. If not given, then the input should be an uncompressed tar file. Otherwise one of the following strings may be given to select the compression type of the input file: "compress", "gzip", "bzip2", "xz", "lzop". (Note that not all builds of libguestfs will support all of these compression types). This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". tar-out tar-out-opts tar-out directory (tarfile|-) [compress:..] [numericowner:true|false] [excludes:..] This command packs the contents of "directory" and downloads it to local file "tarfile". The optional "compress" flag controls compression. If not given, then the output will be an uncompressed tar file. Otherwise one of the following strings may be given to select the compression type of the output file: "compress", "gzip", "bzip2", "xz", "lzop". (Note that not all builds of libguestfs will support all of these compression types). The other optional arguments are: "excludes" A list of wildcards. Files are excluded if they match any of the wildcards. "numericowner" If set to true, the output tar file will contain UID/GID numbers instead of user/group names. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". tgz-in tgz-in (tarball|-) directory This command uploads and unpacks local file "tarball" (a gzip compressed tar file) into "directory". This function is deprecated. In new code, use the "tar-in" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. tgz-out tgz-out directory (tarball|-) This command packs the contents of "directory" and downloads it to local file "tarball". This function is deprecated.
In new code, use the "tar-out" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. touch touch path Touch acts like the touch(1) command. It can be used to update the timestamps on a file, or, if the file does not exist, to create a new zero-length file. This command only works on regular files, and will fail on other file types such as directories, symbolic links, block special etc. truncate truncate path This command truncates "path" to a zero-length file. The file must exist already. truncate-size truncate-size path size This command truncates "path" to size "size" bytes. The file must exist already. If the current file size is less than "size" then the file is extended to the required size with zero bytes. This creates a sparse file (ie. disk blocks are not allocated for the file until you write to it). To create a non-sparse file of zeroes, use "fallocate64" instead. tune2fs tune2fs device [force:true|false] [maxmountcount:N] [mountcount:N] [errorbehavior:..] [group:N] [intervalbetweenchecks:N] [reservedblockspercentage:N] [lastmounteddirectory:..] [reservedblockscount:N] [user:N] This call allows you to adjust various filesystem parameters of an ext2/ext3/ext4 filesystem called "device". The optional parameters are: "force" Force tune2fs to complete the operation even in the face of errors. This is the same as the tune2fs "-f" option. "maxmountcount" Set the number of mounts after which the filesystem is checked by e2fsck(8). If this is 0 then the number of mounts is disregarded. This is the same as the tune2fs "-c" option. "mountcount" Set the number of times the filesystem has been mounted. This is the same as the tune2fs "-C" option. "errorbehavior" Change the behavior of the kernel code when errors are detected. Possible values currently are: "continue", "remount-ro", "panic". In practice these options don't really make any difference, particularly for write errors.
This is the same as the tune2fs "-e" option. "group" Set the group which can use reserved filesystem blocks. This is the same as the tune2fs "-g" option except that it can only be specified as a number. "intervalbetweenchecks" Adjust the maximal time between two filesystem checks (in seconds). If the option is passed as 0 then time-dependent checking is disabled. This is the same as the tune2fs "-i" option. "reservedblockspercentage" Set the percentage of the filesystem which may only be allocated by privileged processes. This is the same as the tune2fs "-m" option. "lastmounteddirectory" Set the last mounted directory. This is the same as the tune2fs "-M" option. "reservedblockscount" Set the number of reserved filesystem blocks. This is the same as the tune2fs "-r" option. "user" Set the user who can use the reserved filesystem blocks. This is the same as the tune2fs "-u" option except that it can only be specified as a number. To get the current values of filesystem parameters, see "tune2fs-l". For precise details of how tune2fs works, see the tune2fs(8) man page. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". tune2fs-l tune2fs-l device This returns the contents of the ext2, ext3 or ext4 filesystem superblock on "device". It is the same as running "tune2fs -l device". See the tune2fs(8) manpage for more details. The list of fields returned isn't clearly defined, and depends on both the version of "tune2fs" that libguestfs was built against, and the filesystem itself. txz-in txz-in (tarball|-) directory This command uploads and unpacks local file "tarball" (an xz compressed tar file) into "directory". This function is deprecated. In new code, use the "tar-in" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. txz-out txz-out directory (tarball|-) This command packs the contents of "directory" and downloads it to local file "tarball" (as an xz compressed tar archive).
This function is deprecated. In new code, use the "tar-out" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. umask umask mask This function sets the mask used for creating new files and device nodes in the filesystem. Typical umask values would be 022 which creates new files with permissions like "-rw-r--r--" or "-rwxr-xr-x", and 002 which creates new files with permissions like "-rw-rw-r--" or "-rwxrwxr-x". The default umask is 022. This is important because it means that directories and device nodes will be created with 0644 or 0755 mode even if you specify 0777. This call returns the previous umask. umount unmount umount-opts umount pathordevice [force:true|false] [lazyunmount:true|false] This unmounts the given filesystem. The filesystem may be specified either by its mountpoint (path) or the device which contains the filesystem. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". umount-all unmount-all umount-all This unmounts all mounted filesystems. Some internal mounts are not unmounted by this call. umount-local umount-local [retry:true|false] If libguestfs is exporting the filesystem on a local mountpoint, then this unmounts it. See "MOUNT LOCAL" in guestfs(3) for full documentation. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". upload upload (filename|-) remotefilename Upload local file "filename" to "remotefilename" on the filesystem. "filename" can also be a named pipe. upload-offset upload-offset (filename|-) remotefilename offset Upload local file "filename" to "remotefilename" on the filesystem. "remotefilename" is overwritten starting at the byte "offset" specified. The intention is to overwrite parts of existing files or devices, although if a non-existent file is specified then it is created with a "hole" before "offset". The size of the data written is implicit in the size of the source "filename".
Note that there is no limit on the amount of data that can be uploaded with this call, unlike with "pwrite", and this call always writes the full amount unless an error occurs. user-cancel user-cancel This function cancels the current upload or download operation. Unlike most other libguestfs calls, this function is signal safe and thread safe. You can call it from a signal handler or from another thread, without needing to do any locking. The transfer that was in progress (if there is one) will stop shortly afterwards, and will return an error. The errno (see "guestfs_last_errno") is set to "EINTR", so you can test for this to find out if the operation was cancelled or failed because of another error. No cleanup is performed: for example, if a file was being uploaded then after cancellation there may be a partially uploaded file. It is the caller's responsibility to clean up if necessary. There are two common places that you might call "user-cancel": In an interactive text-based program, you might call it from a "SIGINT" signal handler so that pressing "^C" cancels the current operation. (You also need to call "guestfs_set_pgroup" so that child processes don't receive the "^C" signal.) In a graphical program, when the main thread is displaying a progress bar with a cancel button, wire up the cancel button to call this function. utimens utimens path atsecs atnsecs mtsecs mtnsecs This command sets the timestamps of a file with nanosecond precision. "atsecs, atnsecs" are the last access time (atime) in secs and nanoseconds from the epoch. "mtsecs, mtnsecs" are the last modification time (mtime) in secs and nanoseconds from the epoch. If the *nsecs field contains the special value "-1" then the corresponding timestamp is set to the current time. (The *secs field is ignored in this case). If the *nsecs field contains the special value "-2" then the corresponding timestamp is left unchanged. (The *secs field is ignored in this case). utsname utsname This returns the kernel version of the appliance, where this is available.
This information is only useful for debugging. Nothing in the returned structure is defined by the API. version version Return the libguestfs version number that the program is linked against. Note that because of dynamic linking this is not necessarily the version of libguestfs that you compiled against. You can compile the program, and then at runtime dynamically link against a completely different "libguestfs.so" library. This call was added in version 1.0.58. In previous versions of libguestfs there was no way to get the version number. From C code you can use dynamic linker functions to find out if this symbol exists (if it doesn't, then it's an earlier version). The call returns a structure with four elements. The first three ("major", "minor" and "release") are numbers and correspond to the usual version triplet. The fourth element ("extra") is a string and is normally empty, but may be used for distro-specific information. To construct the original version string: "$major.$minor.$release$extra" Note: Don't use this call to test for availability of features. In enterprise distributions we backport features from later versions into earlier versions, making this an unreliable way to test for features. vfs-label vfs-label mountable This returns the label of the filesystem on "mountable". If the filesystem is unlabeled, this returns the empty string. To find a filesystem from the label, use "findfs-label". vfs-type vfs-type mountable This command gets the filesystem type corresponding to the filesystem on "mountable". For most filesystems, the result is the name of the Linux VFS module which would be used to mount this filesystem if you mounted it without specifying the filesystem type. For example a string such as "ext3" or "ntfs". vfs-uuid vfs-uuid mountable This returns the filesystem UUID of the filesystem on "mountable". If the filesystem does not have a UUID, this returns the empty string. To find a filesystem from the UUID, use "findfs-uuid". 
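The four-element structure returned by the "version" call can be recombined into the original version string exactly as described above. A small Python sketch; the field values here are made up for illustration ("extra" is normally empty):

```python
# Hypothetical values standing in for the struct returned by the
# `version` call: major, minor and release are numbers, extra is a
# string that is usually empty.
ver = {"major": 1, "minor": 20, "release": 5, "extra": ""}

# Reconstruct "$major.$minor.$release$extra" as the man page describes.
version_string = "%(major)d.%(minor)d.%(release)d%(extra)s" % ver
assert version_string == "1.20.5"
```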
vg-activate vg-activate true|false 'volgroups ...' This command activates or (if "activate" is false) deactivates all logical volumes in the listed volume groups "volgroups". This command is the same as running "vgchange -a y|n volgroups..." Note that if "volgroups" is an empty list then all volume groups are activated or deactivated. vg-activate-all vg-activate-all true|false This command activates or (if "activate" is false) deactivates all logical volumes in all volume groups. This command is the same as running "vgchange -a y|n" vgchange-uuid vgchange-uuid vg Generate a new random UUID for the volume group "vg". vgchange-uuid-all vgchange-uuid-all Generate new random UUIDs for all volume groups. vgcreate vgcreate volgroup 'physvols ...' This creates an LVM volume group called "volgroup" from the non-empty list of physical volumes "physvols". vglvuuids vglvuuids vgname Given a VG called "vgname", this returns the UUIDs of all the logical volumes created in this volume group. You can use this along with "lvs" and "lvuuid" calls to associate logical volumes and volume groups. vgmeta vgmeta vgname "vgname" is an LVM volume group. This command examines the volume group and returns its metadata. Note that the metadata is an internal structure used by LVM, subject to change at any time, and is provided for information only. vgpvuuids vgpvuuids vgname Given a VG called "vgname", this returns the UUIDs of all the physical volumes that this volume group resides on. You can use this along with "pvs" and "pvuuid" calls to associate physical volumes and volume groups. vgremove vgremove vgname Remove an LVM volume group "vgname", (for example "VG"). This also forcibly removes all logical volumes in the volume group (if any). vgrename vgrename volgroup newvolgroup Rename a volume group "volgroup" with the new name "newvolgroup". vgs vgs List all the volume groups detected. This is the equivalent of the vgs(8) command. This returns a list of just the volume group names that were detected (eg. "VolGroup00").
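The man page suggests pairing "vglvuuids" with "lvs" and "lvuuid" to associate logical volumes with their volume group. A sketch of that bookkeeping in Python, with entirely hypothetical device names and UUIDs standing in for real guestfish output:

```python
# Hypothetical output of `vglvuuids VolGroup00`: LV UUIDs per VG.
vg_to_lv_uuids = {"VolGroup00": ["aaaa-1111", "bbbb-2222"]}

# Hypothetical output of `lvs` followed by `lvuuid` per logical volume.
lv_uuid = {
    "/dev/VolGroup00/LogVol00": "aaaa-1111",
    "/dev/VolGroup00/LogVol01": "bbbb-2222",
}

# Join the two by UUID to map each LV path to its volume group.
lv_to_vg = {
    lv: vg
    for lv, uuid in lv_uuid.items()
    for vg, uuids in vg_to_lv_uuids.items()
    if uuid in uuids
}
assert lv_to_vg["/dev/VolGroup00/LogVol00"] == "VolGroup00"
assert lv_to_vg["/dev/VolGroup00/LogVol01"] == "VolGroup00"
```

The same join works for "vgpvuuids" with "pvs" and "pvuuid" to map physical volumes to volume groups.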
vgs-full vgs-full List all the volume groups detected. This is the equivalent of the vgs(8) command. The "full" version includes all fields. vgscan vgscan This rescans all block devices and rebuilds the list of LVM physical volumes, volume groups and logical volumes. vguuid vguuid vgname This command returns the UUID of the LVM VG named "vgname". wc-c wc-c path This command counts the characters in a file, using the "wc -c" external command. wc-l wc-l path This command counts the lines in a file, using the "wc -l" external command. wc-w wc-w path This command counts the words in a file, using the "wc -w" external command. wipefs wipefs device This command erases filesystem or RAID signatures from the specified "device" to make the filesystem invisible to libblkid. This does not erase the filesystem itself nor any other data from the "device". Compare with "zero" which zeroes the first few blocks of a device. write write path content This call creates a file called "path". The content of the file is the string "content" (which can contain any 8 bit data). write-append write-append path content This call appends "content" to the end of file "path". If "path" does not exist, then a new file is created. write-file write-file path content size This call creates a file called "path". The contents of the file is the string "content" (which can contain any 8 bit data), with length "size". As a special case, if "size" is 0 then the length is calculated using "strlen" (so in this case the content cannot contain embedded ASCII NULs). NB. Owing to a bug, writing content containing ASCII NUL characters does not work, even if the length is specified. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated.
In new code, use the "write" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. xfs-admin xfs-admin device [extunwritten:true|false] [imgfile:true|false] [v2log:true|false] [projid32bit:true|false] [lazycounter:true|false] [label:..] [uuid:..] Change the parameters of the XFS filesystem on "device". Devices that are mounted cannot be modified. Administrators must unmount filesystems before this call can modify parameters. Some of the parameters of a mounted filesystem can be examined and modified using the "xfs-info" and "xfs-growfs" calls. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". xfs-growfs xfs-growfs path [datasec:true|false] [logsec:true|false] [rtsec:true|false] [datasize:N] [logsize:N] [rtsize:N] [rtextsize:N] [maxpct:N] Grow the XFS filesystem mounted at "path". The returned struct contains geometry information. Missing fields are returned as "-1" (for numeric fields) or empty string. This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". xfs-info xfs-info pathordevice "pathordevice" is a mounted XFS filesystem or a device containing an XFS filesystem. This command returns the geometry of the filesystem. The returned struct contains geometry information. Missing fields are returned as "-1" (for numeric fields) or empty string. xfs-repair xfs-repair device [forcelogzero:true|false] [nomodify:true|false] [noprefetch:true|false] [forcegeometry:true|false] [maxmem:N] [ihashsize:N] [bhashsize:N] [agstride:N] [logdev:..] [rtdev:..] Repair corrupt or damaged XFS filesystem on "device". The filesystem is specified using the "device" argument which should be the device name of the disk partition or volume containing the filesystem. If given the name of a block device, "xfs_repair" will attempt to find the raw device associated with the specified block device and will use the raw device instead.
Regardless, the filesystem to be repaired must be unmounted, otherwise, the resulting filesystem may be inconsistent or corrupt. The returned status indicates whether filesystem corruption was detected (returns 1) or was not detected (returns 0). This command has one or more optional arguments. See "OPTIONAL ARGUMENTS". zegrep zegrep regex path This calls the external "zegrep" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. zegrepi zegrepi regex path This calls the external "zegrep -i" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. zero zero device This command writes zeroes over the first few blocks of "device". How many blocks are zeroed isn't specified (but it's not enough to securely wipe the device). It should be sufficient to remove any partition tables, filesystem superblocks and so on. If blocks are already zero, then this command avoids writing zeroes. This prevents the underlying device from becoming non-sparse or growing unnecessarily. zero-device zero-device device This command writes zeroes over the entire "device". Compare with "zero" which just zeroes the first few blocks of a device. If blocks are already zero, then this command avoids writing zeroes. This prevents the underlying device from becoming non-sparse or growing unnecessarily. 
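Both "zero" and "zero-device" skip blocks that are already zero, so an already-sparse image is not dirtied. A sketch of that read-before-write pattern in pure Python, operating on a temporary file rather than a real block device:

```python
import tempfile

BLOCK = 4096
zeroes = b"\0" * BLOCK

with tempfile.TemporaryFile() as f:
    # First block dirty, second block already zero.
    f.write(b"\xff" * BLOCK + zeroes)

    writes = 0
    for i in range(2):
        f.seek(i * BLOCK)
        if f.read(BLOCK) != zeroes:
            # Only rewrite a block that is not already zero; this is
            # what keeps a sparse backing file from being filled in.
            f.seek(i * BLOCK)
            f.write(zeroes)
            writes += 1

    assert writes == 1          # the already-zero block was skipped
    f.seek(0)
    assert f.read(2 * BLOCK) == zeroes * 2
```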
zero-free-space zero-free-space directory Zero the free space in the filesystem mounted on "directory". The filesystem must be mounted read-write. The filesystem contents are not affected, but any free space in the filesystem is freed. Free space is not "trimmed". You may want to call "fstrim" either as an alternative to this, or after calling this, depending on your requirements. zerofree zerofree device This runs the zerofree program on "device". This program claims to zero unused inodes and disk blocks on an ext2/3 filesystem, thus making it possible to compress the filesystem more effectively. You should not run this program if the filesystem is mounted. It is possible that using this program can damage the filesystem or data on the filesystem. zfgrep zfgrep pattern path This calls the external "zfgrep" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. zfgrepi zfgrepi pattern path This calls the external "zfgrep -i" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. zfile zfile meth path This command runs "file" after first decompressing "path" using "method". "method" must be one of "gzip", "compress" or "bzip2". Since 1.0.63, use "file" instead which can now process compressed files. This function is deprecated. In new code, use the "file" call instead.
Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. zgrep zgrep regex path This calls the external "zgrep" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. zgrepi zgrepi regex path This calls the external "zgrep -i" program and returns the matching lines. Because of the message protocol, there is a transfer limit of somewhere between 2MB and 4MB. See "PROTOCOL LIMITS" in guestfs(3). This function is deprecated. In new code, use the "grep" call instead. Deprecated functions will not be removed from the API, but the fact that they are deprecated indicates that there are problems with correct use of these functions. #### EXIT STATUS guestfish returns 0 if the commands completed without error, or 1 if there was an error. #### ENVIRONMENT VARIABLES EDITOR The "edit" command uses $EDITOR as the editor. If not set, it uses "vi". FEBOOTSTRAP_KERNEL FEBOOTSTRAP_MODULES When using supermin ≥ 4.1.0, these have been renamed "SUPERMIN_KERNEL" and "SUPERMIN_MODULES". GUESTFISH_DISPLAY_IMAGE The "display" command uses $GUESTFISH_DISPLAY_IMAGE to display images. If not set, it uses display(1). GUESTFISH_INIT Printed when guestfish starts. See "PROMPT". GUESTFISH_OUTPUT Printed before guestfish output. See "PROMPT". GUESTFISH_PID Used with the --remote option to specify the remote guestfish process to control. See section "REMOTE CONTROL GUESTFISH OVER A SOCKET". GUESTFISH_PS1 Set the command prompt. See "PROMPT". GUESTFISH_RESTORE Printed before guestfish exits. See "PROMPT".
HEXEDITOR The "hexedit" command uses $HEXEDITOR as the external hex editor. If not specified, the external hexedit(1) program is used. HOME If compiled with GNU readline support, various files in the home directory can be used. See "FILES". LIBGUESTFS_APPEND Pass additional options to the guest kernel. LIBGUESTFS_ATTACH_METHOD This is the old way to set "LIBGUESTFS_BACKEND". LIBGUESTFS_BACKEND Choose the default way to create the appliance. See "guestfs_set_backend" in guestfs(3). LIBGUESTFS_BACKEND_SETTINGS A colon-separated list of backend-specific settings. See "BACKEND" in guestfs(3), "BACKEND SETTINGS" in guestfs(3). LIBGUESTFS_CACHEDIR The location where libguestfs will cache its appliance, when using a supermin appliance. The appliance is cached and shared between all handles which have the same effective user ID. If "LIBGUESTFS_CACHEDIR" is not set, then "TMPDIR" is used. If "TMPDIR" is not set, then "/var/tmp" is used. See also "LIBGUESTFS_TMPDIR", "set-cachedir". LIBGUESTFS_DEBUG Set "LIBGUESTFS_DEBUG=1" to enable verbose messages. This has the same effect as using the -v option. LIBGUESTFS_HV Set the default hypervisor (usually qemu) binary that libguestfs uses. If not set, then the qemu which was found at compile time by the configure script is used. LIBGUESTFS_MEMSIZE Set the memory allocated to the qemu process, in megabytes. For example: LIBGUESTFS_MEMSIZE=700 LIBGUESTFS_PATH Set the path that guestfish uses to search for kernel and initrd.img. See the discussion of paths in guestfs(3). LIBGUESTFS_QEMU This is the old way to set "LIBGUESTFS_HV". LIBGUESTFS_TMPDIR The location where libguestfs will store temporary files used by each handle. If "LIBGUESTFS_TMPDIR" is not set, then "TMPDIR" is used. If "TMPDIR" is not set, then "/tmp" is used. See also "LIBGUESTFS_CACHEDIR", "set-tmpdir". LIBGUESTFS_TRACE Set "LIBGUESTFS_TRACE=1" to enable command traces. PAGER The "more" command uses $PAGER as the pager. If not set, it uses "more".
PATH Libguestfs and guestfish may run some external programs, and rely on $PATH being set to a reasonable value. If using the libvirt backend, libvirt will not work at all unless $PATH contains the path of qemu/KVM. SUPERMIN_KERNEL SUPERMIN_MODULES These two environment variables allow the kernel that libguestfs uses in the appliance to be selected. If $SUPERMIN_KERNEL is not set, then the most recent host kernel is chosen. For more information about kernel selection, see supermin(1). This feature is only available in supermin / febootstrap ≥ 3.8. TMPDIR See "LIBGUESTFS_CACHEDIR", "LIBGUESTFS_TMPDIR". #### FILES $XDG_CONFIG_HOME/libguestfs/libguestfs-tools.conf $HOME/.libguestfs-tools.rc $XDG_CONFIG_DIRS/libguestfs/libguestfs-tools.conf /etc/libguestfs-tools.conf These configuration files control the default read-only or write mode (--ro or --rw). See libguestfs-tools.conf(5). $HOME/.guestfish If compiled with GNU readline support, then the command history is saved in this file. $HOME/.inputrc /etc/inputrc If compiled with GNU readline support, then these files can be used to configure readline. To write rules which only apply to guestfish, use: $if guestfish ... $endif Variables that you can set in inputrc that change the behaviour of guestfish in useful ways include: completion-ignore-case (default: on) By default, guestfish will ignore case when tab-completing paths on the disk. Use: set completion-ignore-case off to make guestfish case sensitive. test1.img test2.img (etc) When using the -N or --new option, the prepared disk or filesystem will be created in the file "test1.img" in the current directory. The second use of -N will use "test2.img" and so on. Any existing file with the same name will be overwritten. You can use a different filename by using the "filename=" prefix.
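For example, the case-sensitivity behaviour described above can be scoped to guestfish alone with an inputrc fragment like the following (a sketch; the `$if guestfish` conditional and the `completion-ignore-case` variable are as documented above):

```
$if guestfish
set completion-ignore-case off
$endif
```

Place this in $HOME/.inputrc or /etc/inputrc; other readline applications are unaffected by the conditional block.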
#### SEE ALSO

guestfs(3), http://libguestfs.org/, virt-alignment-scan(1), virt-builder(1), virt-cat(1), virt-copy-in(1), virt-copy-out(1), virt-customize(1), virt-df(1), virt-diff(1), virt-edit(1), virt-filesystems(1), virt-inspector(1), virt-list-filesystems(1), virt-list-partitions(1), virt-ls(1), virt-make-fs(1), virt-rescue(1), virt-resize(1), virt-sparsify(1), virt-sysprep(1), virt-tar(1), virt-tar-in(1), virt-tar-out(1), virt-win-reg(1), libguestfs-tools.conf(5), display(1), hexedit(1), supermin(1).

#### AUTHORS

Richard W.M. Jones ("rjones at redhat dot com")

Copyright (C) 2009-2014 Red Hat Inc.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

#### BUGS

To get a list of bugs against libguestfs, use this link:

https://bugzilla.redhat.com/buglist.cgi?component=libguestfs&product=Virtualization+Tools

To report a new bug against libguestfs, use this link:

https://bugzilla.redhat.com/enter_bug.cgi?component=libguestfs&product=Virtualization+Tools

When reporting a bug, please supply:

· The version of libguestfs.
· Where you got libguestfs (eg. which Linux distro, compiled from source, etc)
· Describe the bug accurately and give a way to reproduce it.
· Run libguestfs-test-tool(1) and paste the complete, unedited output into the bug report.

All copyrights belong to their respective owners. Other content (c) 2014-2018, GNU.WIKI.
https://dustingmixon.wordpress.com/2017/10/06/monte-carlo-approximation-certificates-for-k-means-clustering/
# Monte Carlo approximation certificates for k-means clustering

This week, I visited Afonso Bandeira at NYU to give a talk in the MaD seminar on the semidefinite relaxation of k-means. Here are the slides. The last part of the talk is very new; I worked it out with Soledad Villar while she visited me a couple weeks ago, and our paper just hit the arXiv. In this blog entry, I’ll briefly summarize the main idea of the paper.

Suppose you are given data points $\{x_i\}_{i\in T}\subseteq\mathbb{R}^m$, and you are tasked with finding the partition $C_1\sqcup\cdots\sqcup C_k=T$ that minimizes the k-means objective

$\displaystyle{\frac{1}{|T|}\sum_{t\in[k]}\sum_{i\in C_t}\bigg\|x_i-\frac{1}{|C_t|}\sum_{j\in C_t}x_j\bigg\|^2\qquad(T\text{-IP})}$

(Here, we normalize the objective by $|T|$ for convenience later.) To do this, you will likely run MATLAB’s built-in implementation of k-means++, which randomly selects $k$ of the data points (with an intelligent choice of random distribution), and then uses these data points as proto-centroids to initialize Lloyd’s algorithm. In practice, this works very well: after running it a few times, you generally get a very nice clustering. But how do you know when to stop looking for an even better clustering?

Not only does k-means++ work well in practice, it comes with a guarantee: the initial clustering has random k-means value $W$ such that

$\displaystyle{\mathrm{val}(T\text{-IP})\geq \frac{1}{8(\log k+2)}\cdot \mathbb{E}W.}$

As such, you can compute the initial value of k-means++ for multiple trials to estimate this lower bound and produce an approximation ratio of sorts. Unfortunately, this ratio can be rather poor. For example, running k-means++ on the MNIST training set of 60,000 handwritten digits produces a clustering of value 39.22, but the above lower bound is about 2.15. So, who knows? Perhaps there’s another clustering out there that’s 10 times better!
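The k-means++ guarantee above turns into a quick numeric certificate: average the initial (pre-Lloyd) values $W$ over several runs and divide by $8(\log k+2)$. A minimal Python sketch (the function name is mine; the trial values would come from your own k-means++ runs):

```python
import math

def kmeanspp_certificate(initial_values, k):
    """Lower bound on val(T-IP) from the k-means++ guarantee:
    val(T-IP) >= E[W] / (8 (log k + 2)), with E[W] estimated by
    the sample mean of the observed initial values W."""
    ew = sum(initial_values) / len(initial_values)
    return ew / (8 * (math.log(k) + 2))
```

Plugging in the MNIST numbers quoted above, a bound of about 2.15 corresponds to an estimated $\mathbb{E}W$ of roughly 74, since $8(\log 10 + 2)\approx 34.4$.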
Actually, there isn’t, and our paper provides a fast algorithm to demonstrate this. What you’d like to do is solve the k-means SDP, that is, minimize

$\displaystyle{\frac{1}{2|T|}\mathrm{tr}(DX)\quad\text{subject to}\quad X1=1,~\mathrm{tr}(X)=k,~X\geq0,~X\succeq0\qquad(T\text{-SDP})}$

where $D$ is the $T\times T$ matrix whose $(i,j)$th entry is $\|x_i-x_j\|^2$. Indeed, $\mathrm{val}(T\text{-SDP})\leq\mathrm{val}(T\text{-IP})$ since $X=\sum_{t\in[k]}\frac{1}{|C_t|}1_{C_t}1_{C_t}^\top$ is feasible in $(T\text{-SDP})$ with the same value as $\{C_t\}_{t\in[k]}$ in $(T\text{-IP})$. Unfortunately, solving the SDP is far slower than k-means++, and so another idea is necessary.

As an alternative, select $s$ small and draw $S$ uniformly from $\binom{T}{s}$. Then it turns out (and is not hard to show) that

$\mathbb{E}\mathrm{val}(S\text{-SDP})\leq\mathrm{val}(T\text{-IP}).$

As such, one may quickly compute independent instances of $\mathrm{val}(S\text{-SDP})$ and then conduct an appropriate hypothesis test to obtain a high-confidence lower bound on $\mathbb{E}\mathrm{val}(S\text{-SDP})$. With this, you can improve k-means++’s MNIST lower bound from 2.15 to around 37. Furthermore, for a mixture of Gaussians, the necessary subsample size $s$ depends only on $m$ and $k$, rather than the number of data points. In particular, if you have more than a million points (say), you can use our method to compute a good lower bound faster than k-means++ can even cluster. (!)

## 2 thoughts on “Monte Carlo approximation certificates for k-means clustering”

1. Jake says:

   For me, one exciting thing about this new lower bound is that it might be useful for quickly estimating the best choice for k, i.e. look for an elbow in the lower bounds, and hope that it’s near the elbow in the real cluster scores.

   1. Yes, this is something Afonso mentioned to me just yesterday. I suspect a theorem can be proved along these lines…
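The subsampling certificate described in the post can be sketched as follows. Here `subsample_value` stands in for solving the k-means SDP on the s-point subsample (which needs an actual SDP solver), and the one-sided bound uses a normal approximation with z = 2.33 (roughly 99% confidence) rather than the paper's exact hypothesis test; all names are my own:

```python
import random
import statistics

def monte_carlo_lower_bound(points, s, trials, subsample_value, z=2.33):
    """One-sided confidence lower bound on E[val(S-SDP)], which in
    turn lower-bounds val(T-IP).  subsample_value(sample) should
    return the SDP value of an s-point subsample."""
    vals = []
    for _ in range(trials):
        sample = random.sample(points, s)      # S drawn uniformly from (T choose s)
        vals.append(subsample_value(sample))
    mean = statistics.mean(vals)
    sd = statistics.stdev(vals)
    # normal-approximation lower confidence bound on the mean
    return mean - z * sd / (trials ** 0.5)
```

The margin shrinks like 1/sqrt(trials), so tightening the certificate is just a matter of running more (cheap, small) subsample solves.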
https://scicomp.stackexchange.com/questions/19353/how-to-change-one-bit-of-a-32-bit-integer-in-c
# How to change one bit of a 32 bit integer in C

I have three 32-bit integers a, b, c. I want to make the 10th bit of a = (23rd bit of b) xor (4th bit of c) without disturbing the other bits of a. How can I do this in the C programming language? a can be zero also. In that case I consider a = 00...0, 32 zeros.

• I'm voting to close this question as off-topic because it's really a question for Stack Overflow. – Bill Barth Apr 13 '15 at 15:19
• stackoverflow, per my understand, is for when the OP has code the needs to be debugged – user3629249 Apr 13 '15 at 15:25
• @user3629249: That is simply not true and the question is off-topic for SciComp.StackExchange.com. See this highly-rated question and its answers on StackOverflow: How do you set, clear and toggle a single bit in C/C++? – horchler Apr 13 '15 at 19:17
• I agree with horchler. Stack Exchange policy is not to migrate questions that already have accepted answers, even if the question is off-topic. Therefore, I will close the question and add a post notification to explain that this question is off-topic. – Geoff Oxberry Apr 13 '15 at 22:51

// 10th bit of a = (23rd bit of b) xor (4th bit of c)
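One standard read-modify-write answer, assuming 0-indexed bit positions (shift by 9, 22 and 3 instead if the positions are meant 1-indexed); the helper name is my own:

```c
#include <stdint.h>

/* Set bit 10 of a to (bit 23 of b) XOR (bit 4 of c),
 * leaving every other bit of a untouched. */
uint32_t set_bit10(uint32_t a, uint32_t b, uint32_t c)
{
    uint32_t bit = ((b >> 23) ^ (c >> 4)) & 1u;   /* the new bit value */
    return (a & ~(1u << 10)) | (bit << 10);       /* clear, then set   */
}
```

Clearing first and OR-ing the computed bit in handles both cases (bit becomes 0 or 1) without branching.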
https://chiclittlehoney.com/2017/04/05/just-engaged-essentials/?replytocom=1843
# Just Engaged Essentials If you haven’t already heard, I’m engaged!  And of course, in true blogger style, I had to have EVERYTHING  engaged/ bride-to-be/miss-to-mrs related.  Like everything. I was able to restrain myself a little by narrowing my favorites down to an eight-part list that includes items such as tumblers and mugs (because I have an addiction), shirts, journals, and other frivolous items that are more for fun than anything else.  As I start planning my wedding and assembling my bridesmaid squad, I’ll have these adorable little reminders that I’m about to marry the love of my life! Btw, the official wedding hashtag is #WeddingWhitmire.  You know, because that’s important. 1. Ring ice tray (so your champagne – or water – can match your ring) \\ 2. Disco ball tumbler (because this is a party!) \\ 3. Ring drink floats (for time by the pool to take away the stress of wedding planning) \\ 4. Bride-to-Be Book (to preserve your memories for years to come) \\ 5. Miss to Mrs. tumbler (it’s Kate Spade… do I need to say anything else?) \\ 6. Bride tee (every bride needs at least one!) \\ 7. I’m Getting Meowied mug  (because the cuteness) \\ 8. Mr. & Mrs. ring tray (so you don’t lose that precious ring) ## 6 thoughts on “Just Engaged Essentials” 1. Congrats on the engagement! I didn’t even know they made products celebrating this kinda thing, so cool. Alysse lysseonlife.wordpress.com Like 2. AH all this stuff is too cute! I am so excited for even more pre-wedding post 🙂 Congrats again love! xoxo, Lauren Lindmark dailydoseofcharm.com Like 3. Darby says: You know me, most everything that needs to written down can be done on your phone! But, I have found that looking back at how I organized it all 27 years ago, i.e. a planner, is so special . So excited about this wedding !!! Darby oxox Like This site uses Akismet to reduce spam. Learn how your comment data is processed.
https://math.stackexchange.com/questions/2315953/the-gcd-and-the-lcm
# The GCD and The LCM [closed]

Find the GCD and the LCM of 24, 48 and 96, then compare the product of those numbers with the product of the GCD and the LCM.

## closed as off-topic by Henrik, lulu, kingW3, Davide Giraudo, zz20s, Jun 9 '17 at 13:34

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Henrik, lulu, kingW3, Davide Giraudo, zz20s

If this question can be reworded to fit the rules in the help center, please edit the question.

• Sounds straight forward, what have you tried? – lulu Jun 9 '17 at 11:38
• Done. What should I do now? – Henrik Jun 9 '17 at 11:38
• You should post it as your answer @Henrik . – Harsh Kumar Jun 9 '17 at 11:39
• @Henrik If this question gets closed because of lack of context, the system will close it in 30 days (with its current 3 downvotes) and reverse any reputation gains from this question. – Toby Mak Jun 9 '17 at 11:50
• Contest math? It would be an assignment for primary school students. – edm Jun 9 '17 at 11:51

Although this is not a site for homework questions, maybe you are unfamiliar with the algorithm. Through prime factorization:

$$24=2^3\cdot 3$$
$$48=2^4\cdot 3$$
$$96=2^5\cdot 3$$

$$\gcd(24,48,96) = \gcd(2^3\cdot 3,2^4\cdot 3,2^5\cdot 3) = 2^3\cdot 3 = 24$$

Following the same method:

$$\text{lcm}(24,48,96) = 96$$

Then, comparing the two products:

$$\gcd\cdot\text{lcm} = 96 \cdot 24 < 96 \cdot 24 \cdot 48$$
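The same computation can be checked in Python with the standard library (the pairwise lcm helper is mine; a built-in `math.lcm` exists only in newer Python versions):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    # lcm via the gcd identity: lcm(a, b) * gcd(a, b) == a * b
    return a * b // gcd(a, b)

nums = [24, 48, 96]
g = reduce(gcd, nums)            # gcd of all three numbers
l = reduce(lcm, nums)            # lcm of all three numbers
product_of_nums = 24 * 48 * 96
product_gcd_lcm = g * l
```

Note that for two numbers gcd * lcm always equals the product, but with three numbers the two products generally differ, as they do here.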
https://www.maths.cam.ac.uk/computing/laptops/print_configs/osxprint108
# Printing on Apple OSX 10.8

For Mac OSX 10.8, there is currently no way to listen to adverts sent out by MATHS printers, and so we instead must add printers manually. There are a few steps involved.

• First, install the printer drivers.
• Second, turn on the Mac's web interface for its CUPS server.
• Third, use the CUPS admin page (on the Mac) to add print queues for the printers you wish to use.

## Turn on the Mac's web interface for its CUPS server

This may already be running, but if not you will need to start it manually from the command line in the Terminal window. To begin, open your web-browser at http://localhost:631/admin. If the CUPS server doesn't have its web interface enabled, you will get an error message "Server internal error" together with a message telling you to start the web interface, which you should do:

sudo cupsctl WebInterface=yes

Revisit the web page http://localhost:631/admin and it should now work correctly. You should now be looking at the Admin page for CUPS on your Mac.

## Add print queues for the printers you wish to use

From the Admin page, click Add Printer. Your web-browser will prompt for a login. You need to use an account on your OSX computer that can administer the computer (this is NOT your Raven account).

Click the radio button beside Internet Printing Protocol (ipp), and then Continue.

In the Connection edit box, type the URI which corresponds to the printer you wish to use. This will be of the form:

ipp://cups-serv.maths.cam.ac.uk:631/printers/printername

You will need to replace printername with the name of the printer you wish to use. For example, to use the printer b1south, use the URI:

ipp://cups-serv.maths.cam.ac.uk:631/printers/b1south

The full list of available printers can be seen by connecting to the CUPS server on http://lapserv.maths.cam.ac.uk:631/printers/. Conveniently, this also lists both the URI and the name of the printer driver, together with other details required at the next step.

Once you have entered the URI, click on the Continue button.
The next page prompts you for a Name for the printer (use the same name specified in the URI, e.g. b1south), and then a Description and Location. You can copy these from the corresponding fields on the page from lapserv.maths, which is what we recommend. You do not need to share the printer, so leave that box unchecked. Once again, click Continue.

On the next screen you will need to provide details on the make and model of the printer. You can again read these from the field from lapserv.maths. Most of our printers are from HP, so we begin by selecting HP as the Make, and then Continue. Now carefully choose the Model. Once you have selected the correct model, click Add Printer.

The final step in this procedure is to set sensible defaults for the new printer. Almost all of our printers use A4 paper, and most have additional trays and duplex units. The Options Installed section is where you tell CUPS about additional features the printer has, such as extra trays, duplex units, additional memory etc. The General tab is the place to declare the page size, and that you like to print double-sided, with binding on the long edge. Once these are all okay, click on Set Default Options (at the bottom of the page).

Finally, verify the settings by printing a test page. There is an option for this from the Maintenance drop-down menu, or print a file from your favourite application.

Rather than using the web interface, you might want to add the printer from the command line. The following may work. If it fails, perhaps revert to using the web interface.

/usr/sbin/lpadmin -p b1south -E \
  -v ipp://cups-serv.damtp.cam.ac.uk:631/printers/b1south \
  -P "/Library/Printers/PPDs/Contents/Resources/HP LaserJet 4000 Series.gz" \
  -D "DAMTP b1 south P4015x" -L "PavB 1st floor" \
  -o PageSize=A4

## The Apple way

Of course, you might also want to use the System Preferences tool to add a printer.
This can be made to work, but doesn't seem to present you with as much choice regarding the default options as you can achieve via CUPS. In brief, the steps are:

1. Bring up the System Preferences tool.
2. Choose Print and Scan.
3. Click on the plus sign to add a printer.
4. Choose IP (as opposed to Default, Fax or Windows).
5. Put an address of cups-serv.damtp.cam.ac.uk:631.
6. Put a queue of printers/b1south (use appropriate printer name).
7. Change the name to b1south.
8. Change the location.
9. Choose the appropriate printer driver.
10. Click Add.
11. It complains that it can't verify the printer. Click Continue.
12. Set the default options.

## Trouble-Shooting

If OSX is experiencing problems with printing (after the above) then try any or all of the following:

• Check the OSX firewall; if it is turned on you can try turning it off to see if that helps, then turn it back on and add an exception.
• Apply the latest OSX updates.
https://www.transtutors.com/questions/9-18-the-wessels-corporation-is-considering-installing-a-new-conveyor-for-materials--1368138.htm
# 9.18 The Wessels Corporation is considering installing a new conveyor for materials warehouse....

9.18 The Wessels Corporation is considering installing a new conveyor for its materials warehouse. The conveyor will have an initial cost of $75,000 and an installation cost of … . Expected benefits of the conveyor are: (a) annual labor cost will be reduced by …, and (b) breakage and other damages from handling will be reduced by $400 per month. The firm's costs are expected to increase as follows: (a) electricity cost will rise by $10 …, and (b) annual repair and maintenance of the conveyor will amount to $900.
http://nanoscale.blogspot.com/2015/05/book-recommendations-stuff-matters-and.html
## Monday, May 18, 2015

### Book recommendations: Stuff Matters and The Disappearing Spoon

I've lamented the lack of good popularizations of condensed matter/solid state physics. I do, however, have recommendations for two relatively recent books about materials and chemistry, which is pretty close.

The Disappearing Spoon, by Sam Kean, is a fun, engaging stroll across the periodic table, exploring the properties of the various chemical elements through the usually fascinating, sometimes funny, occasionally macabre histories of their discoveries and uses. The title references joke spoons made from gallium that would melt (and fall to the bottom of the cup) when used to stir tea. The tone is light and anecdotal, and the history is obscure enough that you haven't heard all the stories before. Very fun.

Stuff Matters, by Mark Miodownik, is similar in spirit, though not quite so historical and containing more physics and materials science. The author is a materials scientist who happens to be a gifted author and popularizer as well. He's done a BBC three-episode series about materials (available here), another BBC series about modern technologies, and a TED lesson about why glass is transparent.

Ted said...
I am always on the lookout for good popular books on these subjects, both for myself and to recommend to friends and family curious about my line of work. Thank you for the recommendations! I cannot resist adding a plug for my favorite popular condensed matter book, The Self-Made Tapestry. Regrettably, it is out of print.

Douglas Natelson said...
Sweet! I'll have to try to find a copy....

friv said...
I am impressed by the details that you have on this article. Thanks

Anonymous said...
How do you explain the basics of band theory to a non-physicist? To an artist, at that?

Anonymous said...
What, no string theory??? Well, then, these scientists books are worthless. I wouldn't put on my shoes without the approval of the string theory con artists.

Anzel said...
Mark Mlodownik wrote the book I had intended to write someday :( Have you read "Periodic Tales" by Hugh Aldersey-Williams? Pretty steadily chemistry, but I quite enjoyed it.

Douglas Natelson said...
Anon@12:29, I'm going to give it a try. Clearly it's difficult, and you have to work by analogy, meaning that you have to sacrifice some accuracy. I will not be attempting to explain Slater determinants.

Anzel, I know what you mean. Haven't read that one - I'll add it to my list.

David Brown said...
slideshow seminar suitable for freshman physics majors: “An Introduction to AMO Physics” by Cass Sackett, UVA
suitable for junior/senior physics majors: "Condensed Matter in a Nutshell" by Gerald D. Mahan, 2011

Anonymous said...
Science popularization (equation free "understanding") is a bunch of bs. If the gov paid me for understanding graduate level classical physics, relativity, and quantum theory then I would be happy to learn it. And no, I don't need to drop down 10's of thousands of $'s for university professors to teach it to me either. Guess what, Doug Natelson, the books are free on the internet. If Doug Natelson was being paid $0 dollas to do science, safe to say he would be science illiterate like the rest of us.

Anonymous said...
David Brown, Why bother providing references to inferior and thus irrelevant knowledge? Didn't you get the memo that only the members of the string theory privatization cult can possibly understand condensed matter as well as high energy physics...or physics related mathematics...or anything including tying your shoes. https://www.blogger.com/comment.g?blogID=22973357&postID=1137531427488191970

Anzel said...
Also just picked up "Rust: The Longest War" by Johnathan Waldman and so far (I'm only 3 chapters in) it's been pretty good.

RVerduzco said...
I really enjoyed Stuff Matters - it's a great and entertaining introduction to materials science.
My wife (she's not a scientist, background in literature) read parts and really enjoyed it. It's a rare book that can appeal to both scientists and non-scientists.
https://crypto.stackexchange.com/questions/37194/password-manager-that-uses-a-mix-of-long-and-short-key-derivation-functions
# Password manager that uses a mix of long and short key derivation functions

I was reading "A Convenient Method for Securely Managing Passwords" (Halderman et al., 2005). In short, the authors say to do the following:

cache = very_long_key_derivation_function(salt, master_password)
site_password = short_key_derivation_function(cache, site_name)

save the cache on disk

where

• very_long_key_derivation_function is some key derivation function tuned to require approx 2 minutes
• short_key_derivation_function is some key derivation function tuned to require 1 second.

This prevents site_a from brute-forcing the secret in order to find the password of site_b (since each try requires 2 minutes), while still allowing the password of a website to be computed quickly. If the cache file is stolen, it is still necessary to guess the master_password to learn the derived passwords.

There are two problems with this method:

• the master_password is used each time the user needs the password of a certain site
• in order to change the master password (it is used often, so it could be seen), all the derived passwords must be changed

I would like to modify the schema in the following way:

k1 = very_long_key_derivation_function(salt, master_password_1)
k2 = short_key_derivation_function(salt, master_password_2)
cache = k1 xor k2

save cache on disk

In this way master_password_1 is used only once (the first time the password manager is used), while master_password_2 is used each time the user needs a password for a site. The derived passwords do not depend on master_password_2.

Assuming that master_password_1 cannot be stolen (since it is used only once), in order to steal a site password both the cache file and master_password_2 are needed. Also, it is possible to change master_password_2 if the user suspects that somebody saw him writing it, by computing:

k1 = very_long_key_derivation_function(salt, master_password_1)
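A runnable sketch of the modified scheme, using PBKDF2 from Python's hashlib as both KDFs. The function names and the iteration counts (scaled far below the 2-minute / 1-second targets so the demo runs fast) are my own choices, not the paper's:

```python
import hashlib

def slow_kdf(salt: bytes, password: str) -> bytes:
    # stand-in for very_long_key_derivation_function (~2 minutes of work in the scheme)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def fast_kdf(salt: bytes, password: str) -> bytes:
    # stand-in for short_key_derivation_function (~1 second of work in the scheme)
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 1_000)

def make_cache(salt: bytes, mp1: str, mp2: str) -> bytes:
    k1 = slow_kdf(salt, mp1)   # master_password_1: used once, then put away
    k2 = fast_kdf(salt, mp2)   # master_password_2: used day-to-day
    return bytes(a ^ b for a, b in zip(k1, k2))

def recover_k1(cache: bytes, salt: bytes, mp2: str) -> bytes:
    # daily use: k1 = cache xor k2, without ever touching master_password_1
    k2 = fast_kdf(salt, mp2)
    return bytes(a ^ b for a, b in zip(cache, k2))
```

Changing master_password_2 then means one slow call with master_password_1 to recompute k1, followed by an xor with the new k2; the per-site passwords, which depend only on k1, stay unchanged.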
https://www.cs.swarthmore.edu/~meeden/cs21/s16/cs21labs/lab10.php
# CS21 Lab 10: Recursion

Due Saturday Night, April 16

Run update21, if you haven't already, to create the cs21/labs/10 directory.

1. InsideOut

Write a function that uses recursion to turn a string "inside-out". Your function should first pull out the middle character of the string, then the middle character from the remaining string, and so on and so forth, until there is only 1 or fewer characters left. The recursion also puts the pulled-out characters together, in the order they were pulled out.

Here's a quick example, assuming the initial string is "CAT":

• pull out the middle letter: "A", so the remaining string is "CT"
• given the string "CT", pull out the middle letter. Since there is no true middle letter, use the one on the right (the "T")
• this leaves us with just the "C", so putting them all together gives "ATC"

Below are a few sample runs. Include a short main() function to ask the user for a string, and then send that string to your insideout(S) function. Your insideout(S) function should return the inside-out string back to main() for printing.

    $ python insideout.py
    word: swarthmore!
    hmtorraew!s

    $ python insideout.py
    word: ABCDEF
    DCEBFA

    $ python insideout.py
    word: 12345
    34251

In this last example, the initial middle character is the "3". After that is pulled out, the remaining string is "1245". Note the function as written chooses the "4", leaving the string "125". The middle character in that string is the "2", etc.

#### Extra Challenge

Are there any words that, after being turned "inside-out", are still valid English words? If so, how many?

2. Cubes

In a file called cubes.py, write a function called drawCube(pt,size,win) that draws what looks like a three-dimensional cube, given a corner point (pt), the size of the cube, and a graphics window (win) for the drawing. What your function should really do is draw three 4-sided Polygons, one for each side of the cube (see picture below).
For example, the right side of the cube would be made of points pt, p1, p6, and p5.

Hints and requirements:

• use the clone() and move() methods to create the other points based on the given initial point (see diagram)
• note the math needed in the diagram below to calculate how far to move points p1 and p3 (if cloning pt)
• use from math import * to import the sin(), cos(), and radians() functions
• make each side of the cube a different shade of one color (e.g., blue, dark blue, light blue), where the same side is always the same color (e.g., the left side is always the dark side)
• make sure everything scales, so your function works no matter what size and point are given
• include a simple main() function to test your drawCube() function

Now add a third function, recursiveCubes(pt,size,win), that uses recursion (and calls drawCube()) to draw the image below. Modify your main() function to call recursiveCubes(pt,size,win), instead of drawCube(pt,size,win).

3. Collatz Conjecture

Here's a fun function to compute. Known as part of the Collatz Conjecture, this function says, starting with any positive integer:

• If the number is even, divide it by two.
• If the number is odd, multiply it by 3 and add 1.

This process is repeated over and over, using the result of one calculation as the starting point of the next, until we reach the number 1. Here's a quick example, using a starting number of 5:

• 5 is odd, so next number is 5*3 + 1 = 16
• 16 is even, so next number is 16/2 = 8
• 8 is even, so next number is 8/2 = 4
• 4 is even, so next number is 4/2 = 2
• 2 is even, so next number is 2/2 = 1, and we are done

Write a recursive function called collatz(n) that, given a positive integer, prints each number in the sequence, and returns how many steps were needed to reach 1. Include in your program a simple main() function that asks the user for the starting number, calls the recursive function, and prints the returned number of steps.
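One possible sketch of the collatz(n) recursion just described (an illustrative sample solution, not the only valid structure; it prints each value on the way down and counts the steps as the recursive calls return):

```python
def collatz(n):
    """Print each number in the Collatz sequence starting at n and
    return how many steps were needed to reach 1."""
    print(n)
    if n == 1:                 # base case: already at 1, zero steps left
        return 0
    if n % 2 == 0:             # even: divide by two
        return 1 + collatz(n // 2)
    return 1 + collatz(3 * n + 1)  # odd: multiply by 3 and add 1
```

For example, collatz(5) prints 5, 16, 8, 4, 2, 1 and returns 5, matching the sample run below.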
For example, if the user enters n=5, your program should display:

    $ python collatz.py
    n: 5
    5
    16
    8
    4
    2
    1
    num steps = 5 for n=5

Here are a few more examples:

    $ python collatz.py
    n: 37
    37
    112
    56
    28
    14
    7
    22
    11
    34
    17
    52
    26
    13
    40
    20
    10
    5
    16
    8
    4
    2
    1
    num steps = 21 for n=37

    $ python collatz.py
    n: 24
    24
    12
    6
    3
    10
    5
    16
    8
    4
    2
    1
    num steps = 10 for n=24

#### Extra Challenge

Use the following plotting function to display the number of steps needed vs n for all n from 1 to 10000, like this graph. In the function below, x and y are parallel lists (x contains all values of n, and y contains the number of steps needed for a given value of n).

    import pylab

    def plot(x,y):
        pylab.plot(x, y, 'go')
        pylab.grid(True)
        pylab.xlabel('n')
        pylab.ylabel('steps needed')
        pylab.title('collatz plot: steps needed vs n')
        pylab.show()

4. Ruler

In a file called ruler.py, write a recursive function to display a set of lines like a ruler. Your function should have only 3 parameters: the top point of the center line, the size of the center line, and a graphics window to draw the lines. Include a short main() function to test your recursive ruler function.

#### Extra Challenge

The image below uses the x-coordinate to determine the color of the line. This is more easily accomplished using a Hue-Saturation-Value (HSV) color model, where varying the hue corresponds to the common "rainbow" colors (ROYGBIV). Research an HSV to RGB transformation and use it (along with color_rgb(R,G,B)) to make a rainbow effect based on the x-coordinate.

Submit

Once you are satisfied with your program, hand it in by typing handin21 in a terminal window.
http://www.nag.com/numeric/fl/nagdoc_fl24/html/G13/g13bbf.html
# NAG Library Routine Document: G13BBF

Note: before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details.

## 1  Purpose

G13BBF filters a time series by a transfer function model.

## 2  Specification

    SUBROUTINE G13BBF (Y, NY, MR, NMR, PAR, NPAR, CY, WA, IWA, B, NB, IFAIL)
    INTEGER            NY, MR(NMR), NMR, NPAR, IWA, NB, IFAIL
    REAL (KIND=nag_wp) Y(NY), PAR(NPAR), CY, WA(IWA), B(NB)

## 3  Description

From a given series $y_1, y_2, \dots, y_n$ a new series $b_1, b_2, \dots, b_n$ is calculated using a supplied (filtering) transfer function model according to the equation

$$b_t = \delta_1 b_{t-1} + \delta_2 b_{t-2} + \cdots + \delta_p b_{t-p} + \omega_0 y_{t-b} - \omega_1 y_{t-b-1} - \cdots - \omega_q y_{t-b-q}. \qquad (1)$$

As in the use of G13BAF, large transient errors may arise in the early values of $b_t$ due to ignorance of $y_t$ for $t<0$, and two possibilities are allowed.

(i) Equation (1) is applied from $t = 1+b+q, \dots, n$, so that all terms in $y_t$ on the right-hand side of (1) are known, the unknown set of values $b_t$ for $t = b+q, \dots, b+q+1-p$ being taken as zero.

(ii) The unknown values of $y_t$ for $t \le 0$ are estimated by backforecasting exactly as for G13BAF.

## 4  References

Box G E P and Jenkins G M (1976) Time Series Analysis: Forecasting and Control (Revised Edition) Holden–Day

## 5  Parameters

1: Y(NY) – REAL (KIND=nag_wp) array (Input)

On entry: the $Q_y'$ backforecasts, starting with the backforecast at time $1-Q_y'$ and ending with the backforecast at time $0$, followed by the time series starting at time $1$, where $Q_y' = \mathrm{MR}(6) + \mathrm{MR}(9) \times \mathrm{MR}(10)$. If there are no backforecasts, either because the ARIMA model for the time series is not known or because it is known but has no moving average terms, then the time series starts at the beginning of Y.
2: NY – INTEGER (Input)

On entry: the total number of backforecasts and time series data points in array Y.

Constraint: $\mathrm{NY} \ge \max(1+Q_y', \mathrm{NPAR})$.

3: MR(NMR) – INTEGER array (Input)

On entry: the orders vector for the filtering transfer function model followed by the orders vector for the ARIMA model for the time series if the latter is known. The transfer function model orders appear in the standard form $(b,q,p)$ as given in the G13 Chapter Introduction. Note that if the ARIMA model for the time series is supplied then the routine will assume that the first $Q_y'$ values of the array Y are backforecasts.

Constraints: the filtering model is restricted in the following way:

• $\mathrm{MR}(1), \mathrm{MR}(2), \mathrm{MR}(3) \ge 0$.

The ARIMA model for the time series is restricted in the following ways:

• $\mathrm{MR}(k) \ge 0$, for $k = 4, 5, \dots, 10$;
• if $\mathrm{MR}(10) = 0$, $\mathrm{MR}(7) + \mathrm{MR}(8) + \mathrm{MR}(9) = 0$;
• if $\mathrm{MR}(10) \ne 0$, $\mathrm{MR}(7) + \mathrm{MR}(8) + \mathrm{MR}(9) \ne 0$;
• $\mathrm{MR}(10) \ne 1$.

4: NMR – INTEGER (Input)

On entry: the number of values supplied in the array MR. It takes the value 3 if no ARIMA model for the time series is supplied, but otherwise it takes the value 10. Thus NMR acts as an indicator as to whether backforecasting can be carried out.

Constraint: $\mathrm{NMR} = 3$ or $10$.

5: PAR(NPAR) – REAL (KIND=nag_wp) array (Input)

On entry: the parameters of the filtering transfer function model followed by the parameters of the ARIMA model for the time series. In the transfer function model the parameters are in the standard order of MA-like followed by AR-like operator parameters.
In the ARIMA model the parameters are in the standard order of non-seasonal AR and MA followed by seasonal AR and MA.

6: NPAR – INTEGER (Input)

On entry: the total number of parameters held in array PAR.

Constraints:

• if $\mathrm{NMR} = 3$, $\mathrm{NPAR} = \mathrm{MR}(2) + \mathrm{MR}(3) + 1$;
• if $\mathrm{NMR} = 10$, $\mathrm{NPAR} = \mathrm{MR}(2) + \mathrm{MR}(3) + 1 + \mathrm{MR}(4) + \mathrm{MR}(6) + \mathrm{MR}(7) + \mathrm{MR}(9)$.

7: CY – REAL (KIND=nag_wp) (Input)

On entry: if the ARIMA model is known (i.e., $\mathrm{NMR} = 10$), CY must specify the constant term of the ARIMA model for the time series. If this model is not known (i.e., $\mathrm{NMR} = 3$) then CY is not used.

8: WA(IWA) – REAL (KIND=nag_wp) array (Workspace)

9: IWA – INTEGER (Input)

On entry: the dimension of the array WA as declared in the (sub)program from which G13BBF is called.

Constraints: let $K = \mathrm{MR}(3) + \mathrm{MR}(4) + \mathrm{MR}(5) + (\mathrm{MR}(7) + \mathrm{MR}(8)) \times \mathrm{MR}(10)$; then

• if $\mathrm{NMR} = 3$, $\mathrm{IWA} \ge \mathrm{MR}(1) + \mathrm{NPAR}$;
• if $\mathrm{NMR} = 10$, $\mathrm{IWA} \ge \mathrm{MR}(1) + \mathrm{NPAR} + K \times (K+2)$.

10: B(NB) – REAL (KIND=nag_wp) array (Output)

On exit: the filtered output series. If the ARIMA model for the time series was known, and hence $Q_y'$ backforecasts were supplied in Y, then B contains $Q_y'$ ‘filtered’ backforecasts followed by the filtered series. Otherwise, the filtered series begins at the start of B just as the original series began at the start of Y. In either case, if the value of the series at time $t$ is held in $\mathrm{Y}(t)$, then the filtered value at time $t$ is held in $\mathrm{B}(t)$.
11: NB – INTEGER (Input)

On entry: the dimension of the array B as declared in the (sub)program from which G13BBF is called. In addition to holding the returned filtered series, B is also used as an intermediate work array if the ARIMA model for the time series is known.

Constraints:

• if $\mathrm{NMR} = 3$, $\mathrm{NB} \ge \mathrm{NY}$;
• if $\mathrm{NMR} = 10$, $\mathrm{NB} \ge \mathrm{NY} + \max(\mathrm{MR}(1) + \mathrm{MR}(2), \mathrm{MR}(3))$.

12: IFAIL – INTEGER (Input/Output)

On entry: IFAIL must be set to $0$, $-1$ or $1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$. When the value $-1$ or $1$ is used it is essential to test the value of IFAIL on exit.

On exit: $\mathrm{IFAIL} = 0$ unless the routine detects an error or a warning has been flagged (see Section 6).

## 6  Error Indicators and Warnings

If on entry $\mathrm{IFAIL} = 0$ or $-1$, explanatory error messages are output on the current error message unit (as defined by X04AAF).
Errors or warnings detected by the routine:

$\mathrm{IFAIL} = 1$

On entry, $\mathrm{NMR} \ne 3$ and $\mathrm{NMR} \ne 10$,
or $\mathrm{MR}(i) < 0$, for $i = 1, 2, \dots, \mathrm{NMR}$,
or $\mathrm{NMR} = 10$ and $\mathrm{MR}(10) = 1$,
or $\mathrm{NMR} = 10$ and $\mathrm{MR}(10) = 0$ and $\mathrm{MR}(7) + \mathrm{MR}(8) + \mathrm{MR}(9) \ne 0$,
or $\mathrm{NMR} = 10$ and $\mathrm{MR}(10) \ne 0$ and $\mathrm{MR}(7) + \mathrm{MR}(8) + \mathrm{MR}(9) = 0$,
or NPAR is inconsistent with the contents of MR,
or WA is too small,
or B is too small.

$\mathrm{IFAIL} = 2$

A supplied model has parameter values which have failed the validity test.

$\mathrm{IFAIL} = 3$

The supplied time series is too short to carry out the requested filtering successfully.

$\mathrm{IFAIL} = 4$

This only occurs when an ARIMA model for the time series has been supplied. The matrix which is used to solve for the starting values for MA filtering is singular.

$\mathrm{IFAIL} = -999$

Internal memory allocation failed.

## 7  Accuracy

Accuracy and stability are high except when the AR-like parameters are close to the invertibility boundary.

## 8  Further Comments

All calculations are performed in basic precision except for one inner product type calculation which on machines of low precision is performed in additional precision. If an ARIMA model is supplied, a local workspace array of fixed length is allocated internally by G13BBF. The total size of this array amounts to $K$ integer elements, where $K$ is the expression defined in the description of the parameter WA. The time taken by G13BBF is roughly proportional to the product of the length of the series and the number of parameters in the filtering model, with an appreciable increase if an ARIMA model is supplied for the time series.

## 9  Example

This example reads a time series of length 296.
It reads one univariate ARIMA $(1,1,0,0,1,1,12)$ model for the series and the $(0,13,12)$ filtering transfer function model. Twelve initial backforecasts are required and these are calculated by a call to G13AJF. The backforecasts are inserted at the start of the series and G13BBF is called to perform the filtering.

### 9.1  Program Text

Program Text (g13bbfe.f90)

### 9.2  Program Data

Program Data (g13bbfe.d)

### 9.3  Program Results

Program Results (g13bbfe.r)
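As an informal illustration only (this is not NAG code; the function name and argument layout are my own, and the zero-initialization follows option (i) of the description), the filtering recursion (1) can be sketched in Python with 0-based indexing:

```python
def transfer_filter(y, delta, omega, b_delay):
    """Apply b_t = delta_1*b_{t-1} + ... + delta_p*b_{t-p}
                 + omega_0*y_{t-b} - omega_1*y_{t-b-1} - ... - omega_q*y_{t-b-q},
    starting once all y terms on the right-hand side are known (option (i));
    earlier values of b are taken as zero. With 0-based arrays, t runs
    from b_delay + q to len(y) - 1."""
    p, q = len(delta), len(omega) - 1
    b = [0.0] * len(y)
    for t in range(b_delay + q, len(y)):
        ar_part = sum(delta[i] * b[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma_part = omega[0] * y[t - b_delay]
        ma_part -= sum(omega[j] * y[t - b_delay - j] for j in range(1, q + 1))
        b[t] = ar_part + ma_part
    return b
```

For instance, with no AR-like terms and omega = [1.0] the filter is the identity, and a single delta = [0.5] turns a unit impulse into the geometric sequence 1, 0.5, 0.25, ...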
http://tex.stackexchange.com/questions/151773/unit-circle-combined-with-angles
# Unit circle combined with angles

I've tried to make a unit circle to explain the angles, but the circle is not in the right place within the defined axes. Am I using the wrong method? Or can't I use the axis definition as in the code? I also want the intersection of the segment and the circle to define sin and cos of the angle (but I think I will find that later with tkz-euclide).

My code:

    \documentclass[11pt,a4paper]{article} % use larger type; default would be 10pt
    \usepackage{tikz}
    \usepackage{tkz-euclide}
    \usetkzobj{all} %% to use all sorts of objects, such as the protractor...
    \usetikzlibrary{calc,intersections,through,backgrounds,snakes}
    \usepackage{pgfplots}
    \pgfplotsset{compat=1.8}
    \usepgfplotslibrary{statistics}
    \begin{document}
    \begin{tikzpicture}
    \begin{axis}%
    [
    grid=major,
    x=50mm,
    y=50mm,
    xmin=-1.1, xmax=1.1,
    xtick={-1,0,1},
    minor xtick={-1,-0.9,...,1},
    xminorgrids = true,
    xlabel={\tiny $x$},
    axis x line=middle,
    ymin=-1.1, ymax=1.1,
    ytick={-1,0,1},
    minor ytick={-1,-0.9,...,1},
    yminorgrids = true,
    ylabel={\scriptsize $y$},
    axis y line=middle,
    no markers,
    samples=100,
    ]
    \end{axis}
    \tkzDefPoint(0,0){A}
    \tkzDrawCircle[R](A,5cm)
    \tkzDefPoint[shift={(0,0)}](0:5.2){B}
    \tkzDefPoint[shift={(0,0)}](50:5.2){C}
    \tkzDefPoint[shift={(0,0)}](130:5.2){D}
    \tkzDrawSegments[color = red, line width = 1pt](A,B A,C)
    \tkzDrawSegments[color = blue, line width = 1pt](A,B A,D)
    \tkzDrawPoints(A)
    \tkzLabelPoints(A)
    \tkzMarkAngle[fill= blue,size=2.5cm, opacity=.4](B,A,D);
    \tkzMarkAngle[fill= red,size=1.5cm, opacity=.7](B,A,C);
    \tkzFindAngle(B,A,C) \tkzGetAngle{angleBAC};
    \FPround\angleBAC\angleBAC{0}
    \tkzLabelAngle[pos = 1](B,A,C){\angleBAC$^\circ$};
    \tkzLabelAngle[pos = 2](B,A,D){\angleBAD$^\circ$};
    \end{tikzpicture}
    \end{document}

I also have the problem that when the angle > 180° it gives the wrong angle, because \tkzGetAngle only works in the interval -180° to +180°.

You sort of ask three quite different questions.
For the first: by default the anchor of a pgfplots axis is set to south west, and the position is set to (0,0) in the coordinate system of the tikzpicture. You can change the position with at={(x,y)}, but as your circle is set around (0,0) that isn't necessary. You just need to add anchor=center to the axis options.

For the second: to get the intersections between the line segments and the circle you can use

    \tkzInterLC[R](A,C)(A,5cm)\tkzGetSecondPoint{CC}
    \tkzInterLC[R](A,D)(A,5cm)\tkzGetSecondPoint{DC}

CC and DC are the intersections. In the code below I've drawn and labeled those points, but I haven't drawn the lines corresponding to the sine and cosine.

    \documentclass[11pt]{standalone} % use larger type; default would be 10pt
    \usepackage{tikz}
    \usepackage{tkz-euclide}
    \usetkzobj{all} %% to use all sorts of objects, such as the protractor...
    \usetikzlibrary{calc,intersections,through,backgrounds,snakes}
    \usepackage{pgfplots}
    \pgfplotsset{compat=1.8}
    \usepgfplotslibrary{statistics}
    \begin{document}
    \begin{tikzpicture}
    \begin{axis}%
    [
    anchor=center, % sets axis anchor to the axis origin
    grid=major,
    x=50mm,
    y=50mm,
    xmin=-1.1, xmax=1.1,
    xtick={-1,0,1},
    minor xtick={-1,-0.9,...,1},
    xminorgrids = true,
    xlabel={\tiny $x$},
    axis x line=middle,
    ymin=-1.1, ymax=1.1,
    ytick={-1,0,1},
    minor ytick={-1,-0.9,...,1},
    yminorgrids = true,
    ylabel={\scriptsize $y$},
    axis y line=middle,
    no markers,
    samples=100,
    ]
    \end{axis}
    \tkzDefPoint(0,0){A}
    \tkzDrawCircle[R](A,5cm)
    \tkzDefPoint[shift={(0,0)}](0:5.2){B}
    \tkzDefPoint[shift={(0,0)}](50:5.2){C}
    \tkzDefPoint[shift={(0,0)}](130:5.2){D}
    \tkzDrawSegments[color = red, line width = 1pt](A,B A,C)
    \tkzDrawSegments[color = blue, line width = 1pt](A,B A,D)
    % Finds the intersections of segments and circle
    \tkzInterLC[R](A,C)(A,5cm)\tkzGetSecondPoint{CC}
    \tkzInterLC[R](A,D)(A,5cm)\tkzGetSecondPoint{DC}
    % draw and label points
    \tkzDrawPoints(A,CC,DC)
    \tkzLabelPoints(A,CC,DC)
    \tkzMarkAngle[fill= blue,size=2.5cm, opacity=.4](B,A,D);
    \tkzMarkAngle[fill= red,size=1.5cm, opacity=.7](B,A,C);
    \tkzFindAngle(B,A,C) \tkzGetAngle{angleBAC};
    \FPround\angleBAC\angleBAC{0}
    \tkzLabelAngle[pos = 1](B,A,C){\angleBAC$^\circ$};
    \tkzLabelAngle[pos = 2](B,A,D){\angleBAD$^\circ$};
    \end{tikzpicture}
    \end{document}

+1. Very pretty solution. I was typing one using \begin{scope}[xshift=5.5cm,yshift=5.5cm]\end{scope} to affect the second part of the code. – Sigur Dec 30 '13 at 10:51
https://ltwork.net/what-are-some-good-songs-to-listen-to-anything-but-rap--2170703
# What are some good songs to listen to (anything but rap)?
https://www.physicsforums.com/threads/doppler-shift.134024/
Doppler shift?

Mindscrape

Alright, I'm trying to derive the Doppler shift using a spacetime diagram (see attached). If we model light pulses then we can derive the distance between the pulses in time, and hence the Doppler shift... right?

So, if we make some light pulses along some sort of time event in the x, cT frame and extend their perpendiculars, then we can make some relations. Here is what I have done:
$$ct=cT\cos(\theta)$$
and so we know that
$$\sin(\theta) = \frac{v}{c}$$ and $$\cos(\theta) = \sqrt{1 - \frac{v^2}{c^2}}$$
so if
$$t=T\cos(\theta)$$
and using kinematics we know the distance between pulses is
$$x=vT\cos(\theta)$$
and the intervals will then be
$$t = T + \frac{x}{c}$$
then with some algebra
$$t=T\gamma\left(1 + \frac{v}{c} \sqrt{1- \frac{v^2}{c^2}}\right)$$
When we take the ratio of the two times, which should be the Doppler shift, we get
$$\frac{t}{T} = \frac{1 + \frac{v}{c} \sqrt{1 - \frac{v^2}{c^2}}}{\sqrt{1- \frac{v^2}{c^2}}}$$
which is really close, but off somehow. Anyone know what went wrong?

Attachments: brehmetrick.pdf (33.5 KB)
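For reference (standard textbook material, not part of the original post), the relativistic Doppler ratio for a receding source should come out as

$$\frac{t}{T} = \sqrt{\frac{1+v/c}{1-v/c}} = \frac{1+\frac{v}{c}}{\sqrt{1-\frac{v^{2}}{c^{2}}}},$$

which differs from the ratio derived above only in the numerator: the standard result has $\frac{v}{c}$ where the derivation produced $\frac{v}{c}\sqrt{1-\frac{v^2}{c^2}}$, i.e., the pulse separation $x$ appears to carry one factor of $\cos\theta$ too many.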
https://www.physicsforums.com/threads/pyrite-roasting-mass-balance-with-chemical-reaction.803174/
# Homework Help: Pyrite roasting -- Mass balance with chemical reaction

1. Mar 14, 2015

### MexChemE

1. The problem statement, all variables and given/known data

A certain pyrite ore contains 85% of FeS2 and 15% of inerts. This ore is introduced into a roasting furnace with 20% excess air, in order to oxidize the FeS2 in the reaction:
$$\textrm{FeS}_2 + \frac{11}{4}\textrm{O}_2 \rightarrow \frac{1}{2}\textrm{Fe}_2 \textrm{O}_3 + 2\textrm{SO}_2$$
The solid product contains 2% FeS2 in mass. Using 100 lb as basis, determine:

a) The chemical equation describing the process using the calculation basis, in mass and mole.
b) Conversion percentage.
c) The volume of the exhaust gases at 300 °C and 1 atm.
d) The amount of solid product obtained.
e) The amount of sulfuric acid which can be formed from the exhaust gases.

2. Relevant equations

Steady state mass balance with chemical reaction:
$$\textrm{In} + \textrm{Generation} = \textrm{Out} + \textrm{Consumption}$$

3. The attempt at a solution

First, I sketched a diagram of the process, as you can see in the attachments. Now, here's my work.

Part a) 85 lb (0.708 lbmol) of FeS2 are being fed into the furnace. So, here's the balanced equation in molar base:
$$0.708\textrm{FeS}_2 + 1.947\textrm{O}_2 \rightarrow 0.354\textrm{Fe}_2 \textrm{O}_3 + 1.416\textrm{SO}_2$$
And mass base:
$$85\textrm{FeS}_2 + 62.3\textrm{O}_2 \rightarrow 56.6\textrm{Fe}_2 \textrm{O}_3 + 90.6\textrm{SO}_2$$

Part b) Now, for part b) I need to know how much FeS2 reacted (in lb). We'll call this quantity "A." We now also know that the O2 required is 1.947 lbmol. Assuming dry air (21% oxygen; 79% nitrogen), the moles of air required are 9.271 lbmol; and since we have 20% excess air, the air fed into the furnace is 11.125 lbmol, of which 2.336 lbmol are oxygen (74.76 lb). Now, we'll analyze the mass exchange occurring with the reaction. We know 85 lb of FeS2 and 74.76 lb of O2 were fed into the furnace.
If A lb of FeS2 reacted, the output streams will be:

(85 − A) lb FeS2
(74.76 − 0.73A) lb O2
(0.67A) lb Fe2O3
(1.07A) lb SO2

We know M3 is composed of 15 lb of inerts plus the unreacted iron sulphide and the ferric oxide produced:
$$M_3=15+(85-A) + 0.67A= 100-0.33A$$
An FeS2 balance (fed = reacted + unreacted in the solid) gives:
$$85 = A + 0.02M_3$$
We now have a 2×2 linear system which we can solve for M3 and A:

A = 83.551 lb
M3 = 72.428 lb

The conversion percentage is given by:
$$\% \textrm{Conversion} = \frac{83.551}{85} \times 100\% = 98.3\%$$

Part c) Now that we know A, we can calculate the moles of exhaust gas, which come to 10.616 lbmol. Assuming ideal behavior, at 300 °C and 1 atm the exhaust gases occupy a volume of 7997.3 ft³.

Part d) The solid product is M3, 72.428 lb, with composition:

(15 lb) inerts
(1.499 lb) FeS2
(55.979 lb) Fe2O3

Part e) I have some doubts about my procedure for this part. We have 1.397 lbmol of SO2, of which only 0.8604 lbmol can react with the limited amount of O2 in the exhaust gases (0.4302 lbmol) to form 0.8604 lbmol of SO3. This SO3 in turn reacts with a stoichiometric amount of water to produce 0.8604 lbmol of H2SO4. Therefore, with an output of 1.397 lbmol of SO2 and 0.4302 lbmol of O2 as the limiting reactant, 0.8604 lbmol of H2SO4 can be produced.

My concerns are mostly focused on part e), but feel free to point out any inconsistencies you may find along the way. Thanks in advance for any input!

Attached: diagrama.png (process diagram)

2. Mar 15, 2015

### Staff: Mentor

It is not clear to me what part e) asks. The process of producing sulfuric acid requires several steps and at least one additional reagent (water). If you can add water, why can't you use more air to oxidize all the sulfur to SO3?

3.
Mar 15, 2015

### MexChemE

You have a good point; if I take the liberty of adding water, I could just add more air and oxidize all 1.397 lbmol of SO2, which by stoichiometry would produce 1.397 lbmol of H2SO4. I originally used the limited amount of oxygen because the problem asked to use the product gases, but actually the only product gas of interest is SO2. Also, the direct reaction SO3 + H2O is not very common in industry; it releases too much heat, as far as I know. So there's that little detail too.

Last edited: Mar 15, 2015
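Not part of the original thread: the part (b)–(c) arithmetic above can be checked with a short Python sketch. Small differences against the thread's 7997.3 ft³ come from intermediate rounding (the thread rounds the mass factors 0.73/0.67/1.07).

```python
# Check of the pyrite-roasting balance (100 lb ore basis), illustrative only.
MW_FES2 = 119.98  # lb/lbmol

fes2_in = 85.0                             # lb FeS2 fed
o2_req = (fes2_in / MW_FES2) * 11 / 4      # lbmol O2, stoichiometric
air_in = (o2_req / 0.21) * 1.20            # lbmol air, 20% excess
o2_in = 0.21 * air_in                      # lbmol O2 fed
n2_in = 0.79 * air_in                      # lbmol N2 (inert)

# Solve 85 = A + 0.02*M3 together with M3 = 100 - 0.33*A  (A = lb FeS2 reacted)
A = (fes2_in - 0.02 * 100) / (1 - 0.02 * 0.33)
M3 = 100 - 0.33 * A
conversion = A / fes2_in * 100

# Exhaust gas: N2 + unreacted O2 + SO2, ideal gas at 300 C and 1 atm
o2_out = o2_in - (A / MW_FES2) * 11 / 4
so2_out = 2 * A / MW_FES2
n_gas = n2_in + o2_out + so2_out
R = 0.7302                                 # ft3*atm/(lbmol*R)
T_R = 300 * 9 / 5 + 491.67                 # 300 C in degrees Rankine
V = n_gas * R * T_R / 1.0                  # ft3

print(round(A, 1), round(M3, 1), round(conversion, 1), round(V))  # 83.6 72.4 98.3 7993
```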
https://chemistry.tutorvista.com/organic-chemistry/organic-chemistry-reactions.html
# Organic Chemistry Reactions

Organic chemistry reactions are easy to understand, and many new pathways based on a given concept are developed continually. Named organic reactions are reactions of particular synthetic importance and broad scope. The list of organic reaction types includes elimination, addition, substitution, condensation, oxidation, reduction, dehydration and hydrolysis; every named reaction in organic synthesis involves one or more of these types. The strategic value of named reactions is that they make the development of organic synthetic pathways easy to understand and build upon.

## Aldol Condensation

In this reaction two carbonyl compounds containing an α-hydrogen condense in the presence of a base to give a β-hydroxy carbonyl compound. This is a characteristic reaction of carbonyl compounds containing an α-hydrogen; hence formaldehyde and benzaldehyde will not give the aldol condensation. The reaction reflects the acidic nature of the hydrogen at the α position of the carbonyl compound. For example, acetaldehyde undergoes aldol condensation to give 3-hydroxybutanal (a β-hydroxy aldehyde), which on dehydration gives crotonaldehyde. This is an excellent synthetic route to α,β-unsaturated aldehydes and ketones. If we take a mixture of acetone and acetaldehyde, we obtain a crossed condensation product in which the acetone readily loses its α-hydrogen. The mechanism involves formation of an enolate (carbanion) by loss of the acidic α-hydrogen, followed by nucleophilic attack on the carbonyl group of another molecule. This is a condensation-type reaction.
## Cannizzaro Reaction

In this reaction, carbonyl compounds without a hydrogen at the α carbon undergo disproportionation in the presence of a base to give an alcohol and the salt of a carboxylic acid. This is the characteristic reaction of compounds without an α-hydrogen; hence acetaldehyde and propanaldehyde will not undergo the Cannizzaro reaction. For example, formaldehyde disproportionates in the presence of sodium hydroxide to give methyl alcohol and sodium formate.

2HCHO + NaOH $\to$ CH3OH + HCOONa

In the crossed reaction of formaldehyde and benzaldehyde, the formaldehyde is preferentially oxidized, so the major products are benzyl alcohol and sodium formate.

## Claisen Reaction

This is the reaction in which two ester molecules condense in the presence of a base to give a β-keto ester plus an alcohol. It is another type of carbon–carbon bond-forming reaction. Instead of two ester molecules, an ester can also be condensed with another carbonyl compound such as acetaldehyde or acetone. For example, two ethyl acetate molecules condense to give ethyl acetoacetate and ethanol. This is a condensation reaction.

CH3-CH2-O-CO-CH3 + CH3-CH2-O-CO-CH3 $\to$ CH3-CH2-O-CO-CH2-CO-CH3 + CH3-CH2-OH

## Rosenmund Reduction

In this reduction an acyl chloride is reduced to an aldehyde over palladium on barium sulphate, with the barium sulphate acting as a catalyst moderator. Its purpose is to reduce the effectiveness of the palladium; otherwise the aldehyde formed would be reduced further to the alcohol. This is an effective way to convert an acyl chloride to an aldehyde. For example, benzoyl chloride is reduced to benzaldehyde.

## Wolff–Kishner Reduction

Here carbonyl compounds such as aldehydes and ketones are reduced directly to alkanes with hydrazine and a suitable base such as sodium ethoxide or sodium hydroxide. The mechanism involves formation of a hydrazone, followed by deprotonation and evolution of nitrogen to give the desired alkane. For example, acetone on Wolff–Kishner reduction gives propane.
## Gabriel Phthalimide Synthesis

This is an effective way to prepare primary amines. Phthalimide is converted to an N-alkyl phthalimide, which on base hydrolysis gives the primary amine. The complete process involves the following steps.

1. Converting phthalimide to potassium phthalimide.
2. Converting potassium phthalimide to the N-alkyl phthalimide (e.g. N-methyl phthalimide).
3. Hydrolysis of the N-alkyl phthalimide to give the desired primary amine.

This method will not work for aromatic amines like aniline, because the carbon–halogen bond of an aryl halide cannot easily be broken by nucleophilic substitution. In this example, treating phthalimide with methyl iodide in the presence of base, followed by hydrolysis, gives methylamine.

## Carbylamine Reaction

This is the characteristic reaction of primary amines. Primary amines react with chloroform and potassium hydroxide to form isocyanides, foul-smelling compounds, and the reaction is often used to identify a primary amine group in an organic compound. Secondary and tertiary amines will not give this test. For example, aniline gives phenyl isocyanide:

C6H5NH2 + CHCl3 + 3KOH $\to$ C6H5NC + 3KCl + 3H2O

## Clemmensen Reduction

Here carbonyl compounds are reduced to alkanes with zinc amalgam in hydrochloric acid. This method is particularly effective for aryl-alkyl ketones, and the substrate should not be acid-sensitive. For example, acetophenone on Clemmensen reduction gives ethylbenzene. Acetone is likewise reduced, giving propane:

CH3-CO-CH3 $\to$ CH3-CH2-CH3

## Lucas Test

• This is an effective test to distinguish primary, secondary and tertiary alcohols.
• Anhydrous zinc chloride in concentrated hydrochloric acid is called Lucas reagent.
• Alcohols react with Lucas reagent to give alkyl halides, which appear as turbidity.
• The turbidity develops according to the rate of the reaction. Tertiary alcohols react fastest, and turbidity develops immediately.
Secondary alcohols react at a moderate rate, and turbidity develops after 2–3 minutes.

CH3-CH(OH)-CH3 + HCl $\to$ CH3-CH(Cl)-CH3 + H2O

Primary alcohols do not react under normal conditions, so turbidity develops only on heating.

CH3-CH2-OH + HCl $\to$ CH3-CH2-Cl + H2O

In this way we can differentiate primary, secondary and tertiary alcohols.

## Mustard Oil Reaction

This is a characteristic reaction of primary amines. Primary amines react with carbon disulphide in the presence of mercuric chloride to give isothiocyanates.

CH3-NH2 + CS2 + HgCl2 $\to$ CH3-N=C=S + HgS + 2HCl

This test is also used to distinguish primary amines from other amines. It is called the mustard oil reaction because isothiocyanates are the pungent constituents of mustard oil.

## Organic Chemistry Reactions

Organic chemistry reactions differ from inorganic reactions in that a core principle guides the mechanism of the reaction and the products. For example, hydrocarbons on complete combustion give carbon dioxide and water; this applies to all hydrocarbons. So anyone can say at once that methane on combustion gives carbon dioxide and water.

CH4 + 2O2 $\to$ CO2 + 2H2O

## Named Organic Reactions

Named organic reactions are pathways discovered by many chemists over the course of time, usually named after the scientist who discovered the pathway. For example, the Claisen reaction is named after Claisen, who discovered that esters can be condensed to give condensation products under slightly different conditions. Some named reactions are instead named after a reactant, intermediate or product of the reaction; for example, the Gabriel phthalimide synthesis uses phthalimide as an intermediate.

## Organic Chemistry Mechanism

An organic chemistry mechanism is the detailed pathway of an organic reaction.
It shows the intermediates formed during an organic reaction, which can sometimes be isolated by adding suitable reagents. Similarly, it describes the transition state through which a reactant is converted into a product. The mechanism is useful for deriving the rate expression and for determining the kinetically controlled and thermodynamically controlled products.

## Strategic Applications of Named Reactions

The strategic value of named reactions is their innovativeness, which expands the scope of organic chemistry to new pathways and mechanisms. The field grows every day with the invention of new methods of synthesis and new compounds, which keeps it attractive for research and development.

## Named Reactions in Organic Synthesis

The following are some of the named reactions used in organic synthesis.

1. Aldol condensation.
2. Friedel–Crafts reaction.
3. Cannizzaro reaction.
4. Rosenmund reduction.
5. Lucas test.
6. Victor Meyer test.
7. Gabriel phthalimide synthesis.
8. Clemmensen reduction.
9. Wolff–Kishner reduction.
10. Claisen reaction.

## Organic Chemistry Reaction List

Organic chemistry reactions can be classified as follows.

1. Oxidation reactions.
2. Reduction reactions.
3. Hydrolysis reactions.
4. Hydration reactions.
5. Dehydration reactions.
6. Condensation reactions.
7. Polymerization reactions.
8. Substitution reactions.
9. Addition reactions.
10. Elimination reactions.
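Stoichiometric bookkeeping like the methane combustion example above is mechanical enough to check by machine. As a toy illustration (not from the original article), a small Python atom-balance check for CH4 + 2O2 → CO2 + 2H2O:

```python
from collections import Counter
import re

def atoms(formula, count=1):
    """Count atoms in a simple formula like 'CO2' (no parentheses)."""
    c = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:
            c[elem] += count * (int(n) if n else 1)
    return c

def balanced(reactants, products):
    """Each side is a list of (coefficient, formula) pairs."""
    lhs, rhs = Counter(), Counter()
    for k, f in reactants:
        lhs += atoms(f, k)
    for k, f in products:
        rhs += atoms(f, k)
    return lhs == rhs

# Complete combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O
print(balanced([(1, "CH4"), (2, "O2")], [(1, "CO2"), (2, "H2O")]))  # True
```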
https://community.wolfram.com/groups/-/m/t/1232172
# [TMJ] Calculating RRKM Rate Constants from Vibrational Frequencies

Posted 1 year ago

New THE MATHEMATICA JOURNAL article:

## Calculating RRKM Rate Constants from Vibrational Frequencies and Their Dynamic Interpretation

by ADAM C. MANSELL, DAVID J. KAHLE, DARRIN J. BELLERT

ABSTRACT: Rice–Ramsperger–Kassel–Marcus (RRKM) theory calculates an energy-dependent microcanonical unimolecular rate constant for a chemical reaction from a sum and density of vibrational quantum states. This article demonstrates how to program the Beyer–Swinehart direct count of the sum and density of states for harmonic oscillators, as well as the Stein–Rabinovitch extension for anharmonic oscillators. Microcanonical rate constants are calculated for the decomposition of vinyl cyanide ($C_3H_3N$) into $HCN$, $HNC$ and $HCCH$ as an example.
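The article itself gives the Mathematica implementation; for readers outside Mathematica, the Beyer–Swinehart direct count named in the abstract can be sketched in a few lines of Python (the grain size and frequencies below are illustrative, not taken from the article):

```python
def beyer_swinehart(freqs_cm, grain_cm, e_max_cm):
    """Direct count of harmonic-oscillator vibrational states.

    Returns rho, where rho[j] is the number of states in the energy
    grain j*grain_cm; the sum of states N(E) is the running total of rho.
    """
    n_bins = int(e_max_cm / grain_cm) + 1
    rho = [0] * n_bins
    rho[0] = 1                       # the zero-point level
    for nu in freqs_cm:
        r = round(nu / grain_cm)     # frequency in grain units
        for j in range(r, n_bins):
            rho[j] += rho[j - r]
    return rho

# Two identical 1000 cm^-1 oscillators: n quanta can be split n+1 ways,
# so the count at E = n*1000 cm^-1 should be n + 1.
print(beyer_swinehart([1000.0, 1000.0], 1000.0, 5000.0))  # [1, 2, 3, 4, 5, 6]
```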
http://lmb.univ-fcomte.fr/spip.php?page=calendrier&date_debut=2017-09-01
## Events of September 2017

• ### Thursday 28 September, 15:00–17:30 — Mathieu Colin (Univ. de Bordeaux); Alberto Farina (Univ. d'Amiens)

Partial Differential Equations Seminar

Abstract:
Mathieu Colin: "Solitary waves and Schrödinger systems"
Alberto Farina: "A Bernstein-type result for the minimal surface equation"

Location: 316B

• ### Tuesday 26 September, 13:45–14:45 — Colin Petitjean (UFC)

The linear structure of some dual Lipschitz free spaces

Abstract: Consider a metric space $M$ with a distinguished point $0_M$. Let $Lip_0(M)$ be the Banach space of Lipschitz functions from $M$ to $\mathbb R$ satisfying $f(0_M) = 0$ (the canonical norm being the best Lipschitz constant). The Lipschitz-free space $\mathcal F(M)$ over $M$ is defined as the closed linear span in $Lip_0(M)^*$ of $\delta(M)$, where $\delta (x)$ denotes the Dirac measure defined by $\langle \delta (x) , f \rangle = f(x)$. The Lipschitz-free space $\mathcal F(M)$ is a Banach space such that every Lipschitz function on $M$ admits a canonical linear extension defined on $\mathcal F(M)$. It follows easily from this fundamental linearisation property that the dual of $\mathcal F (M)$ is in fact $Lip_0(M)$. A considerable effort to study the linear structure and geometry of these spaces has been undertaken by many researchers in the last two or three decades. In this talk, we first focus on some classes of metric spaces $M$ for which $\mathcal F(M)$ is isometrically isomorphic to a dual Banach space. After a quick overview of the already known results in this line, we define and study the notion of "natural predual". A natural predual is a Banach space $X$ such that $X^* = \mathcal F(M)$ isometrically and $\delta(M)$ is $\sigma(\mathcal F(M),X)$-closed. As we shall see, $\delta(M)$ is always $\sigma(\mathcal F(M),Lip_0(M))$-closed, but it may happen that it is not $\sigma(\mathcal F(M),X)$-closed for some predual $X$.
We characterise the existence of a natural predual in some particular classes of metric spaces. Notably, we concentrate on the class of uniformly discrete and bounded (u.d.b. for short) metric spaces, for which it is well known that $\mathcal F(M)$ is isomorphic to $\ell_1$. In particular, we exhibit an example of a u.d.b. metric space $M$ for which $\mathcal F(M)$ is a dual space isometrically but which does not admit any natural predual. We also provide a u.d.b. metric space $M$ such that $\mathcal F(M)$ is not a dual space isometrically. We finish with a study of the extremal structure of Lipschitz-free spaces admitting a natural predual. This is part of a joint work with L. García-Lirola, A. Procházka and A. Rueda Zoca.

Location: 316Bbis

• ### Thursday 28 September, 16:30–18:00 — IT Committee (Commission Informatique, CI)

Location: Room 316Bbis

• ### Friday 29 September, 10:00–12:00 — PhD defense of Tianxiang GOU

Location: Room 316B (LMB, UFR ST)

• ### Tuesday 26 September, 14:15–16:00 — Habilitation defense of Antoine PERASSO

Location: Salle des Actes (UFR ST, Besançon)
https://wiki.loliot.net/docs/lang/python/libraries/yolov4/model/python-yolov4-model-loss/
# YOLOv1

We optimize for sum-squared error in the output of our model. We use sum-squared error because it is easy to optimize, however it does not perfectly align with our goal of maximizing average precision. It weights localization error equally with classification error, which may not be ideal. Also, in every image many grid cells do not contain any object. This pushes the "confidence" scores of those cells towards zero, often overpowering the gradient from cells that do contain objects. This can lead to model instability, causing training to diverge early on. To remedy this, we increase the loss from bounding box coordinate predictions and decrease the loss from confidence predictions for boxes that don't contain objects. We use two parameters, $\lambda_{coord}$ and $\lambda_{noobj}$, to accomplish this. We set $\lambda_{coord} = 5$ and $\lambda_{noobj} = 0.5$.

Sum-squared error also equally weights errors in large boxes and small boxes. Our error metric should reflect that small deviations in large boxes matter less than in small boxes. To partially address this we predict the square root of the bounding box width and height instead of the width and height directly.

YOLO predicts multiple bounding boxes per grid cell. At training time we only want one bounding box predictor to be responsible for each object. We assign one predictor to be "responsible" for predicting an object based on which prediction has the highest current IOU with the ground truth. This leads to specialization between the bounding box predictors. Each predictor gets better at predicting certain sizes, aspect ratios, or classes of object, improving overall recall.

In the loss, $1^{obj}_i$ denotes if an object appears in cell $i$ and $1^{obj}_{ij}$ denotes that the $j$-th bounding box predictor in cell $i$ is "responsible" for that prediction.
$B = \begin{pmatrix} t_x & t_y & t_w & t_h & t_o \end{pmatrix}$

\begin{aligned} x &= logistic(t_x), & y &= logistic(t_y), & w &= logistic(t_w), & h &= logistic(t_h), \\ C &= logistic(t_o), & p(c_k) &= softmax(c_k) \end{aligned}

\begin{aligned} Loss &= \lambda_{coord} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \left[ \left( x_{ij} - \hat{x}_{ij} \right)^2 + \left( y_{ij} - \hat{y}_{ij} \right)^2 + \left( \sqrt{w_{ij}}- \sqrt{\hat{w}_{ij}} \right)^2 + \left( \sqrt{h_{ij}} - \sqrt{\hat{h}_{ij}} \right)^2 \right] \\ &+ \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \left( IOU^{truth}_{pred} - \hat{C}_{ij} \right)^2 \\ &+ \lambda_{noobj} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{noobj}_{ij} \left( 0 - \hat{C}_{ij} \right)^2 \\ &+ \sum^{S^2}_{i=0} 1^{obj}_i \sum_{c \in classes} \left( p_i(c) - \hat{p}_i(c) \right)^2 \end{aligned}

# YOLOv2

When we move to anchor boxes we also decouple the class prediction mechanism from the spatial location and instead predict class and objectness for every anchor box. Following YOLO, the objectness prediction still predicts the IOU of the ground truth and the proposed box, and the class predictions predict the conditional probability of that class given that there is an object.

$B = \begin{pmatrix} t_x & t_y & t_w & t_h & t_o & c_0 & c_1 & \cdots \end{pmatrix}$

\begin{aligned} x &= logistic(t_x) + c_{xi}, & y &= logistic(t_y) + c_{yi}, & w &= p_w e^{t_w}, & h &= p_h e^{t_h} \\ C &= logistic(t_o), & p(c_k) &= softmax(c_k), & p_* &= anchor_* \end{aligned}

$1^{obj} = max(IOU^{truth}_{anchor})$

\begin{aligned} Loss &= \lambda_{coord} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \left( 2 - w_{ij}h_{ij} \right) \left[ \left( x_{ij} - \hat{x}_{ij} \right)^2 + \left( y_{ij} - \hat{y}_{ij} \right)^2 + \left( w_{ij}- \hat{w}_{ij} \right)^2 + \left( h_{ij} - \hat{h}_{ij} \right)^2 \right] \\ &+ \lambda_{obj} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \left( IOU^{truth}_{pred} - \hat{C}_{ij} \right)^2 \\ &+ \lambda_{noobj} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{noobj}_{ij} \left( 0 - \hat{C}_{ij} \right)^2 \\ &+ \lambda_{class} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \sum_{c \in classes} \left( p_i(c) - \hat{p}_i(c) \right)^2 \end{aligned}

# YOLOv3

During training we use sum of squared error loss. If the ground truth for some coordinate prediction is $t_*$, our gradient is the ground truth value (computed from the ground truth box) minus our prediction: $t_* - \hat{t}_*$. This ground truth value can be easily computed by inverting the equations above.

YOLOv3 predicts an objectness score for each bounding box using logistic regression. This should be 1 if the bounding box prior overlaps a ground truth object by more than any other bounding box prior. If the bounding box prior is not the best but does overlap a ground truth object by more than some threshold, we ignore the prediction, following Faster R-CNN. We use the threshold of 0.5. Unlike Faster R-CNN, our system only assigns one bounding box prior for each ground truth object. If a bounding box prior is not assigned to a ground truth object it incurs no loss for coordinate or class predictions, only objectness.

Each box predicts the classes the bounding box may contain using multilabel classification.
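The decoding of raw outputs $t$ into a box, used from YOLOv2 onward, can be written out directly. A minimal Python sketch (the cell offsets and anchor sizes below are made-up numbers, not from any trained model):

```python
import math

def sigmoid(t):
    # "logistic" in the equations above
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(t, cell_xy, anchor_wh):
    """YOLOv2/v3 decoding of raw outputs t = (t_x, t_y, t_w, t_h, t_o):
    x = sigmoid(t_x) + c_x, y = sigmoid(t_y) + c_y  (grid units),
    w = p_w * exp(t_w),     h = p_h * exp(t_h),
    objectness C = sigmoid(t_o).
    """
    tx, ty, tw, th, to = t
    cx, cy = cell_xy
    pw, ph = anchor_wh
    return (sigmoid(tx) + cx, sigmoid(ty) + cy,
            pw * math.exp(tw), ph * math.exp(th), sigmoid(to))

# All-zero raw outputs land the box at the cell centre with the anchor's size:
print(decode_box((0, 0, 0, 0, 0), cell_xy=(3, 5), anchor_wh=(2.0, 4.0)))
# (3.5, 5.5, 2.0, 4.0, 0.5)
```

Predicting offsets relative to the cell and sizes relative to the anchor keeps the logistic terms bounded in [0, 1), which is what makes the $(t_* - \hat{t}_*)$ gradients in the v3 loss well behaved.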
We do not use a softmax, as we have found it is unnecessary for good performance; instead we simply use independent logistic classifiers. During training we use binary cross-entropy loss for the class predictions.

$B = \begin{pmatrix} t_x & t_y & t_w & t_h & t_o & c_0 & c_1 & \cdots \end{pmatrix}$

\begin{aligned} x &= logistic(t_x) + c_{xi}, & y &= logistic(t_y) + c_{yi}, & w &= p_w e^{t_w}, & h &= p_h e^{t_h} \\ C &= logistic(t_o), & p(c_k) &= logistic(c_k), & p_* &= anchor_* \end{aligned}

$1^{obj} = max(IOU^{truth}_{anchor})$

\begin{aligned} Loss &= \lambda_{coord} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} 1^{IOU^{truth}_{anchor} > 0.5}_{ij} \left( 2 - w_{ij}h_{ij} \right) \\ & \qquad \qquad \left[ \left( t_{xij} - \hat{t_x}_{ij} \right)^2 + \left( t_{yij} - \hat{t_y}_{ij} \right)^2 + \left( t_{wij}- \hat{t_w}_{ij} \right)^2 + \left( t_{hij} - \hat{t_h}_{ij} \right)^2 \right] \\ &+ \lambda_{obj} \sum^{S^2}_{i=0} \sum^B_{j=0} \left[ - C_{ij} \log \hat{C}_{ij} - \left( 1 - C_{ij} \right) \log \left( 1 - \hat{C}_{ij} \right) \right] \\ &+ \lambda_{class} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \sum_{c \in classes} \left[ - p_i(c) \log \hat{p}_i(c) - \left( 1- p_i(c) \right) \log \left( 1 - \hat{p}_i(c) \right) \right] \end{aligned}

# YOLOv4#

- Bag of Freebies (BoF) for backbone: CutMix and Mosaic data augmentation, DropBlock regularization, Class label smoothing
- Bag of Specials (BoS) for backbone: Mish activation, Cross-stage partial connections (CSP), Multi-input weighted residual connections (MiWRC)
- Bag of Freebies (BoF) for detector: CIoU-loss, CmBN, DropBlock regularization, Mosaic data augmentation, Self-Adversarial Training, Eliminate grid sensitivity, Using multiple anchors for a single ground truth, Cosine annealing scheduler (SGDR), Optimal hyper-parameters, Random training shapes
- Bag of Specials (BoS) for detector: Mish activation, SPP-block, SAM-block, PAN path-aggregation block, DIoU-NMS

$B = \begin{pmatrix} t_x & t_y & t_w & t_h & t_o & c_0 & c_1 & \cdots \end{pmatrix}$

\begin{aligned} x &= scale(logistic(t_x)) + c_{xi}, & y &= scale(logistic(t_y)) + c_{yi}, & w &= p_w e^{t_w}, & h &= p_h e^{t_h} \\ C &= logistic(t_o), & p(c_k) &= logistic(c_k), & p_* &= anchor_* \end{aligned}

$1^{obj} = max(IOU^{truth}_{anchors}) \quad and \quad IOU^{truth}_{anchors} > iou\_thresh$

\begin{aligned} Loss &= \lambda_{coord} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \left( 1 - CIOU(pred, truth) \right) \\ &+ \lambda_{obj} \sum^{S^2}_{i=0} \sum^B_{j=0} \left[ - C_{ij} \log \hat{C}_{ij} - \left( 1 - C_{ij} \right) \log \left( 1 - \hat{C}_{ij} \right) \right] \\ &+ \lambda_{class} \sum^{S^2}_{i=0} \sum^B_{j=0} 1^{obj}_{ij} \sum_{c \in classes} \left[ - p_i(c) \log \hat{p}_i(c) - \left( 1- p_i(c) \right) \log \left( 1 - \hat{p}_i(c) \right) \right] \end{aligned}
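The anchor-box decoding shared by the YOLOv2–v4 equations above can be sketched in plain Python. The grid offsets, anchor sizes, and raw outputs below are invented illustration values, not outputs of any real network; `scale = 1.0` recovers the YOLOv2/v3 form, while `scale > 1` corresponds to YOLOv4's "eliminate grid sensitivity" variant:

```python
import math

def logistic(t):
    """Standard logistic (sigmoid) activation."""
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(t_x, t_y, t_w, t_h, t_o, c_x, c_y, p_w, p_h, scale=1.0):
    """Decode raw predictions t_* into a box, following
    x = scale * logistic(t_x) + c_x,  w = p_w * exp(t_w),  C = logistic(t_o)."""
    x = scale * logistic(t_x) + c_x
    y = scale * logistic(t_y) + c_y
    w = p_w * math.exp(t_w)
    h = p_h * math.exp(t_h)
    objectness = logistic(t_o)
    return x, y, w, h, objectness

# Hypothetical raw outputs for the cell at grid offset (3, 5)
# with an anchor prior of 1.5 x 2.0 grid units.
box = decode_box(0.2, -0.4, 0.1, 0.3, 1.2, c_x=3, c_y=5, p_w=1.5, p_h=2.0)
```

With all `t_*` equal to zero, the decoded box sits at the cell centre with exactly the anchor's width and height, which is why the anchors act as size priors.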
https://socratic.org/questions/how-do-you-graph-y-100-2-x-2-10000
# How do you graph (y-100)^2 + x^2 = 10000?

Jan 26, 2016

${\left(y - 100\right)}^{2} + {x}^{2} = {10}^{4}$

${\left(y - 100\right)}^{2} + {x}^{2} = {\left({10}^{2}\right)}^{2}$

Comparing this with the standard form ${\left(x - a\right)}^{2} + {\left(y - b\right)}^{2} = {r}^{2}$ of a circle centred at $\left(a, b\right)$ with radius $r$, this can be seen to be the equation of a circle with radius ${10}^{2} = 100$ and with centre $\left(0 , 100\right)$.
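A quick numerical check (a sketch of mine, not part of the original answer): parametrising the circle as $x = 100 \cos \theta$, $y = 100 + 100 \sin \theta$ and substituting back should give 10000 for every $\theta$:

```python
import math

def on_circle(theta):
    """Left-hand side (y - 100)^2 + x^2 evaluated on the parametrised circle
    of radius 100 centred at (0, 100)."""
    x = 100 * math.cos(theta)
    y = 100 + 100 * math.sin(theta)
    return (y - 100) ** 2 + x ** 2

# Sample twelve points around the circle; each should satisfy the equation.
checks = [on_circle(2 * math.pi * k / 12) for k in range(12)]
```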
http://planetmath.org/TheRealNumbersAreIndecomposableAsATopologicalSpace
# the real numbers are indecomposable as a topological space

Let $\mathbb{R}$ be the set of real numbers with the standard topology. We wish to show that if $\mathbb{R}$ is homeomorphic to $X\times Y$ for some topological spaces $X$ and $Y$, then either $X$ or $Y$ is a one-point space.

First let us prove a lemma:

Lemma. Let $X$ and $Y$ be path connected topological spaces such that the cardinality of both $X$ and $Y$ is at least $2$. Then for any point $(x_{0},y_{0})\in X\times Y$ the space $X\times Y\setminus\{(x_{0},y_{0})\}$ with the subspace topology is path connected.

Proof. Let $x^{\prime}\in X$ and $y^{\prime}\in Y$ be such that $x^{\prime}\neq x_{0}$ and $y^{\prime}\neq y_{0}$ (we assumed that such points exist). It is sufficient to show that for any point $(x_{1},y_{1})$ in $X\times Y\setminus\{(x_{0},y_{0})\}$ there exists a continuous map $\sigma:\mathrm{I}\to X\times Y$ such that $\sigma(0)=(x_{1},y_{1})$, $\sigma(1)=(x^{\prime},y^{\prime})$ and $(x_{0},y_{0})\not\in\sigma(\mathrm{I})$.

Let $(x_{1},y_{1})\in X\times Y\setminus\{(x_{0},y_{0})\}$. Then either $x_{1}\neq x_{0}$ or $y_{1}\neq y_{0}$. Assume that $y_{1}\neq y_{0}$ (the other case is analogous). Choose paths $\sigma:\mathrm{I}\to X$ from $x_{1}$ to $x^{\prime}$ and $\tau:\mathrm{I}\to Y$ from $y_{1}$ to $y^{\prime}$. Then we have induced paths:

$\sigma^{\prime}:\mathrm{I}\to X\times Y \quad \mathrm{such\ that} \quad \sigma^{\prime}(t)=(\sigma(t),y_{1});$

$\tau^{\prime}:\mathrm{I}\to X\times Y \quad \mathrm{such\ that} \quad \tau^{\prime}(t)=(x^{\prime},\tau(t)).$

Note that $\sigma^{\prime}$ avoids $(x_{0},y_{0})$ because $y_{1}\neq y_{0}$, and $\tau^{\prime}$ avoids it because $x^{\prime}\neq x_{0}$. Then the path $\sigma^{\prime}*\tau^{\prime}:\mathrm{I}\to X\times Y$ defined by the formula

$(\sigma^{\prime}*\tau^{\prime})(t)=\begin{cases}\sigma^{\prime}(2t)&\mathrm{when}\ \ 0\leq t\leq\frac{1}{2}\\ \tau^{\prime}(2t-1)&\mathrm{when}\ \ \frac{1}{2}\leq t\leq 1\end{cases}$

is a desired path. $\square$
Theorem. If there exist topological spaces $X$ and $Y$ such that $\mathbb{R}$ is homeomorphic to $X\times Y$, then either $X$ has exactly one point or $Y$ has exactly one point.

Proof. Assume that neither $X$ nor $Y$ has exactly one point. Now $X\times Y$ is path connected since it is homeomorphic to $\mathbb{R}$, so it is well known that both $X$ and $Y$ have to be path connected (please see this entry (http://planetmath.org/ProductOfPathConnectedSpacesIsPathConnected) for more details). Therefore for any point $(x,y)\in X\times Y$ the space $X\times Y\setminus\{(x,y)\}$ is also path connected (due to the lemma), but there exists a real number $r\in\mathbb{R}$ such that $X\times Y\setminus\{(x,y)\}$ is homeomorphic to $\mathbb{R}\setminus\{r\}$. Contradiction, since $\mathbb{R}\setminus\{r\}$ is not path connected. $\square$

Title: the real numbers are indecomposable as a topological space. Canonical name: TheRealNumbersAreIndecomposableAsATopologicalSpace. Created: 2013-03-22 18:30:59. Author: joking (16130). Type: Theorem. Classification: msc 54F99.
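The concatenation in the lemma can be illustrated numerically in the product $\mathbb{R}\times\mathbb{R}$. This toy instance, with straight-line segment paths, is my own illustration and not part of the entry: move first in the $X$ factor at height $y_1 \neq y_0$, then in the $Y$ factor at $x^{\prime} \neq x_0$, so the removed point $(x_0, y_0)$ is never visited.

```python
# Illustrative concatenated path in R x R avoiding the removed point (x0, y0).
x0, y0 = 0.0, 0.0            # point removed from the product
x1, y1 = -2.0, 1.0           # start point, with y1 != y0
xp, yp = 3.0, -4.0           # target point (x', y') with x' != x0, y' != y0

def sigma(t):                # path in the X factor from x1 to x'
    return x1 + t * (xp - x1)

def tau(t):                  # path in the Y factor from y1 to y'
    return y1 + t * (yp - y1)

def concat(t):
    """(sigma' * tau')(t): first move in X at height y1, then in Y at x'."""
    if t <= 0.5:
        return (sigma(2 * t), y1)
    return (xp, tau(2 * t - 1))

# Sample the whole path; none of the samples hits (x0, y0).
samples = [concat(k / 1000) for k in range(1001)]
```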
https://www.physicsforums.com/threads/derivatives-the-slope-of-a-graph.390397/
# Derivatives & the Slope of a graph

MitsuShai

Given

Last edited:

## Answers and Replies

Homework Helper

If I remember correctly, f(x) is an increasing function for f'(x) > 0 and a decreasing function for f'(x) < 0.

MitsuShai

I know I found that it is increasing on the interval (-1,1), but that's wrong.

Homework Helper

How did you get that?

MitsuShai

OK, I re-did it again because I found an error. Now I got (-1,3) for increasing and (-infinity, -1) U (3, infinity) for decreasing, and I got it by doing the number line test (I think that's what it's called), where I take all the critical numbers and line them up and put test numbers in between them.

Last edited:

Gold Member

Now how about d and e? Here's a clue... What is the shape of the graph when the derivative is zero?

MitsuShai

It's a straight line, and on the original function that means there's a point there where it is neither increasing nor decreasing, or a max or min. I found an answer for them, but I'm not really confident: f(3) = error..... f(-1) = 9/16. I'm supposed to get two numbers, but I think I did something wrong again... I hate these problems....

Last edited:

Gold Member

Hmm... I'm sorry, but I don't get (-1,3) for increasing... The derivative is this, as you have written, correct?

$$\frac{5(1-x)(1+x)}{(3x^2-10x+3)^2}$$

Ignore the denominator... where is the numerator zero? Where are each of the three components zero?

MitsuShai

In the numerator: -1, 1; denominator: 1/3, 3. Dang it, I redid it again and this time I got (-1,1) as increasing, and I know that's wrong....

Gold Member

I said ignore the denominator... if the denominator is zero, your graph is really screwing up. OK, so you know the graph of the derivative is zero at -1 and 1. Thus, the minimum and the maximum must be at those two points.

MitsuShai

That's what I did when I first started it and I got (-1,1) as increasing, but that's wrong.....

Gold Member

Well, that's strange... That is the point where it's increasing. So I don't know why you got it wrong.

Staff Emeritus Homework Helper

There's a singularity at x = 1/3, which is in (-1,1). That's probably why.

MitsuShai

There's an error at 3 too, so does that mean I do count the denominator? Even if I do count the denominator, I get an error as the maximum.

Gold Member

Oh... So it would be -1 < x < 1, x ≠ 1/3 then...

MitsuShai

Huh? Oh, are you saying because it's undefined there it's (-1, 1/3) U (1/3, 1)???

Staff Emeritus Homework Helper

Yes, exactly. The function isn't increasing at x = 1/3 because it isn't defined there, so you have to split the interval as you have done.

MitsuShai

OK, so I redid it over again; do you think these are correct:

b) (-1, 1/3) U (1/3, 1) -- increasing interval
c) (-infinity, -1) U (1, 3) U (3, infinity) -- decreasing interval
d) f(1) = 1/4 -- max
e) f(-1) = 9/16 -- min

Last edited:

Gold Member

I do believe so. And that would solve your problem as well as allow you to solve d.
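The "number line test" used in the thread can be automated: sample the derivative once inside each interval cut out by the critical numbers and singularities, and read off the sign. This sketch assumes a derivative of the form $5(1-x)(1+x)/(3x^2-10x+3)^2$, consistent with the numerator zeros −1 and 1 discussed above:

```python
def fprime(x):
    """Derivative discussed in the thread (numerator zeros at -1 and 1,
    denominator zeros at 1/3 and 3)."""
    return 5 * (1 - x) * (1 + x) / (3 * x**2 - 10 * x + 3) ** 2

# Breakpoints of the number line: critical numbers and singularities.
breakpoints = [-1, 1/3, 1, 3]

# One test point inside each of the five resulting intervals.
test_points = [-2, 0, 0.5, 2, 4]
signs = ['+' if fprime(x) > 0 else '-' for x in test_points]
# '+' marks intervals where f is increasing: (-1, 1/3) and (1/3, 1).
```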
https://eprint.iacr.org/2020/1404
## Cryptology ePrint Archive: Report 2020/1404

A Practical Key-Recovery Attack on 805-Round Trivium

Chen-Dong Ye and Tian Tian

Abstract: The cube attack is one of the most important cryptanalytic techniques against Trivium. Many improvements have been proposed and lots of key-recovery attacks based on cube attacks have been established. However, among these key-recovery attacks, few attacks can recover the 80-bit full key practically. In particular, the previous best practical key-recovery attack was on 784-round Trivium proposed by Fouque and Vannet at FSE 2013 with on-line complexity about $2^{39}$. To mount a practical key-recovery attack against Trivium on a PC, a sufficient number of low-degree superpolies should be recovered, which is around 40. This is a difficult task both for experimental cube attacks and division property based cube attacks with randomly selected cubes, due to lack of efficiency. In this paper, we give a new algorithm to construct candidate cubes targeting linear superpolies in cube attacks. It is shown by our experiments that the new algorithm is very effective. In our experiments, the success probability is $100\%$ for finding linear superpolies using the constructed cubes. As a result, we mount a practical key-recovery attack on 805-round Trivium, which increases the number of attacked initialisation rounds by 21. We obtain over 1000 cubes with linear superpolies for 805-round Trivium, where 42 linearly independent ones could be selected. With these superpolies, for 805-round Trivium, the 80-bit key could be recovered within on-line complexity $2^{41.40}$, which could be carried out on a single PC equipped with a GTX-1080 GPU in several hours. Furthermore, the new algorithm is applied to 810-round Trivium, a cube of size 43 is constructed, and two subcubes of size 42 with linear superpolies for 810-round Trivium are found.
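The core cube-attack identity — summing the cipher's output polynomial over all assignments of a cube's public variables yields that cube's superpoly — can be seen on a toy polynomial over GF(2). The polynomial below is a made-up miniature, unrelated to Trivium itself:

```python
from itertools import product

def f(v1, v2, k1, k2, k3):
    """Toy output bit over GF(2): f = v1*v2*(k1 + k2) + v1*k3 + v2 + k1."""
    return (v1 * v2 * (k1 ^ k2)) ^ (v1 * k3) ^ v2 ^ k1

def cube_sum(k1, k2, k3):
    """XOR f over the cube {v1, v2}. Every term not divisible by the
    monomial v1*v2 is counted an even number of times and cancels,
    leaving the superpoly k1 + k2."""
    s = 0
    for v1, v2 in product((0, 1), repeat=2):
        s ^= f(v1, v2, k1, k2, k3)
    return s
```

Because the recovered superpoly here is linear in the key bits, observing the cube sum gives one linear equation in the key — which is exactly why the paper hunts for many linearly independent linear superpolies.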
Category / Keywords: secret-key cryptography / Cube Attacks, Key-Recovery Attacks, Trivium, Heuristic Algorithm, Moebius Transformation

Date: received 11 Nov 2020, last revised 15 Dec 2020

Contact author: ye_chendong at 126 com

Available format(s): PDF | BibTeX Citation

Short URL: ia.cr/2020/1404

[ Cryptology ePrint archive ]
https://courses.lumenlearning.com/physics/chapter/15-6-entropy-and-the-second-law-of-thermodynamics-disorder-and-the-unavailability-of-energy/
Entropy and the Second Law of Thermodynamics: Disorder and the Unavailability of Energy

Learning Objectives

By the end of this section, you will be able to:

• Define entropy.
• Calculate the increase of entropy in a system with reversible and irreversible processes.
• Explain the expected fate of the universe in entropic terms.
• Calculate the increasing disorder of a system.

Figure 1. The ice in this drink is slowly melting. Eventually the liquid will reach thermal equilibrium, as predicted by the second law of thermodynamics. (credit: Jon Sullivan, PDPhoto.org)

There is yet another way of expressing the second law of thermodynamics. This version relates to a concept called entropy. By examining it, we shall see that the directions associated with the second law—heat transfer from hot to cold, for example—are related to the tendency in nature for systems to become disordered and for less energy to be available for use as work. The entropy of a system can in fact be shown to be a measure of its disorder and of the unavailability of energy to do work.

Making Connections: Entropy, Energy, and Work

Recall that the simple definition of energy is the ability to do work. Entropy is a measure of how much energy is not available to do work. Although all forms of energy are interconvertible, and all can be used to do work, it is not always possible, even in principle, to convert the entire available energy into work. That unavailable energy is of interest in thermodynamics, because the field of thermodynamics arose from efforts to convert heat to work.

We can see how entropy is defined by recalling our discussion of the Carnot engine. We noted that for a Carnot cycle, and hence for any reversible process, $\displaystyle\frac{Q_{\text{c}}}{Q_{\text{h}}}=\frac{T_{\text{c}}}{T_{\text{h}}}\\$. Rearranging terms yields $\displaystyle\frac{Q_{\text{c}}}{T_{\text{c}}}=\frac{Q_{\text{h}}}{T_{\text{h}}}\\$ for any reversible process.
Qc and Qh are absolute values of the heat transfer at temperatures Tc and Th, respectively. This ratio of $\frac{Q}{T}\\$ is defined to be the change in entropy ΔS for a reversible process, $\Delta{S}=\left(\frac{Q}{T}\right)_{\text{rev}}\\$, where Q is the heat transfer, which is positive for heat transfer into the system and negative for heat transfer out of it, and T is the absolute temperature at which the reversible process takes place. The SI unit for entropy is joules per kelvin (J/K). If temperature changes during the process, then it is usually a good approximation (for small changes in temperature) to take T to be the average temperature, avoiding the need to use integral calculus to find ΔS.

The definition of ΔS is strictly valid only for reversible processes, such as those used in a Carnot engine. However, we can find ΔS precisely even for real, irreversible processes. The reason is that the entropy S of a system, like internal energy U, depends only on the state of the system and not on how it reached that condition. Entropy is a property of state. Thus the change in entropy ΔS of a system between state 1 and state 2 is the same no matter how the change occurs. We just need to find or imagine a reversible process that takes us from state 1 to state 2 and calculate ΔS for that process. That will be the change in entropy for any process going from state 1 to state 2. (See Figure 2.)

Figure 2. When a system goes from state 1 to state 2, its entropy changes by the same amount ΔS, whether a hypothetical reversible path is followed or a real irreversible path is taken.

Now let us take a look at the change in entropy of a Carnot engine and its heat reservoirs for one full cycle. The hot reservoir has a loss of entropy $\Delta{S}_{\text{h}}=\frac{-Q_{\text{h}}}{T_{\text{h}}}\\$, because heat transfer occurs out of it (remember that when heat transfers out, then Q has a negative sign).
The cold reservoir has a gain of entropy $\Delta{S}_{\text{c}}=\frac{Q_{\text{c}}}{T_{\text{c}}}\\$, because heat transfer occurs into it. (We assume the reservoirs are sufficiently large that their temperatures are constant.) So the total change in entropy is ΔStot = ΔSh + ΔSc. Thus, since we know that $\frac{Q_{\text{h}}}{T_{\text{h}}}=\frac{Q_{\text{c}}}{T_{\text{c}}}\\$ for a Carnot engine, $\Delta{S}_{\text{tot}}=-\frac{Q_{\text{h}}}{T_{\text{h}}}+\frac{Q_{\text{c}}}{T_{\text{c}}}=0\\$.

This result, which has general validity, means that the total change in entropy for a system in any reversible process is zero. The entropy of various parts of the system may change, but the total change is zero. Furthermore, the system does not affect the entropy of its surroundings, since heat transfer between them does not occur. Thus the reversible process changes neither the total entropy of the system nor the entropy of its surroundings. Sometimes this is stated as follows: Reversible processes do not affect the total entropy of the universe. Real processes are not reversible, though, and they do change total entropy. We can, however, use hypothetical reversible processes to determine the value of entropy in real, irreversible processes. Example 1 illustrates this point.

Example 1. Entropy Increases in an Irreversible (Real) Process

Spontaneous heat transfer from hot to cold is an irreversible process. Calculate the total change in entropy if 4000 J of heat transfer occurs from a hot reservoir at Th = 600 K (327ºC) to a cold reservoir at Tc = 250 K (−23ºC), assuming there is no temperature change in either reservoir. (See Figure 3.)

Figure 3. (a) Heat transfer from a hot object to a cold one is an irreversible process that produces an overall increase in entropy.
(b) The same final state and, thus, the same change in entropy is achieved for the objects if reversible heat transfer processes occur between the two objects whose temperatures are the same as the temperatures of the corresponding objects in the irreversible process.

Strategy

How can we calculate the change in entropy for an irreversible process when ΔStot = ΔSh + ΔSc is valid only for reversible processes? Remember that the total change in entropy of the hot and cold reservoirs will be the same whether a reversible or irreversible process is involved in heat transfer from hot to cold. So we can calculate the change in entropy of the hot reservoir for a hypothetical reversible process in which 4000 J of heat transfer occurs from it; then we do the same for a hypothetical reversible process in which 4000 J of heat transfer occurs to the cold reservoir. This produces the same changes in the hot and cold reservoirs that would occur if the heat transfer were allowed to occur irreversibly between them, and so it also produces the same changes in entropy.

Solution

We now calculate the two changes in entropy using ΔStot = ΔSh + ΔSc. First, for the heat transfer from the hot reservoir,

$\displaystyle\Delta{S}_{\text{h}}=\frac{-Q_{\text{h}}}{T_{\text{h}}}=\frac{-4000\text{ J}}{600\text{ K}}=-6.67\text{ J/K}\\$

And for the cold reservoir,

$\displaystyle\Delta{S}_{\text{c}}=\frac{Q_{\text{c}}}{T_{\text{c}}}=\frac{4000\text{ J}}{250\text{ K}}=16.0\text{ J/K}\\$

Thus the total is

$\begin{array}{lll}\Delta{S}_{\text{tot}}&=&\Delta{S}_{\text{h}}+\Delta{S}_{\text{c}}\\\text{ }&=&\left(-6.67+16.0\right)\text{ J/K}\\\text{ }&=&9.33\text{ J/K}\end{array}\\$

Discussion

There is an increase in entropy for the system of two heat reservoirs undergoing this irreversible heat transfer. We will see that this means there is a loss of ability to do work with this transferred energy. Entropy has increased, and energy has become unavailable to do work.
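The two-reservoir bookkeeping in Example 1 is easy to reproduce; a minimal sketch using the values from the example:

```python
def delta_S(Q, T):
    """Entropy change for signed heat transfer Q (J) at temperature T (K)."""
    return Q / T

Q = 4000.0                     # J transferred from hot to cold
T_hot, T_cold = 600.0, 250.0   # reservoir temperatures, K

dS_hot = delta_S(-Q, T_hot)    # heat leaves the hot reservoir: negative
dS_cold = delta_S(+Q, T_cold)  # heat enters the cold reservoir: positive
dS_total = dS_hot + dS_cold    # about +9.33 J/K, matching the example
```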
It is reasonable that entropy increases for heat transfer from hot to cold. Since the change in entropy is $\frac{Q}{T}\\$, there is a larger change at lower temperatures. The decrease in entropy of the hot object is therefore less than the increase in entropy of the cold object, producing an overall increase, just as in the previous example. This result is very general: There is an increase in entropy for any system undergoing an irreversible process.

With respect to entropy, there are only two possibilities: entropy is constant for a reversible process, and it increases for an irreversible process. There is a fourth version of the second law of thermodynamics stated in terms of entropy: The total entropy of a system either increases or remains constant in any process; it never decreases. For example, heat transfer cannot occur spontaneously from cold to hot, because entropy would decrease.

Entropy is very different from energy. Entropy is not conserved but increases in all real processes. Reversible processes (such as in Carnot engines) are the processes in which the most heat transfer to work takes place and are also the ones that keep entropy constant. Thus we are led to make a connection between entropy and the availability of energy to do work.

Entropy and the Unavailability of Energy to Do Work

What does a change in entropy mean, and why should we be interested in it? One reason is that entropy is directly related to the fact that not all heat transfer can be converted into work. Example 2 gives some indication of how an increase in entropy results in less heat transfer into work.

Example 2. Less Work is Produced by a Given Heat Transfer When Entropy Change is Greater

1. Calculate the work output of a Carnot engine operating between temperatures of 600 K and 100 K for 4000 J of heat transfer to the engine.
2.
Now suppose that the 4000 J of heat transfer occurs first from the 600 K reservoir to a 250 K reservoir (without doing any work, and this produces the increase in entropy calculated above) before transferring into a Carnot engine operating between 250 K and 100 K. What work output is produced? (See Figure 4.)

Figure 4. (a) A Carnot engine working between 600 K and 100 K has 4000 J of heat transfer and performs 3333 J of work. (b) The 4000 J of heat transfer occurs first irreversibly to a 250 K reservoir and then goes into a Carnot engine. The increase in entropy caused by the heat transfer to a colder reservoir results in a smaller work output of 2400 J. There is a permanent loss of 933 J of energy for the purpose of doing work.

Strategy

In both parts, we must first calculate the Carnot efficiency and then the work output.

Solution to Part 1

The Carnot efficiency is given by $\mathit{Eff}_{\text{C}}=1-\frac{T_{\text{c}}}{T_{\text{h}}}\\$. Substituting the given temperatures yields $\mathit{Eff}_{\text{C}}=1-\frac{100\text{ K}}{600\text{ K}}=0.833\\$.

Now the work output can be calculated using the definition of efficiency for any heat engine as given by $\mathit{Eff}=\frac{W}{Q_{\text{h}}}\\$. Solving for W and substituting known terms gives

$\begin{array}{lll}W&=&\mathit{Eff}_{\text{C}}Q_{\text{h}}\\\text{ }&=&\left(0.833\right)\left(4000\text{ J}\right)=3333\text{ J}\end{array}\\$

Solution to Part 2

Similarly,

$\mathit{Eff}\prime_{\text{C}}=1-\frac{T_{\text{c}}}{T\prime_{\text{h}}}=1-\frac{100\text{ K}}{250\text{ K}}=0.600\\$

so that

$\begin{array}{lll}W&=&\mathit{Eff}\prime_{\text{C}}Q_{\text{h}}\\\text{ }&=&\left(0.600\right)\left(4000\text{ J}\right)=2400\text{ J}\end{array}\\$

Discussion

There is 933 J less work from the same heat transfer in the second process. This result is important. The same heat transfer into two perfect engines produces different work outputs, because the entropy change differs in the two cases.
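Example 2's two work outputs can be checked with the Carnot efficiency formula, and the 933 J difference equals the entropy increase of Example 1 times the lowest temperature used; a sketch:

```python
def carnot_eff(T_cold, T_hot):
    """Carnot efficiency: 1 - Tc/Th (temperatures in K)."""
    return 1 - T_cold / T_hot

Q = 4000.0   # J of heat transfer into each engine

# (a) Direct: Carnot engine between 600 K and 100 K.
W_direct = carnot_eff(100, 600) * Q            # about 3333 J

# (b) After irreversible transfer to a 250 K reservoir.
W_after = carnot_eff(100, 250) * Q             # 2400 J

# Work made permanently unavailable equals dS * T0.
dS = -Q / 600 + Q / 250                        # entropy increase, ~9.33 J/K
W_unavail = dS * 100                           # about 933 J
```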
In the second case, entropy is greater and less work is produced. Entropy is associated with the unavailability of energy to do work. When entropy increases, a certain amount of energy becomes permanently unavailable to do work. The energy is not lost, but its character is changed, so that some of it can never be converted to doing work—that is, to an organized force acting through a distance. For instance, in Example 2, 933 J less work was done after an increase in entropy of 9.33 J/K occurred in the 4000 J heat transfer from the 600 K reservoir to the 250 K reservoir. It can be shown that the amount of energy that becomes unavailable for work is Wunavail = ΔS ⋅ T0, where T0 is the lowest temperature utilized. In Example 2, Wunavail = (9.33 J/K)(100 K) = 933 J, as found.

Heat Death of the Universe: An Overdose of Entropy

In the early, energetic universe, all matter and energy were easily interchangeable and identical in nature. Gravity played a vital role in the young universe. Although it may have seemed disorderly, and therefore, superficially entropic, in fact, there was enormous potential energy available to do work—all the future energy in the universe. As the universe matured, temperature differences arose, which created more opportunity for work. Stars are hotter than planets, for example, which are warmer than icy asteroids, which are warmer still than the vacuum of the space between them. Most of these are cooling down from their usually violent births, at which time they were provided with energy of their own—nuclear energy in the case of stars, volcanic energy on Earth and other planets, and so on. Without additional energy input, however, their days are numbered. As entropy increases, less and less energy in the universe is available to do work.
On Earth, we still have great stores of energy such as fossil and nuclear fuels; large-scale temperature differences, which can provide wind energy; geothermal energies due to differences in temperature in Earth’s layers; and tidal energies owing to our abundance of liquid water. As these are used, a certain fraction of the energy they contain can never be converted into doing work. Eventually, all fuels will be exhausted, all temperatures will equalize, and it will be impossible for heat engines to function, or for work to be done.

Entropy increases in a closed system, such as the universe. But parts of the universe, for instance the Solar System, are not closed systems. Energy flows from the Sun to the planets, replenishing Earth’s stores of energy. The Sun will continue to supply us with energy for about another five billion years. We will enjoy direct solar energy, as well as side effects of solar energy, such as wind power and biomass energy from photosynthetic plants. The energy from the Sun will keep our water in the liquid state, and the Moon’s gravitational pull will continue to provide tidal energy. But Earth’s geothermal energy will slowly run down and won’t be replenished.

But in terms of the universe, and the very long-term, very large-scale picture, the entropy of the universe is increasing, and so the availability of energy to do work is constantly decreasing. Eventually, when all stars have died, all forms of potential energy have been utilized, and all temperatures have equalized (depending on the mass of the universe, either at a very high temperature following a universal contraction, or a very low one, just before all activity ceases) there will be no possibility of doing work. Either way, the universe is destined for thermodynamic equilibrium—maximum entropy. This is often called the heat death of the universe, and will mean the end of all activity.
However, whether the universe contracts and heats up, or continues to expand and cools down, the end is not near. Calculations of black holes suggest that entropy can easily continue to increase for at least 10^100 years.

Order to Disorder

Entropy is related not only to the unavailability of energy to do work—it is also a measure of disorder. This notion was initially postulated by Ludwig Boltzmann in the 1800s. For example, melting a block of ice means taking a highly structured and orderly system of water molecules and converting it into a disorderly liquid in which molecules have no fixed positions. (See Figure 5.) There is a large increase in entropy in the process, as seen in the following example.

Figure 5. When ice melts, it becomes more disordered and less structured. The systematic arrangement of molecules in a crystal structure is replaced by a more random and less orderly movement of molecules without fixed locations or orientations. Its entropy increases because heat transfer occurs into it. Entropy is a measure of disorder.

Example 3. Entropy Associated with Disorder

Find the increase in entropy of 1.00 kg of ice originally at 0ºC that is melted to form water at 0ºC.

Strategy

As before, the change in entropy can be calculated from the definition of ΔS once we find the energy Q needed to melt the ice.

Solution

The change in entropy is defined as: $\Delta{S}=\frac{Q}{T}\\$. Here Q is the heat transfer necessary to melt 1.00 kg of ice and is given by Q = mLf, where m is the mass and Lf is the latent heat of fusion. Lf = 334 kJ/kg for water, so that Q = (1.00 kg)(334 kJ/kg) = 3.34 × 10^5 J. Now the change in entropy is positive, since heat transfer occurs into the ice to cause the phase change; thus, $\displaystyle\Delta{S}=\frac{Q}{T}=\frac{3.34\times10^5\text{ J}}{T}\\$ where T is the melting temperature of ice. That is, T = 0ºC = 273 K.
So the change in entropy is $\begin{array}{lll}\Delta{S}&=&\frac{3.34\times10^5\text{ J}}{273\text{ K}}\\\text{ }&=&1.22\times10^3\text{ J/K}\end{array}\\$

Discussion

This is a significant increase in entropy accompanying an increase in disorder.

In another easily imagined example, suppose we mix equal masses of water originally at two different temperatures, say 20.0ºC and 40.0ºC. The result is water at an intermediate temperature of 30.0ºC. Three outcomes have resulted: entropy has increased, some energy has become unavailable to do work, and the system has become less orderly. Let us think about each of these results. First, entropy has increased for the same reason that it did in Example 3. Mixing the two bodies of water has the same effect as heat transfer from the hot one and the same heat transfer into the cold one. The mixing decreases the entropy of the hot water but increases the entropy of the cold water by a greater amount, producing an overall increase in entropy. Second, once the two masses of water are mixed, there is only one temperature—you cannot run a heat engine with them. The energy that could have been used to run a heat engine is now unavailable to do work. Third, the mixture is less orderly, or to use another term, less structured. Rather than having two masses at different temperatures and with different distributions of molecular speeds, we now have a single mass with a uniform temperature. These three results—entropy, unavailability of energy, and disorder—are not only related but are in fact essentially equivalent.

Life, Evolution, and the Second Law of Thermodynamics

Some people misunderstand the second law of thermodynamics, stated in terms of entropy, to say that the process of the evolution of life violates this law. Over time, complex organisms evolved from much simpler ancestors, representing a large decrease in entropy of the Earth’s biosphere.
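Both numerical claims in this passage are easy to verify: the 1.22 × 10^3 J/K from melting the ice, and the statement that mixing lowers the hot water's entropy by less than it raises the cold water's. A Python sketch, assuming standard textbook values for water's latent heat of fusion (334 kJ/kg) and specific heat (4186 J/(kg·K)):

```python
import math

# Part 1: entropy of melting 1.00 kg of ice at 273 K (Example 3).
m = 1.00          # kg
L_f = 334e3       # J/kg, latent heat of fusion of water
T_melt = 273.0    # K
dS_melt = m * L_f / T_melt            # ≈ 1.22e3 J/K, as in the text

# Part 2: mix equal masses of 40.0 C and 20.0 C water, ending at 30.0 C.
# For a mass heated or cooled at constant pressure, dS = m c ln(T_final/T_initial).
c = 4186.0                            # J/(kg K), assumed specific heat of water
T_hot, T_cold, T_final = 313.15, 293.15, 303.15   # K

dS_hot = m * c * math.log(T_final / T_hot)    # negative: hot water loses entropy
dS_cold = m * c * math.log(T_final / T_cold)  # positive, and larger in magnitude

print(round(dS_melt), round(dS_hot, 1), round(dS_cold, 1), round(dS_hot + dS_cold, 1))
```

The net change `dS_hot + dS_cold` comes out positive (a few J/K), which is the overall entropy increase the text describes.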
It is a fact that living organisms have evolved to be highly structured, and much lower in entropy than the substances from which they grow. But it is always possible for the entropy of one part of the universe to decrease, provided the total change in entropy of the universe increases. In equation form, we can write this as ΔStot = ΔSsyst + ΔSenvir > 0. Thus ΔSsyst can be negative as long as ΔSenvir is positive and greater in magnitude. How is it possible for a system to decrease its entropy? Energy transfer is necessary. If I pick up marbles that are scattered about the room and put them into a cup, my work has decreased the entropy of that system. If I gather iron ore from the ground and convert it into steel and build a bridge, my work has decreased the entropy of that system. Energy coming from the Sun can decrease the entropy of local systems on Earth—that is, ΔSsyst is negative. But the overall entropy of the rest of the universe increases by a greater amount—that is, ΔSenvir is positive and greater in magnitude. Thus, ΔStot = ΔSsyst + ΔSenvir > 0, and the second law of thermodynamics is not violated. Every time a plant stores some solar energy in the form of chemical potential energy, or an updraft of warm air lifts a soaring bird, the Earth can be viewed as a heat engine operating between a hot reservoir supplied by the Sun and a cold reservoir supplied by dark outer space—a heat engine of high complexity, causing local decreases in entropy as it uses part of the heat transfer from the Sun into deep space. There is a large total increase in entropy resulting from this massive heat transfer. A small part of this heat transfer is stored in structured systems on Earth, producing much smaller local decreases in entropy. (See Figure 6.) Figure 6. Earth’s entropy may decrease in the process of intercepting a small part of the heat transfer from the Sun into deep space. 
Entropy for the entire process increases greatly while Earth becomes more structured with living systems and stored energy in various forms.

PhET Explorations: Reversible Reactions

Watch a reaction proceed over time. How does total energy affect a reaction rate? Vary temperature, barrier height, and potential energies. Record concentrations and time in order to extract rate coefficients. Do temperature dependent studies to extract Arrhenius parameters. This simulation is best used with teacher guidance because it presents an analogy of chemical reactions.

Section Summary

• Entropy is the loss of energy available to do work.
• Another form of the second law of thermodynamics states that the total entropy of a system either increases or remains constant; it never decreases.
• The change in entropy is zero in a reversible process and positive in an irreversible process.
• The ultimate fate of the universe is likely to be thermodynamic equilibrium, where the universal temperature is constant and no energy is available to do work.
• Entropy is also associated with the tendency toward disorder in a closed system.

Conceptual Questions

1. A woman shuts her summer cottage up in September and returns in June. No one has entered the cottage in the meantime. Explain what she is likely to find, in terms of the second law of thermodynamics.
2. Consider a system with a certain energy content, from which we wish to extract as much work as possible. Should the system’s entropy be high or low? Is this orderly or disorderly? Structured or uniform? Explain briefly.
3. Does a gas become more orderly when it liquefies? Does its entropy change? If so, does the entropy increase or decrease? Explain your answer.
4. Explain how water’s entropy can decrease when it freezes without violating the second law of thermodynamics. Specifically, explain what happens to the entropy of its surroundings.
5. Is a uniform-temperature gas more or less orderly than one with several different temperatures?
Which is more structured? In which can heat transfer result in work done without heat transfer from another system?
6. Give an example of a spontaneous process in which a system becomes less ordered and energy becomes less available to do work. What happens to the system’s entropy in this process?
7. What is the change in entropy in an adiabatic process? Does this imply that adiabatic processes are reversible? Can a process be precisely adiabatic for a macroscopic system?
8. Does the entropy of a star increase or decrease as it radiates? Does the entropy of the space into which it radiates (which has a temperature of about 3 K) increase or decrease? What does this do to the entropy of the universe?
9. Explain why a building made of bricks has smaller entropy than the same bricks in a disorganized pile. Do this by considering the number of ways that each could be formed (the number of microstates in each macrostate).

Problems & Exercises

1. (a) On a winter day, a certain house loses 5.00 × 10^8 J of heat to the outside (about 500,000 Btu). What is the total change in entropy due to this heat transfer alone, assuming an average indoor temperature of 21.0ºC and an average outdoor temperature of 5.00ºC? (b) This large change in entropy implies a large amount of energy has become unavailable to do work. Where do we find more energy when such energy is lost to us?
2. On a hot summer day, 4.00 × 10^6 J of heat transfer into a parked car takes place, increasing its temperature from 35.0ºC to 45.0ºC. What is the increase in entropy of the car due to this heat transfer alone?
3. A hot rock ejected from a volcano’s lava fountain cools from 1100ºC to 40.0ºC, and its entropy decreases by 950 J/K. How much heat transfer occurs from the rock?
4. When 1.60 × 10^5 J of heat transfer occurs into a meat pie initially at 20.0ºC, its entropy increases by 480 J/K. What is its final temperature?
5.
The Sun radiates energy at the rate of 3.80 × 10^26 W from its 5500ºC surface into dark empty space (a negligible fraction radiates onto Earth and the other planets). The effective temperature of deep space is −270ºC. (a) What is the increase in entropy in one day due to this heat transfer? (b) How much work is made unavailable?
6. (a) How much heat transfer occurs from 1.00 kg of water at 40.0ºC when it is placed in contact with 1.00 kg of 20.0ºC water in reaching equilibrium? (b) What is the change in entropy due to this heat transfer? (c) How much work is made unavailable, taking the lowest temperature to be 20.0ºC? Explicitly show how you follow the steps in the Problem-Solving Strategies for Entropy.
7. What is the decrease in entropy of 25.0 g of water that condenses on a bathroom mirror at a temperature of 35.0ºC, assuming no change in temperature and given the latent heat of vaporization to be 2450 kJ/kg?
8. Find the increase in entropy of 1.00 kg of liquid nitrogen that starts at its boiling temperature, boils, and warms to 20.0ºC at constant pressure.
9. A large electrical power station generates 1000 MW of electricity with an efficiency of 35.0%. (a) Calculate the heat transfer to the power station, Qh, in one day. (b) How much heat transfer Qc occurs to the environment in one day? (c) If the heat transfer in the cooling towers is from 35.0ºC water into the local air mass, which increases in temperature from 18.0ºC to 20.0ºC, what is the total increase in entropy due to this heat transfer? (d) How much energy becomes unavailable to do work because of this increase in entropy, assuming an 18.0ºC lowest temperature? (Part of Qc could be utilized to operate heat engines or for simply heating the surroundings, but it rarely is.)
10. (a) How much heat transfer occurs from 20.0 kg of 90.0ºC water placed in contact with 20.0 kg of 10.0ºC water, producing a final temperature of 50.0ºC?
(b) How much work could a Carnot engine do with this heat transfer, assuming it operates between two reservoirs at constant temperatures of 90.0ºC and 10.0ºC? (c) What increase in entropy is produced by mixing 20.0 kg of 90.0ºC water with 20.0 kg of 10.0ºC water? (d) Calculate the amount of work made unavailable by this mixing using a low temperature of 10.0ºC, and compare it with the work done by the Carnot engine. Explicitly show how you follow the steps in the Problem-Solving Strategies for Entropy. (e) Discuss how everyday processes make increasingly more energy unavailable to do work, as implied by this problem.

Glossary

entropy: a measure of a system’s disorder and of the unavailability of its energy to do work
change in entropy: the ratio of heat transfer to temperature, $\frac{Q}{T}\\$
second law of thermodynamics stated in terms of entropy: the total entropy of a system either increases or remains constant; it never decreases

Selected Solutions to Problems & Exercises

1. (a) 9.78 × 10^4 J/K; (b) In order to gain more energy, we must generate it from things within the house, like a heat pump, human bodies, and other appliances. As you know, we use a lot of energy to keep our houses warm in the winter because of the loss of heat to the outside.
3. 8.01 × 10^5 J
5. (a) 1.04 × 10^31 J/K; (b) 3.28 × 10^31 J
7. 199 J/K
9. (a) 2.47 × 10^14 J; (b) 1.60 × 10^14 J; (c) 2.85 × 10^10 J/K; (d) 8.29 × 10^12 J
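As a quick sanity check, Selected Solution 1(a) can be reproduced directly from ΔS = Q/T, converting both Celsius temperatures to kelvin; the house loses entropy at the indoor temperature while the colder outdoors gains a larger amount:

```python
# Problem 1(a): 5.00e8 J leaks from a 21.0 C house to 5.00 C outdoor air.
Q = 5.00e8                # J
T_in = 21.0 + 273.15      # K, indoor temperature
T_out = 5.00 + 273.15     # K, outdoor temperature

# The house loses entropy Q/T_in; the outdoors gains the larger Q/T_out.
dS_total = Q / T_out - Q / T_in
print(f"{dS_total:.3g} J/K")   # prints: 9.78e+04 J/K, matching the listed answer
```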
https://www.nextgurukul.in/wiki/concept/cbse/class-9/economics/food-security-in-india/food-insecure-groups-in-india/3959822
Notes On Food Insecure Groups in India - CBSE Class 9 Economics The economically backward states, the tribal and remote areas, and areas prone to natural disasters like droughts and floods have a higher percentage of people facing food insecurity. Hunger is both a cause and an effect of poverty, and indicates food insecurity. Hunger is of two types: chronic hunger and seasonal hunger. Chronic hunger is a result of a consistently low quantity and quality of diet. Seasonal hunger is a result of a low quantity and quality of diet for a short period of time. Both chronic and seasonal hunger have decreased in rural and urban India. Food security requires the elimination of present and future hunger. India has made rapid strides in attaining self-sufficiency in food and in providing food security to its large population. The introduction of modern farming methods brought about the Green Revolution in India, marked by a dramatic increase in the production of food grains. The success of the Green Revolution was not uniform across India. In the states of Punjab and Haryana, wheat production increased by more than four times from 1965 to 1995. The states of Tamil Nadu and Andhra Pradesh saw a significant rise in rice production. The states of Maharashtra, Madhya Pradesh, Bihar and Orissa, and the north-eastern states did not show any significant rise in food grain production.
https://iteffect.nl/240_2019-Jun-Sat.html
# Process limestone heated
• #### How Is Marble Formed From Limestone? Mar 30, 2020· Metamorphism occurs in limestone when the limestone is located near convergent plate boundaries or when it is heated by a nearby body of hot magma. Prior to the conversion to marble, the calcite in the limestone consists primarily of mineralized … • #### Marble: Metamorphic Rock: Pictures, Definition, Properties The transformation of limestone into marble usually occurs at convergent plate boundaries where large areas of Earth's crust are exposed to the heat and pressure of regional metamorphism. Some marble also forms by contact metamorphism when a hot magma body heats adjacent limestone or dolostone. This process also occurs at convergent plate • #### Limestone, scaling, fouling in water - Vesi Process Hard water creates white deposits (limestone) inside the pipes and also outside the pipes at the joints. This is why limestone is, after taste, the second most common reason for not drinking tap water. These disadvantages increase especially with heated water. Indeed, the rate of limestone deposits increases with the temperature. • #### The Cement Manufacturing Process - Advancing Mining Aug 20, 2015· Cement manufacturing is a complex process that begins with mining and then grinding raw materials that include limestone and clay, to a fine powder, called raw meal, which is then heated to a sintering temperature as high as 1450 °C in a cement kiln. • #### Caveman to Chemist Projects: Lime and Lye In ancient times this was done by simply heating crushed limestone in a bonfire in a process called simply "burning lime." Today, huge drums are used to hold the crushed limestone, which is heated by coal, oil, or natural gas, but the chemical process is essentially the same. Another name for "heating the bejeesus" out of something is calcination.
• #### The Cement Manufacturing Process - CMA India If the limestone used in the cement manufacturing process is of high grade then low-grade coal is used and vice versa. Cement Manufacturing Process. The cement manufacturing process starts with the mining of limestone that is excavated from open cast mines. Then this limestone is crushed to -80 mm size and is loaded in longitudinal stockpiles. • #### Production – EuLA: European Lime Association ‘Preheating zone’ – limestone is heated to approximately 800°C by direct contact with gases leaving the calcining zone. ‘Calcining zone’ – fuel is burnt in preheated air from the cooling zone. This produces heat at above 900°C and turns limestone into quicklime and CO2. • #### Calculating CO2 Emissions from the Production of Lime three step process: stone preparation, calcination, and hydration. Calcination is the process by which limestone, which is mostly calcium carbonate (CaCO3), is heated in a kiln to produce quicklime (CaO). Carbon dioxide is a byproduct of this reaction and is usually emitted to the atmosphere. • #### Limestone – Its Processing and Application in Iron and Jul 07, 2017· The limestone surface is to be heated to greater than 900 deg C to maintain the required temperature gradient and overcome the insulating effect of the calcined material in the limestone surface. However, when producing quicklime, the surface temperature must not exceed 1,100 deg C to 1,150 deg C as otherwise re-crystallization of the CaO • #### What is Quicklime? (with pictures) Feb 13, 2021· Quicklime is also known as burnt lime, a reference to its manufacturing process, or simply lime. To make it, limestone (CaCO3) is broken up and shoveled into a kiln, which is heated to very high temperatures. The high temperatures release carbon dioxide (CO2) from the stone, turning it into calcium oxide. After it is cooled, the compound can be • #### Stone Boiling is an Ancient Cooking Method Mar 03, 2019· The Benefits of Limestone Cookery.
A recent experimental study based on assumptions about American southwestern Basketmaker II (200–400 CE) stone boiling used local limestone rocks as heating elements in baskets to cook maize. Basketmaker societies did not have pottery containers until after the introduction of beans: but corn was an • #### Calcium carbonate - Essential Chemical Industry In the chemical industry, large quantities of limestone are heated to ca 1500 K to form calcium oxide, known as quicklime: Water can be added to lime to form calcium hydroxide. The process is known as 'slaking'. Solid calcium hydroxide is known as slaked lime or hydrated lime, and solutions and suspensions in water as milk of lime. • #### Blast furnace metallurgy Britannica Blast furnaces produce pig iron from iron ore by the reducing action of carbon (supplied as coke) at a high temperature in the presence of a fluxing agent such as limestone. Ironmaking blast furnaces consist of several zones: a crucible-shaped hearth at the bottom of the furnace; an intermediate zone called a bosh between the hearth and the stack; a vertical shaft (the stack) that extends from • #### Thermal Decomposition of Calcium Carbonate (solutions This activity illustrates some of the chemistry of limestone (calcium carbonate) and other materials made from it. Calcium carbonate is heated strongly until it undergoes thermal decomposition to form calcium oxide and carbon dioxide. The calcium oxide (unslaked lime) is dissolved in water to form calcium hydroxide (limewater). • #### Unit operation and Unit Process Difference between unit In the chemical process, limestone is heated and decomposes into lime, and carbon dioxide is released, which is a unit process. Difference between unit operation and unit process. Unit Operation. Processes in which only physical changes, and not chemical changes, take place are known as unit operations.
• #### How Cement Is Made The heated air from the coolers is returned to the kilns, a process that saves fuel and increases burning efficiency. After the clinker is cooled, cement plants grind it and mix it with small amounts of gypsum and limestone. Cement is so fine that 1 pound of cement contains 150 billion grains. • #### How does a Lime Kiln Work - Professional Manufacturer of Jul 30, 2019· In the calcining process, the partially burnt limestone will be burnt thoroughly. And usually the temperature in this stage is the highest in the lime kiln. In the cooling stage, the burnt limestone will be cooled down by the air so that it can be handled by conveyors and so on. In fact, except for these three stages, there are also crushing • #### What happens when limestone is heated? - Quora Limestone is chemically calcium carbonate. As dolomite it is a mixture of magnesium and calcium carbonate. Let’s stick to limestone, calcium carbonate. When heated it will decompose to form carbon dioxide and calcium oxide. • #### 11.17 Lime Manufacturing - US EPA 11.17.1 Process Description 1-5 Lime is the high-temperature product of the calcination of limestone. Although limestone deposits are found in every state, only a small portion is pure enough for industrial lime manufacturing. To be classified as limestone, the rock must contain at least 50 percent calcium carbonate. When the rock contains • #### 5.13: Industrial Chemical Reactions - The Solvay Process Mar 16, 2021· In addition to NaCl, the major consumable raw material in the Solvay process is calcium carbonate, CaCO 3, which is abundantly available from deposits of limestone. It is heated (calcined) $\ce{CaCO3 + heat \rightarrow CaO + CO2}$ to produce calcium oxide and carbon dioxide gas. • #### Calcination of Limestone – IspatGuru May 02, 2013· Hence, the process depends on an adequate firing temperature of at least more than 800 deg C in order to ensure decomposition and a good residence time, i.e.
ensuring that the lime/limestone is held for a sufficiently long period at temperatures of 1,000 deg C to 1,200 deg C to control its reactivity. • #### An environment-friendly process for limestone calcination Dec 10, 2019· Through the gas-solid direct heat transfer, the limestone is heated to the reaction temperature. Limestone calcination in the process occurs in a pure CO2 environment. To achieve efficient calcination, the CO2 pressure and temperature need to … • #### What is Hydrated Lime? (with pictures) Jan 26, 2021· In this process, limestone is first broken up to reduce its size. Then it is washed and taken to kilns to be heated through a three step process: preheating, calcining, and cooling. Once cooled, the quicklime is crushed and then water is added. • #### Chapter 7 Historical Overview of Lime Burning from limestone. Lime is made by the process of calcining limestone, that is, burning the limestone without fusing (melting) it. Pure lime (quicklime, burnt lime, caustic lime) is composed of calcium oxide. When treated with water, lime gives off heat, forming calcium hydroxide, and is sold commercially as slaked (or hydrated) lime. • #### process of thermal decomposition of limestone Calcium carbonate - The Essential Chemical Industry. Limestone and chalk are both forms of calcium carbonate and dolomite is a . is decomposed to quicklime at gas temperatures of 1500 K, a process known as just above the burning zone and the limestone absorbs most of the heat released
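Several of the excerpts above quote the same calcination reaction, CaCO3 → CaO + CO2. Its mass balance follows from stoichiometry alone. A rough Python sketch, assuming (hypothetically) a feed of pure calcium carbonate; the EPA excerpt notes that real limestone need only be 50 percent CaCO3:

```python
# Mass balance for calcination, CaCO3 -> CaO + CO2, per tonne of feed.
# Assumes pure calcium carbonate; real limestone is less pure.
M_CaCO3 = 100.09   # g/mol
M_CaO = 56.08      # g/mol
M_CO2 = 44.01      # g/mol

feed_kg = 1000.0                      # 1 tonne of limestone
mol = feed_kg * 1000 / M_CaCO3        # moles of CaCO3 in the feed
quicklime_kg = mol * M_CaO / 1000     # ~560 kg of CaO produced
co2_kg = mol * M_CO2 / 1000           # ~440 kg of CO2 driven off

print(round(quicklime_kg), round(co2_kg))  # prints: 560 440
```

Roughly 44% of the feed mass leaves as CO2, which is why lime kilns are such large point sources of carbon dioxide.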
https://mathoverflow.net/questions/352727/set-of-orthogonal-complements-to-open-set-in-grk-mathbbcn-open-in-grn
# Set of orthogonal complements to open set in $Gr(k,\mathbb{C}^n)$ open in $Gr(n-k,\mathbb{C}^n)$? $$\DeclareMathOperator\Gr{Gr}$$Consider $$\mathbb{C}^n$$ endowed with the Hermitian inner product $$\langle u,v\rangle=u^*v$$, and let $$U \subseteq \Gr(k,\mathbb{C}^n)$$ be a Zariski open dense subset of the Grassmannian of $$k$$ planes in $$\mathbb{C}^n$$. Is the set \begin{align} V=\{u^{\perp} | u \in U\}\subseteq \Gr(n-k,\mathbb{C}^n) \end{align} of orthogonal complements (under $$\langle\cdot,\cdot\rangle$$) open dense in $$\Gr(n-k,\mathbb{C}^n)$$? Or does it at least contain an open dense subset of $$\Gr(n-k,\mathbb{C}^n)$$? If the bijection $$\Gr(k,\mathbb{C}^n)\leftrightarrow \Gr(n-k,\mathbb{C}^n)$$ given by $$u \leftrightarrow u^\perp$$ were an isomorphism of algebraic varieties then this would be obvious, but unfortunately it appears to only be an isomorphism when these are viewed as varieties over the reals. Another idea is to somehow use Chevalley's theorem, although this result doesn't seem to hold over the reals. • Judging by the question you link to, your $u^{\perp}$ is orhogonality with respect to the Hermitian inner product. If it were orthogonality by the standard complex-linear inner product, then you would have an isomorphism of varieties as described that question. But the Hermitian orthogonal complement is the composition of the complex linear orthogonal complement and complex conjugation! Both are automorphisms of the Zariski topology, so the answer is yes! – David E Speyer Feb 14 at 22:13 • @DavidESpeyer Ahh... so simple. Thank you! – doremifasolatido Feb 15 at 15:14
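The observation in the accepted comment can also be checked numerically: the Hermitian orthogonal complement of a plane coincides with the complex conjugate of its complex-bilinear orthogonal complement. A small NumPy sketch verifying this for a random plane (the dimensions, seed, and tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
# Columns of A span a random k-plane in C^n.
A = rng.normal(size=(n, k)) + 1j * rng.normal(size=(n, k))

def null_space(M, tol=1e-10):
    """Orthonormal basis of {v : M v = 0}, via the SVD."""
    _, s, Vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vh[rank:].conj().T

herm = null_space(A.conj().T)   # Hermitian complement: {v : a* v = 0 for columns a}
bilin = null_space(A.T)         # complex-bilinear complement: {v : a^T v = 0}

# Compare the two (n-k)-planes via their orthogonal projectors: the Hermitian
# complement should equal the entrywise conjugate of the bilinear one.
P_herm = herm @ herm.conj().T
B = bilin.conj()                # conjugating preserves orthonormality of columns
P_conj = B @ B.conj().T
assert np.allclose(P_herm, P_conj)
```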
2020-02-23 18:01:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970416247844696, "perplexity": 235.27196044724164}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145818.81/warc/CC-MAIN-20200223154628-20200223184628-00393.warc.gz"}
http://mathhelpforum.com/discrete-math/188436-predicate-array.html
# Math Help - Predicate and array 1. ## Predicate and array Hi Is it possible to have a predicate like do(I,O) with truth value true or false where I = {i1,i2,i3} meaning it is a set of inputs? Thanks 2. ## Re: Predicate and array You can define almost anything. Whether it makes sense or is interesting depends on the particular subject. You can define a predicate whose first argument is a set, for example, if you are dealing with arrays, but more often the first argument is some element of the set of all possible inputs.
2015-01-30 03:19:19
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072218537330627, "perplexity": 805.5383522799989}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115927113.72/warc/CC-MAIN-20150124161207-00038-ip-10-180-212-252.ec2.internal.warc.gz"}
https://support-content-staging.sandbox.google.com/docs/answer/9982776?hl=en&ref_topic=3105600
# BINOM.DIST.RANGE Returns the probability of drawing a specific number of successes or range of successes given a probability and number of tries. ### Parts of a BINOM.DIST.RANGE function BINOM.DIST.RANGE(num_trials, prob_success, num_successes, max_num_successes) Part Description num_trials The number of independent trials. Must be greater than or equal to 0. prob_success The probability of success in any given trial. Must be between 0 and 1, both exclusive. num_successes The number of successes for which to calculate the probability in `num_trials` trials. Must be between 0 and num_trials, both exclusive. max_num_successes Optional: The maximum number of successes for which to calculate the  probability in `num_trials` trials. If omitted, then we compute the probability of  just `num_successes`. Must be between num_successes and num_trials, both exclusive. ### Notes • If any arguments does not meet its constraints, this function returns a #NUM! error value. • If any argument is non-numeric, this functions returns a #VALUE! error value. • Except for prob_success, this function truncates any numerical argument to an integer. ### Examples A B 1 Function input Function output 2 =BINOM.DIST.RANGE(100, 0.5, 45) 0.04847429663 3 =BINOM.DIST.RANGE(100, 0.5, 30, 45) 0.1840847287 4 =BINOM.DIST.RANGE(100, 0.5, 30) 0.00002317069058 ### Related functions • BINOM.DIST/BINOMDIST
2022-11-27 02:14:27
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8733598589897156, "perplexity": 3545.1775408583308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710155.67/warc/CC-MAIN-20221127005113-20221127035113-00206.warc.gz"}
https://hackaday.com/2017/04/13/say-it-with-me-root-mean-square/
# Say It With Me: Root-Mean-Square If you measure a DC voltage, and want to get some idea of how “big” it is over time, it’s pretty easy: just take a number of measurements and take the average. If you’re interested in the average power over the same timeframe, it’s likely to be pretty close (though not identical) to the same answer you’d get if you calculated the power using the average voltage instead of calculating instantaneous power and averaging. DC voltages don’t move around that much. Try the same trick with an AC voltage, and you get zero, or something nearby. Why? With an AC waveform, the positive voltage excursions cancel out the negative ones. You’d get the same result if the flip were switched off. Clearly, a simple average isn’t capturing what we think of as “size” in an AC waveform; we need a new concept of “size”. Enter root-mean-square (RMS) voltage. To calculate the RMS voltage, you take a number of voltage readings, square them, add them all together, and then divide by the number of entries in the average before taking the square root: $\sqrt{\frac{1}{n} \left(v_1^2 + v_2^2 +...+ v_n^2\right)}$. The rationale behind this strange averaging procedure is that the resulting number can be used in calculating average power for AC waveforms through simple multiplication as you would for DC voltages. If that answer isn’t entirely satisfying to you, read on. Hopefully we’ll help it make a little more sense. ## Necessity When it comes to averages, the ideas of “big” and “little” for AC and DC voltages are fundamentally different. DC waveforms are roughly constant, and what matters is the distance from zero. AC waveforms are always wiggling around a center point, and this is often ground. If the waveform is symmetric, and you take enough samples, it’s going to average out to zero. One way to measure the size of AC voltages is to take the maximum and minimum over time: the peak-to-peak voltage. 
Another possibility would be to take the absolute value of each voltage and average them together. That works too. A third choice is to square all of the individual voltage measurements before adding them up. This has the same effect as taking the absolute value — all of the individual terms are positive now and don’t cancel out — and has the additional side-effect of making the big values bigger and the small values smaller. Which do we choose? ## Physics Using the squared voltages in the average gets the physics right. If you’re interested in the power that you can get out of the AC signal, it’s the squares of the voltage that are relevant anyway. Let’s pretend you’re driving a resistive load for now — maybe you’re heating your apartment or using an electric stove — and do a tiny bit of algebra. Remember that power is equal to the current flowing through our imaginary device times the voltage being dropped across it: P = IV. And who could forget Ohm’s Law? V = IR or I = V / R. Put them together, and P = V² / R. The power in the system, at any given instant, is proportional to the voltage squared. The average power over time is thus proportional to the average of the squared voltages. Sounding familiar? Since the average of squared instantaneous voltages is in units of volts squared, taking the square root at the end (“root of the mean of the squares”) brings it on home. The same logic holds for RMS current measurements as well. Substituting Ohm’s Law the other way, you get P = I² R and power is proportional to current squared. Average current in a balanced AC waveform is zero, but RMS-averaged current, squared, is proportional to power. Again, the big takeaway is that RMS voltage is the measure of average AC voltage or current that lets you pretend it was a DC average to get the average power. By doing the squaring inside the average, you avoid voltages of opposite signs cancelling, and by taking the square root at the end, it gets the units right. 
If you have an AC voltage that’s riding on top of a DC component, the RMS value still delivers. In that case, the squared DC component adds up n times before dividing by n again, and you get something like this: $\sqrt{v_{dc}^2 + \frac{1}{n} \left(v_1^2 + v_2^2 +...+ v_n^2\right)}$, where v is just the pure AC voltage. ## Rules of Thumb One place you’ll see RMS voltages is in mains power. Indeed, the 120 V in the US (or 230 V in the EU) coming out of your walls right now is an RMS figure. For sine waves, like what you get from the electrical company, the peak voltage is a factor of sqrt(2) higher than the RMS voltage. The peak voltage in the States is something like 120 V * sqrt(2) = 170 V, and the peak-to-peak is 340 V. That’s 650 V peak-to-peak in Europe; yikes! This also means that if you’re lacking an RMS meter and need a quick-and-dirty estimate of something that’s sine-wave-like, you can take the amplitude and divide by 1.414, or take the peak-to-peak and divide by twice that. Another waveform you might care about is the PWM’ed square wave that we often use to drive motors from microcontrollers. Clearly, if you alternate between zero volts and twelve volts, it’s only supplying power to the motor when it’s at twelve volts. Correspondingly, you won’t be surprised to hear that the RMS voltage of a PWM waveform is the square root of the duty cycle times the on-voltage. Wikipedia has you covered for triangle waves and other funny waveforms. ## RMS Everywhere It turns out that you’re often concerned with squared quantities. Kinetic energy is proportional to speed squared, for instance, so RMS speed is used in calculating temperature from the average velocity of molecules in a gas. If you have a measurement procedure that may be right on average, but you’re worried about the spread of the results as well, you might like to minimize RMS error. The statistician’s concept of standard deviation is similar, with the average value subtracted off beforehand. 
You even calculate the hypotenuse of a triangle by the same procedure, just without dividing by n. (OK, that’s a stretch, but square roots of sums of squares are everywhere!) I’m going to leave it to the mathematical philosophers among you to duke it out in the comments as to why the L2 norm appears so often. For the electrical hackers out there, it’s enough to remember the Ohm’s law rationale: when you’re interested in power, you’re interested in squares. ## 23 thoughts on “Say It With Me: Root-Mean-Square” 1. forthprgrmr says: In the 70’s there was an interesting “battle” between dbx and Dolby. Dolby’s system had problems with compression/expansion on tape because the low frequencies would get shifted in phase by the recording/playback process. And Dolby used average, not RMS, detection. So expansion did not match compression. You need a system that ignores the phase shifting, which RMS does. We (dbx) used true RMS detection. And our VCAs were heads above the simple gain control circuit of the competition. But like many tech things – the better technology doesn’t always win. Of course that was another life ago, and I was just an employee (product engineer) at dbx. But fun times. 1. Wasn’t Dolby-processed audio more “listenable” without decoding than dbx? That is, a preencoded tape played in deck that didn’t support decoding. Or am I thinking of something else? Regardless, I remember thinking that dbx was superior. 2. Ren says: I recall being taught that the RMS of AC was the heating equivalent of DC. 1. That works too! Why? Because heat = power, and power is proportional to V^2 (or I^2), and RMS is an average of squared quantities. (Man, I could have saved a lot of typing…) 1. RÖB says: We were taught this very graphically. RMS is the *area* between a complex (or simple) waveform and the x axis. It is also called the Integral. The vector that is the tangent to the wave form at any specific point is called the Derivative. 
Now add a wave that is Proportional to the original wave and you have the components of PID. The RMS voltage was also called the DC effective voltage because it would have the same effect as that DC voltage into a DC load. The sqrt(2) or 1/sqrt(2) appears everywhere in linear power supply design or anything really to do with AC voltage / current / power from a sine wave. 1. Be careful about what you were taught, or what you remember. Most of the above is wrong. Here’s some hand-waving about the math that might make things clearer. Averages are a lot like integrals in that they’re sums. The average is taken over a discrete number of points, and the integral is the sum over _all_ of the infinitessimal points. In the average, you divide by N. In the integral, you multiply by the width of the infinitessimal, dx. One way of conceiving of integrals, actually, is the average value as the number of samples in the average goes to infinity, divided by the range. https://en.wikipedia.org/wiki/Riemann_integral The integral under a centered sine wave is zero, just like with the average. RMS is the integral of the squared value: sin(x)^2, which keeps the positive half-cycle from cancelling the negative. Just like the RMS average, it’s the squaring that works the magic. http://www.wolframalpha.com/input/?i=integrate+sin(x)+from+0+to+2*pi http://www.wolframalpha.com/input/?i=integrate+sin(x)%5E2+from+0+to+2*pi And that’s exactly where the 1/2 slash 1/sqrt(2) comes from that you’re used to seeing. The integral of sin^2 over [0,2pi) is pi. Divide that by the range (2pi), and you get 1/2. Square-root that, and there’s your factor in RMS voltage versus instantaneous voltage. (That only works when you’ve got a sine wave.) (This is stuff that didn’t make the cut for the article — already too mathy — but that might interest the interested reader.) I haven’t thought about the PID stuff, but it strikes me as only loosely related. 
Not sure if any of that is right or wrong, but my spider sense is tingling. 1. Ostracus says: The whole thread is full of win. 2. SomeBody says: I enjoy Elliot Williams articles a lot more than those by Al Williams. Based on the questionable quality of several of his own articles, I don’t think *Al* should be bragging about putting a professor in his place about RMS. In contrast, Elliot’s articles are usually thorough, well thought out, and informative. 1. MeinHack says: Well, I think we have identified Al’s professor. Nothing against Elliot, but I enjoy the articles that both of the Williams’s do. And I learn a lot from both of them and all the other guys too. I will say that a lot of the small posts from the blog are kind of dodgy for all of the writers, but that’s why I don’t have to read any of them that don’t look interesting to me. 3. That shoudl be P = i^2 / R in one of the grey little magic CSS box thingies…. 1. P = V^2/R P = I^2 x R P = IV and V = IR, so P = IxIxR = I^2 x R P = IV and I = V/R so P = V^2/R 1. What Steve said. P = IV , V = IR. P = I^2 R But thanks for the heads-up on the CSS magic box thingy. They make hell with fancy math formatting. Fixed. 1. Ostracus says: MathML would be nice. 2. rob says: Rather P = (I^2) * R: P = I*V and V =I*R so P = I*(I*R). Not [P = I^2^ R], wat? (V/R) noticed that too :P But it was a laughably long time before I realized the P was also the rate of heat energy getting wasted at every nonzero R during nonzero I. 4. I like those kind of articles, but if there is some math in it, please use also numbers and not only words. If it is math, don’t be afraid of use notations, I am sure that people will still read with the same enthusiasm if it is kept the current verbosity :) 1. RÖB says: That stifles me a bit. It easy to do things like n^2 or sqr(n) or sqrt(n) but when it comes to integrals and derivatives the keyboard is not a useful tool. 1. 
kaidenshi says: I mean that a basic every day keyboard doesn’t have the symbols on it to express more complex math. 1. I am sorry but this was only a choice of the author, not due to a lack of instruments, that is why my suggestion come from. They have a whole website platform, but even in a basic free WordPress you can write in Latex whatever you want. So, that is not an excuse :) And there are also other more stupid methods/arrangements… 2. And it does not exist a magic keyboard to write formulae, but math still exist… This site uses Akismet to reduce spam. Learn how your comment data is processed.
2019-08-18 07:09:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7289161682128906, "perplexity": 918.5887834356553}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313715.51/warc/CC-MAIN-20190818062817-20190818084817-00020.warc.gz"}
https://proofwiki.org/wiki/Definition:Congruence_(Geometry)
Definition:Congruence (Geometry) Definition In the field of Euclidean geometry, two geometric figures are congruent if they are, informally speaking, both "the same size and shape". That is, one figure can be overlaid on the other figure with a series of rotations, translations, and reflections. Specifically: all corresponding angles of the congruent figures must have the same measurement all corresponding sides of the congruent figures must be be the same length. Historical Note The symbol introduced by Gottfried Wilhelm von Leibniz to denote geometric congruence was $\simeq$. This is still in use and can still be seen, but is not universal. Also in current use is $\cong$.
2021-09-19 05:48:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111345410346985, "perplexity": 1031.0187500612774}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056711.62/warc/CC-MAIN-20210919035453-20210919065453-00494.warc.gz"}
https://stats.stackexchange.com/questions/171932/area-under-curve-roc-penalizes-somehow-models-with-too-many-explanatory-variable/172332
# Area Under Curve ROC penalizes somehow models with too many explanatory variables? I'm using Area Under Curve ROC as a performance measure of my classification algorithms (logistic regressions). Since I'm going to choose the model that maximize the Area Under Curve ROC, I would like to know if AUC penalizes somehow models with too many regressors (for example, like BIC information criterion). • No, it does not. AUC only assesses predictive performance and is completely agnostic to model complexity. – Marc Claesen Sep 10 '15 at 23:14 • Thanks Marc. So, what I don't understand is: I can choose between a set of 30 regressors, why AUC is maxim with a subset of just 8 regressors? I mean, if it does not penalize model complexity, then why it does not choose all the possible regressors? – Luca Dibo Sep 11 '15 at 6:59 • did you compute the AUC 'in sample' (i.e on the same data you used for estimating the logistic regression) of 'out-of-sample' (on other data than the data you used for estimating) ? id you compare the AUC using the same data ? – user83346 Sep 13 '15 at 10:02 • I estimate the logistic parameters in the training set (75% of the whole dataset) and then I compute the AUC in the test set (25%), and yes, I compare the AUC using the same data (the same test set). – Luca Dibo Sep 13 '15 at 10:22 • @LucaDibo the reason you are seeing AUC favor fewer regressors is not because of any special property of AUC. It is just because you are utilizing a train-test split. See my answer below. – Paul Sep 13 '15 at 14:08 You mention in the comments that you are computing the AUC using a 75-25 train-test split, and you are puzzled why AUC is maximized when training your model on only 8 of your 30 regressors. From this you have gotten the impression that AUC is somehow penalizing complexity in your model. In reality there is something penalizing complexity in your model, but it is not the AUC metric. It is the train-test split. 
Train-test splitting is what makes it possible to use pretty much any metric, even AUC, for model selection, even if they have no inherent penalty on model complexity. As you probably know, we do not measure performance on the same data that we train our models on, because the training data error rate is generally an overly optimistic measure of performance in practice (see Section 7.4 of the ESL book). But this is not the most important reason to use train-test splits. The most important reason is to avoid overfitting with excessively complex models. Given two models A and B such that B "contains A" (the parameter set of B contains that of A) the training error is mathematically guaranteed to favor model B, if you are fitting by optimizing some fit criterion and measuring error by that same criterion. That's because B can fit the data in all the ways that A can, plus additional ways that may produce lower error than A's best fit. This is why you were expecting to see lower error as you added more predictors to your model. However, by splitting your data into two reasonably independent sets for training and testing, you guard yourself against this pitfall. When you fit the training data aggressively, with many predictors and parameters, it doesn't necessarily improve the test data fit. In fact, no matter what the model or fit criterion, we can generally expect that a model which has overfit the training data will not do well on an independent set of test data which it has never seen. As model complexity increases into overfitting territory, test set performance will generally worsen as the model picks up on increasingly spurious training data patterns, taking its predictions farther and farther away from the actual trends in the system it is trying to predict. See for example slide 4 of this presentation, and sections 7.10 and 7.12 of ESL. If you still need convincing, a simple thought experiment may help. 
Imagine you have a dataset of 100 points with a simple linear trend plus gaussian noise, and you want to fit a polynomial model to this data. Now let's say you split the data into training and test sets of size 50 each and you fit a polynomial of degree 50 to the training data. This polynomial will interpolate the data and give zero training set error, but it will exhibit wild oscillatory behavior carrying it far, far away from the simple linear trendline. This will cause extremely large errors on the test set, much larger than you would get using a simple linear model. So the linear model will be favored by CV error. This will also happen if you compare the linear model against a more stable model like smoothing splines, although the effect will be less dramatic. In conclusion, by using train-test splitting techniques such as CV, and measuring performance on the test data, we get an implicit penalization of model complexity, no matter what metric we use, just because the model has to predict on data it hasn't seen. This is why train-test splitting is universally used in the modern approach to evaluating performance in regression and classification. There is a good reason why the regression coefficients in logistic regression are estimated by maximizing the likelihood or penalized likelihood. This leads to certain optimality properties. The concordance probability ($c$-index; AUROC) is a useful supplemental measure for describing the final model's predictive discrimination, but it is not sensitive enough for the use you envisioned nor would it lead to an optimal model. This is quite aside from the overfitting issue, which affects both the $c$-index and the (unpenalized) likelihood. The $c$-index can reach its maximum with a misleadingly small number of predictors, even though it does not penalize for model complexity, because the concordance probability does not reward extreme predictions that are "correct". 
$c$ uses only the rank order of predictions and not the absolute predicted values. $c$ is not sensitive enough to be used to compare two models. Seeking a model that does not use the entire list of predictors is often not well motivated. Model selection brings instability and extreme difficulty with co-linearities. If you want optimum prediction, using all candidate features and incorporating penalization will work best in most situations you are likely to encounter. The data seldom have sufficient information to allow one to make correct choices about which variables are "important" and which are worthless. • I estimate logistic parameters in the training set and then I compute the AUC in the test set. In this way I overcome the over fitting issue (I think). What I don't understand is the following: since I can choose between a set of 30 regressors, why AUC is maxim with a subset of just 8 regressors? I mean, if it does not penalize model complexity, then why it does not choose all the possible regressors? – Luca Dibo Sep 13 '15 at 13:18 • AUC should play absolutely no role in that process. Logistic regression is all about the likelihood (or deviance). You should be optimizing the deviance in the test sample. This assumes you have a huge training and a huge test sample otherwise split sample validation is unstable. I've expanded my answer to deal with your other question. – Frank Harrell Sep 13 '15 at 13:32 This should help clarify a few things, in as few words as possible: • AUC = measure of model's actual predictive performance • BIC = estimate of model's predictive performance Performance Measures, like AUC, are something you would use to evaluate a model's predictions on data it has never seen before. 
Information Criteria, like BIC, on the other hand, attempt to guess at how well a model would make predictions by using how well the model fit the training data AND the number of parameters used to make that fit as a penalty (using the number of parameters makes for better guesses). Simply put, BIC (and other information criteria), approximate what performance measures, like AUC, give you directly. To be more precise: • Information criteria attempt to approximate out-of-sample deviance using only training data, and make better approximations when accounting for the number of parameters used. • Direct performance measures, like deviance or AUC, are used to asses how well a model makes predictions on validation/test data. The number of parameters is irrelevant to them because they're illustrating performance in the most straightforward way possible. I thought the link between information criteria and performance measures was hard to understand at first, but it's actually quite simple. If you were to use deviance instead of AUC as a performance measure then BIC would basically tell you what deviance you could expect if you actually made predictions with your model, and then measured their deviance. This begs the question, why use information criteria at all? Well you shouldn't if you're just trying to build the most accurate model possible. Stick to AUC because models that have unnecessary predictors are likely to make worse predictions (so AUC doesn't penalize them per se, they just happen to have less predictive power). In logistic regression (I do it univariate for easier typing) you try to explain a binary outome $y_i \in \{0,1\}$ by assuming that it is the outcome of a Bernouilli random variable with a success probability $p_i$ that depends on your explanatory variable $x_i$, i.e. $p_i=P(y_i=1|_{x_i})=f(x_i)$, where $f$ is the logistic function: $f(x)=\frac{1}{1+e^{-(\beta_0+\beta_1 x)}}$. The parameters $\beta_i$ are estimated by maximum likelihood. 
This works as follows: for the $i$-th observation you observe the outcome $y_i$ and the success probability is $p_i=f(x_i)$, the probability to observe $y_i$ for a Bernouilli with success probability $p_i$ is $p_i^{y_i}(1-p_i)^{(1-y_i)}$. So, for all the observations in the sample, assuming independence between observations, the probability of observing $y_i, i=1,2, \dots n$ is $\prod_{i=1}^np_i^{y_i}(1-p_i)^{(1-y_i)}$. Using the above definition of $p_i=f(x_i)$ this becomes $\prod_{i=1}^nf(x_i)^{y_i}(1-f(x_i))^{(1-y_i)}=$. As the $y_i$ and $x_i$ are observed values, we can see this as a function of the unknown parameters $\beta_i$, i.e. $\mathcal{L}(\beta_0, \beta_1)=\prod_{i=1}^n\left(\frac{1}{1+e^{-(\beta_0+\beta_1 x_i)}}\right)^{y_i}\left(1-\frac{1}{1+e^{-(\beta_0+\beta_1 x_i)}}\right)^{(1-y_i)}$. Maximimum likelihood finds the values for $\beta_i$ that maximise $\mathcal{L}(\beta_0, \beta_1)$. Let us denote this maximum $(\hat{\beta}_0, \hat{\beta}_1)$, then the value of the likelihood in this maximum is $\mathcal{L}(\hat{\beta}_0, \hat{\beta}_1)$. In a similar way, if you would have used two explanatory variables $x_1$ and $x_2$, then the likelihood function would have had three parameters $\mathcal{L}'(\beta_0, \beta_1, \beta_2)$ and the maximum would be $(\hat{\beta}'_0, \hat{\beta}'_1, \hat{\beta}'_2)$ and the value of the likelihood would be $\mathcal{L}'(\hat{\beta}'_0, \hat{\beta}'_1, \hat{\beta}'_2)$. Obviously it would hold that $\mathcal{L}'(\hat{\beta}'_0, \hat{\beta}'_1, \hat{\beta}'_2) > \mathcal{L}(\hat{\beta}_0, \hat{\beta}_1)$, whether the incerase in likelihood is significant has to be 'tested' with e.g. a likelihood ratio test. So likelihood ratio tests allow you te 'penalize' models with too many regressors. This is not so for AUC ! In fact AUC does not even tell you whether your 'success probabilities' are well predicted ! 
If you take all possible couples $(i,j)$ where $y_i=1$ and $y_j=0$, then AUC will be equal to the fraction of all these couples that have $p_i > p_j$. So AUC has to do with (1) how good your model is at distinguishing between '0' and '1' (it tells you about couples with one 'zero' and one 'one'); it does not say anything about how good your model is at predicting the probabilities! And (2) it is only based on the 'ranking' ($p_i > p_j$) of the probabilities. If adding one explanatory variable does not change anything in the ranking of the probabilities of the subjects, then AUC will not change by adding that explanatory variable.

So the first question you have to ask is what you want to predict: do you want to distinguish between zeroes and ones, or do you want to have 'well predicted probabilities'? Only after you have answered this question can you look for the most parsimonious technique. If you want to distinguish between zeroes and ones then ROC/AUC may be an option; if you want well predicted probabilities you should take a look at Goodness-of-fit test in Logistic regression; which 'fit' do we want to test?.

As Marc said, AUC is only a measure of performance, just like misclassification rate. It does not require any information about the model. Conversely, BIC and AIC need to know the number of parameters of your model to be evaluated. There is no good reason, if all of your predictors are relevant, that the misclassification rate or the AUC should improve when removing variables. However, it is quite common that combining a learning algorithm, an importance measure of the variables and variable selection (based on the importance the algorithm grants them) will perform better than fitting the model on the whole data set. You have an implementation of this method for Random Forests in the R RFauc package.
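The pair-counting definition of AUC described above is easy to verify directly (a sketch of mine; the data are made up): count, over all couples with $y_i=1$ and $y_j=0$, the fraction where the '1' gets the higher predicted probability, and note that any strictly increasing transformation of the probabilities, i.e. anything that leaves the ranking unchanged, leaves AUC unchanged.

```python
def pairwise_auc(probs, labels):
    """AUC as the fraction of (positive, negative) couples ranked correctly;
    ties count as half, the usual convention."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    wins = sum(1.0 if pi > pj else 0.5 if pi == pj else 0.0
               for pi in pos for pj in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
probs = [0.9, 0.8, 0.7, 0.85]
auc = pairwise_auc(probs, labels)      # 3 of the 4 couples are ranked correctly -> 0.75
monotone = [p ** 2 for p in probs]     # changes the probability values, not the ranking
print(auc, pairwise_auc(monotone, labels))
```

The second call returns the same AUC even though the 'probabilities' themselves are now badly calibrated, which is exactly the point made above.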
https://www.ademcetinkaya.com/2022/12/adh-adairs-limited_28.html
Outlook: ADAIRS LIMITED assigned short-term Ba1 & long-term Ba1 estimated rating. Dominant Strategy : Sell Time series to forecast n: 28 Dec 2022 for (n+16 weeks) Methodology : Statistical Inference (ML) ## Abstract The aim of this study is to evaluate the effectiveness of using external indicators, such as commodity prices and currency exchange rates, in predicting movements. The performance of each technique is evaluated using different domain specific metrics. A comprehensive evaluation procedure is described, involving the use of trading simulations to assess the practical value of predictive models, and comparison with simple benchmarks that respond to underlying market growth.(Kim, K.J. and Han, I., 2000. Genetic algorithms approach to feature discretization in artificial neural networks for the prediction of stock price index. Expert systems with Applications, 19(2), pp.125-132.) We evaluate ADAIRS LIMITED prediction models with Statistical Inference (ML) and Chi-Square1,2,3,4 and conclude that the ADH stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Sell ## Key Points 1. What is the use of Markov decision process? 2. Technical Analysis with Algorithmic Trading 3. 
Decision Making ## ADH Target Price Prediction Modeling Methodology We consider ADAIRS LIMITED Decision Process with Statistical Inference (ML) where A is the set of discrete actions of ADH stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4 F(Chi-Square)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Statistical Inference (ML)) X S(n):→ (n+16 weeks) $R=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)$ n:Time series to forecast j:Nash equilibria (Neural Network) k:Dominated move a:Best response for target price For further technical information as per how our model work we invite you to visit the article below: How do AC Investment Research machine learning (predictive) algorithms actually work? Sample Set: Neural Network Time series to forecast n: 28 Dec 2022 for (n+16 weeks) According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Sell X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.) Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.) Z axis (Grey to Black): *Technical Analysis% 1. There is a rebuttable presumption that unless inflation risk is contractually specified, it is not separately identifiable and reliably measurable and hence cannot be designated as a risk component of a financial instrument. However, in limited cases, it is possible to identify a risk component for inflation risk that is separately identifiable and reliably measurable because of the particular circumstances of the inflation environment and the relevant debt market 2. 
Accordingly the date of the modification shall be treated as the date of initial recognition of that financial asset when applying the impairment requirements to the modified financial asset. This typically means measuring the loss allowance at an amount equal to 12-month expected credit losses until the requirements for the recognition of lifetime expected credit losses in paragraph 5.5.3 are met. However, in some unusual circumstances following a modification that results in derecognition of the original financial asset, there may be evidence that the modified financial asset is credit-impaired at initial recognition, and thus, the financial asset should be recognised as an originated credit-impaired financial asset. This might occur, for example, in a situation in which there was a substantial modification of a distressed asset that resulted in the derecognition of the original financial asset. In such a case, it may be possible for the modification to result in a new financial asset which is credit-impaired at initial recognition. 3. For the purposes of applying the requirements in paragraphs 5.7.7 and 5.7.8, an accounting mismatch is not caused solely by the measurement method that an entity uses to determine the effects of changes in a liability's credit risk. An accounting mismatch in profit or loss would arise only when the effects of changes in the liability's credit risk (as defined in IFRS 7) are expected to be offset by changes in the fair value of another financial instrument. A mismatch that arises solely as a result of the measurement method (ie because an entity does not isolate changes in a liability's credit risk from some other changes in its fair value) does not affect the determination required by paragraphs 5.7.7 and 5.7.8. For example, an entity may not isolate changes in a liability's credit risk from changes in liquidity risk. 
If the entity presents the combined effect of both factors in other comprehensive income, a mismatch may occur because changes in liquidity risk may be included in the fair value measurement of the entity's financial assets and the entire fair value change of those assets is presented in profit or loss. However, such a mismatch is caused by measurement imprecision, not the offsetting relationship described in paragraph B5.7.6 and, therefore, does not affect the determination required by paragraphs 5.7.7 and 5.7.8. 4. The decision of an entity to designate a financial asset or financial liability as at fair value through profit or loss is similar to an accounting policy choice (although, unlike an accounting policy choice, it is not required to be applied consistently to all similar transactions). When an entity has such a choice, paragraph 14(b) of IAS 8 requires the chosen policy to result in the financial statements providing reliable and more relevant information about the effects of transactions, other events and conditions on the entity's financial position, financial performance or cash flows. For example, in the case of designation of a financial liability as at fair value through profit or loss, paragraph 4.2.2 sets out the two circumstances when the requirement for more relevant information will be met. Accordingly, to choose such designation in accordance with paragraph 4.2.2, the entity needs to demonstrate that it falls within one (or both) of these two circumstances. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions ADAIRS LIMITED assigned short-term Ba1 & long-term Ba1 estimated rating. 
We evaluate the prediction models Statistical Inference (ML) with Chi-Square1,2,3,4 and conclude that the ADH stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period, the dominant strategy among neural network is: Sell Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementB1Baa2 Balance SheetCBaa2 Leverage RatiosB1C Cash FlowBaa2Ba2 Rates of Return and ProfitabilityB1Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 83 out of 100 with 629 signals. ## References 1. M. J. Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. CoRR, abs/1507.06527, 2015 2. E. Collins. Using Markov decision processes to optimize a nonlinear functional of the final distribution, with manufacturing applications. In Stochastic Modelling in Innovative Manufacturing, pages 30–45. Springer, 1997 3. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., When to Sell and When to Hold AQN Stock. AC Investment Research Journal, 101(3). 4. Lai TL, Robbins H. 1985. Asymptotically efficient adaptive allocation rules. Adv. Appl. Math. 6:4–22 5. Kallus N. 2017. Balanced policy evaluation and learning. arXiv:1705.07384 [stat.ML] 6. Çetinkaya, A., Zhang, Y.Z., Hao, Y.M. and Ma, X.Y., When to Sell and When to Hold AQN Stock. AC Investment Research Journal, 101(3). 7. Bastani H, Bayati M. 2015. Online decision-making with high-dimensional covariates. Work. Pap., Univ. Penn./ Stanford Grad. School Bus., Philadelphia/Stanford, CA
https://answers.ros.org/answers/270224/revisions/
# Revision history [back]

For me none of this worked; I always got an error like "field data[] must be an integer type" (or a float, if it's a float array, of course). The way I managed to send multiple values was to put one value per line, each preceded by a hyphen. Example:

    rostopic pub /modbus_wrapper/input std_msgs/Int32MultiArray "layout:
      dim:
      - label: ''
        size: 2
        stride: 0
      data_offset: 0
    data:
    - 0
    - 1
    - 3
    "
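If you need to build that argument from a script rather than typing it, the YAML literal can be generated programmatically (a sketch using only the Python standard library; the topic name and message type are taken from the example above, and `size` is filled in from the data length rather than left at 2):

```python
def int32_multiarray_yaml(data, label=""):
    """Build the YAML literal rostopic pub expects for std_msgs/Int32MultiArray,
    with one data value per line, each preceded by a hyphen."""
    lines = [
        "layout:",
        "  dim:",
        f"  - label: '{label}'",
        f"    size: {len(data)}",
        "    stride: 0",
        "  data_offset: 0",
        "data:",
    ]
    lines += [f"- {v}" for v in data]
    return "\n".join(lines)

def rostopic_pub_cmd(topic, msg_type, data):
    # Wrap the multi-line YAML in double quotes so the shell passes it as one
    # argument (assumes the payload itself contains no double quotes, as here).
    return f'rostopic pub {topic} {msg_type} "{int32_multiarray_yaml(data)}"'

print(rostopic_pub_cmd("/modbus_wrapper/input", "std_msgs/Int32MultiArray", [0, 1, 3]))
```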
https://math.stackexchange.com/questions/902905/improper-integral-int-01-left-left-frac1x-right-frac12-right-frac-log
Improper Integral $\int_0^1\left(\left\{\frac1x\right\}-\frac12\right)\frac{\log(x)}xdx$

My initial question was to find if this integral $$\int_0^1 \left(\left\{\frac 1x\right\}-\frac12\right)\frac{\log(x)}{x}dx$$ is convergent or divergent. ($\left\{\frac 1x\right\}$ is the fractional part of $\frac 1x$.)

My try:

\begin{align}\int_0^1\left(\left\{\frac 1x\right\}-\frac 12\right)\frac{\log(x)}{x} dx & =-\int_1^\infty (\left\{y\right\}-1/2)\frac{\log(y)}{y} dy \\ & = -\sum_{m=1}^{\infty} \int_{m}^{m+1} (\left\{y\right\}-1/2)\frac{\log(y)}{y} dy \\ & = -\frac14\sum_{m=1}^{\infty} \left(\log^2 (m+1)+\log^2(m)-2\int_0^1\log^2(x+m) dx \right) \\ &= ... \end{align}

Finally the integral is convergent since the series obtained is convergent. The curious thing is that Mathematica returns $0.\times 10^{-2}$ by numerical integration. Then my question is: Is this integral equal to zero?

• How did you do this: $\sum_{m=1}^{\infty} \int_{m}^{m+1} (\left\{y\right\}-1/2)\frac{\log(y)}{y} dx \\ = \frac14\sum_{m=1}^{\infty} \left(\log^2 (m+1)+\log^2(m)-2\int_0^1\log^2(x+m) dx \right)$? ................... I can't seem to replicate it – ShakesBeer Aug 19 '14 at 12:22
• @coolydudey60 There are some missing steps... $m$ is the integer part of $y$, $\left\{y\right\}=y-m$ and then you integrate from $y=0$ to $y=1$... – Cador Aug 19 '14 at 12:31
• I know that, I already tried what you said, but don't forget we integrate $y$ from $m$ to $m+1$, not $0$ to $1$ (did you forget to note another substitution?). I'm getting a different answer though – ShakesBeer Aug 19 '14 at 12:34
• Wolfram Alpha Estimate 1 shows that the sum converges, since each term is $O(\log m / m^2)$. And this one Wolfram Alpha Estimate 2 shows that the sum does not converge to zero. – Sungjin Kim Aug 19 '14 at 13:03
• @MhenniBenghorbal Is it without minus sign? I would be surprised if so. In my observation, it keeps decreasing. – Sungjin Kim Aug 19 '14 at 17:32

This integral is not equal to zero.
We may obtain the following closed form. \begin{align} \int_0^1 \left(\left\{\frac{1}{x}\right\}-\frac{1}{2}\right)\frac{\log(x)}{x} \mathrm{d}x & = \dfrac{\ln^2(2\pi)}{4}-\dfrac{\gamma^2}{4}+\dfrac{\pi^2}{48}-\dfrac{\gamma_1}{2}-1\tag1 \\\\ \end{align} where $\left\{x\right\}$ denotes the fractional part of $x$, $\gamma$ denotes the Euler–Mascheroni constant and where $\gamma_{1}$ denotes the Stieltjes constant defined by $$\gamma_{1} = \lim_{N \rightarrow \infty}\left(\sum_{k=1}^{N}\frac{\ln k}{k}-\frac{\ln^{2}N}{2} \right).$$ Consequently, we have the numerical evaluation: \begin{align} \int_0^1 \left(\left\{\frac{1}{x}\right\}-\frac{1}{2}\right)\frac{\log(x)}{x} \mathrm{d}x = \color{red}{0.00}31782279542924256050500... . \tag2 \end{align} Here is an approach. Step 1. Let $s$ be a complex number such that $0<\Re{s}<1$. Then $$\int_{0}^{1} x^{s-1}\left\{\frac{1}{x}\right\}\mathrm{d}x = -\frac{1}{1-s} -\frac{\zeta(s)}{s}\tag3$$ where $\left\{x\right\}$ denotes the fractional part of $x$ and where $\zeta$ denotes the Riemann zeta function. Proof. Let us assume that $0<\Re{s}<1$. We may write \begin{align} \int_{0}^{1} x^{s-1}\left\{\frac{1}{x}\right\}\mathrm{d}x & = \sum_{k=1}^{\infty} \int_{1/(k+1)}^{1/k} x^{s-1}\left\{\frac{1}{x}\right\}\mathrm{d}x \\ & = \sum_{k=1}^{\infty} \int_{k}^{k+1} \left\{x\right\} \frac{\mathrm{d}x}{x^{s+1}} \\ & = \sum_{k=1}^{\infty} \int_{k}^{k+1} (x-k) \frac{\mathrm{d}x}{x^{s+1}} \\ & = \sum_{k=1}^{\infty} \int_{0}^{1}\frac{v}{(v+k)^{s+1}}\mathrm{d}v \\ & = \sum_{k=1}^{\infty} \int_{0}^{1}\left(\frac{1}{(v+k)^{s}}-\frac{k}{(v+k)^{s+1}}\right)\mathrm{d}v \\ & = \sum_{k=1}^{\infty} \left.\left(\frac{1}{(-s+1)(v+k)^{s-1}} +\frac{k}{s(v+k)^s}\right) \right|_{0}^{1} \\ & = -\frac{1}{1-s}-\frac{\zeta(s)}{s}. \end{align} Step 2. We have $$\int_{0}^{1} x^{s-1}\left(\left\{\frac{1}{x}\right\}-\frac{1}{2}\right)\log(x)\mathrm{d}x = -\frac{1}{(1-s)^2} +\frac{1}{2s^2} +\frac{\zeta(s)}{s^2} -\frac{\zeta'(s)}{s}. 
\tag4$$ Using $(3)$, we readily get $$\int_{0}^{1} x^{s-1}\left(\left\{\frac{1}{x}\right\}-\frac{1}{2}\right)\mathrm{d}x = -\frac{1}{1-s}-\frac{1}{2s} -\frac{\zeta(s)}{s}$$ which we differentiate with respect to $s$ to obtain $(4)$. Step 3. For $s$ near $0$, we take into account the Taylor series expansion of the Riemann $\zeta$ function: \begin{align} & \zeta(s) =-\frac12-\dfrac{\ln(2\pi)}{2} s +\left(\dfrac{\gamma^2}{4}-\dfrac{\pi^2}{48}+\ln(2\pi)-\dfrac{\ln^2(2\pi)}{4}+\dfrac{\gamma_1}{2}\right)s^2+\mathcal{O}(s^3) \\& \zeta'(s) =-\dfrac{\ln(2\pi)}{2} +\left(\dfrac{\gamma^2}{2}-\dfrac{\pi^2}{24}+2\ln(2\pi)-\dfrac{\ln^2(2\pi)}{2}+\gamma_1\right)s+\mathcal{O}(s^2) \end{align} and upon letting $s$ tend to $0^+$ in $(4)$ we obtain $(1)$. Remark: A related result to $(3)$.

• +1. Very nice approach. I was trying to do something but I arrived at some digamma functions. Then I left because everything became quite ugly. – Felix Marin Dec 27 '14 at 1:49
• @Oliver Oloa can you please explain how you arrived at the second step in this integral $\sum_{k=1}^{\infty} \int_{1/(k+1)}^{1/k} x^{s-1}\left\{\frac{1}{x}\right\}\mathrm{d}x= \sum_{k=1}^{\infty} \int_{k}^{k+1} \left\{x\right\} \frac{\mathrm{d}x}{x^{s+1}}$ – Siddhartha May 19 '17 at 12:59
• @Lelouch.D.Light Sure. If you make $u=1/x$, then $x=1/u$, $dx=-du/u^2$ and $\int_{1/(k+1)}^{1/k} x^{s-1}\left\{\frac{1}{x}\right\}\mathrm{d}x=-\int_{k+1}^{k} 1/u^{s-1}\left\{u\right\}du/u^2=\int_k^{k+1}1/u^{s+1}\left\{u\right\}du$, let me know if it is OK. – Olivier Oloa May 19 '17 at 13:52
• oh nice got it , thanks – Siddhartha May 19 '17 at 15:00
• @OlivierOloa. The Taylor series expansion coefficients of the $\zeta$ function in a neighbourhood of zero — did you get them from Wolfram? – Kays Tomy Aug 20 '18 at 20:42

Here is an alternative approach similar to my solution of Closed form of integral over fractional part $\int_0^1 \left\{\frac{1}{2}\left(x+\frac{1}{x}\right)\right\}\,dx$.
The closed form expression of the integral is traced back to the asymptotic behaviour of $g(n) = \sum_{k=1}^n \log(k)^2$.

1. Calculation

Letting $x=1/y$, and splitting the resulting integration range into equidistant pieces from $k$ to $k+1$ ($k=1,2,3,...$), gives $$i:=\int_{0}^1 \frac{\log(x)}{x} \left(\left\{\frac{1}{x}\right\}-\frac{1}{2}\right)\,dx=-\int_{1}^\infty \frac{\log(y)}{y} \left(\{y\}-\frac{1}{2}\right)\,dy=\sum_{k=1}^\infty a_k$$ where $$a_k = -\int_0^1 \frac{\left(\xi -\frac{1}{2}\right) \log (k+\xi )}{k+\xi } \, d\xi \\ =-\frac{1}{2} \log ^2(k+1)+\frac{1}{4} \left(\log ^2(k+1)-\log ^2(k)\right)+\frac{1}{2} \left((k+1) \log ^2(k+1)-k \log ^2(k)\right)-((k+1) \log (k+1)-k \log (k))+1$$ Forming the partial sums of $a_k$, most of the terms telescope, with the result $$i_s(n) := \sum_{k=1}^n a_k = i_{s1}(n) + i_{s2}(n)$$ $$i_{s1}(n) = -(-n+(n+1) \log (n+1)-\frac{1}{4} \log ^2(n+1)-\frac{1}{2} (n+1) \log ^2(n+1))$$ $$i_{s2}(n) = -\frac{1}{2}g(n+1)$$ with $$g(n) = \sum_{k=1}^{n}(\log(k))^2$$ In order to find the asymptotic behaviour of $g(n)$, we notice first that $$\nu(n,x) := \sum_{k=1}^n k^x =H(n,-x)$$ where $H$ is the generalized harmonic number, is a generating function for our finite sum.
Its asymptotic expression is provided by Mathematica: $$\nu_a(n,x) = \left(\frac{-x^3+3 x^2-2 x}{720 n^3}+\frac{n}{x+1}+\frac{x}{12 n}+\frac{1}{2}\right) n^x+\zeta (-x)$$ Hence we have $$g_{a}(n) = \frac{\partial ^2 \nu_a(n,x)}{\partial x^2}\Big|_{x\to 0} \\ = \frac{1}{120 n^3}+2 \left(-\frac{1}{360 n^3}-n+\frac{1}{12 n}\right) \log (n)+2 n+\left(n+\frac{1}{2}\right) \log ^2(n)+\gamma _1+\frac{\gamma ^2}{2}-\frac{\pi ^2}{24}-\frac{1}{2} (\log (2)+\log (\pi ))^2$$ The asymptotics of $i_{s1}(n)$ is easily calculated: $$i_{s1}(n)\to \frac{5}{12 n^2}-\frac{5 \log (n)}{12 n^2}+n+\frac{1}{2} n \log ^2(n)+\frac{3 \log ^2(n)}{4}-n \log (n)+\frac{\log (n)}{n}-1$$ Finally, $$i_{s} = \lim_{n\to\infty} (i_{s1}(n)+i_{s2}(n)) \\=-\frac{\gamma_1}{2}-\frac{\gamma^2}{4}+\frac{\pi^2}{48}-1+\frac{1}{4} \log^2(2 \pi )= 0.0031782279542924256051$$ in agreement with the previously obtained result of Olivier Oloa.

2. Discussion

In the OP it was stated that Mathematica returns a strange result when the integral is calculated directly (I assume NIntegrate was used). I confirm that. So the representation as an infinite sum is better suited for numerical purposes. The following graph shows how the partial sums approach the limiting value with an increasing number of terms. The changing sign of the partial sums indicates that the spurious value $0$ can be understood as a consequence of insufficient numerical accuracy.
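As a cross-check of both answers, the closed form and the series can be evaluated with nothing more than the standard library (a sketch of mine; the values of $\gamma$ and $\gamma_1$ are hardcoded to their known decimal expansions):

```python
import math

# Known constants (hardcoded): the Euler-Mascheroni constant and the first Stieltjes constant.
GAMMA = 0.5772156649015329
GAMMA1 = -0.0728158454836767

def closed_form():
    """ln^2(2*pi)/4 - gamma^2/4 + pi^2/48 - gamma_1/2 - 1, i.e. equation (1)."""
    return (math.log(2 * math.pi) ** 2 / 4 - GAMMA ** 2 / 4
            + math.pi ** 2 / 48 - GAMMA1 / 2 - 1)

def a(k):
    """Exact a_k = -int_0^1 (xi - 1/2) log(k + xi)/(k + xi) dxi, via the antiderivative
    F(u) = u log u - u - (k + 1/2) log(u)^2 / 2 with u = k + xi."""
    def F(u):
        return u * math.log(u) - u - (k + 0.5) * math.log(u) ** 2 / 2
    return -(F(k + 1) - F(k))

partial = sum(a(k) for k in range(1, 20001))
print(closed_form())   # 0.003178227954...
print(partial)         # close to the closed form; the tail of the series is O(log N / N)
```

The slow $O(\log N/N)$ decay of the tail is consistent with the discussion above of why naive numerical evaluation near zero is misleading.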
https://forum.azimuthproject.org/plugin/ViewComment/14824
How can a phase reversal occur? Recall that the formulation for the standing wave equation in temporal frequency space is

$(-\omega^2+\omega_0^2)F(\omega) = Forcing(\omega)$

Note that the factor multiplying $F(\omega)$ changes sign at the resonant condition $\omega_0$, so the response is in phase with the forcing on one side of the peak and out of phase on the other. So what happens if a forcing is temporarily applied that is near the resonance condition but with a frequency on the side of the peak that has the opposite sign of the prevailing standing wave phase? I will assert that this may be enough to force the output to change sign, and that this would most likely occur at a zero crossing, where the impact would be strongest. I can easily test this out, but may have to add a stronger damping term to make sure that the disturbance can die out.
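A minimal numerical check of that sign flip (my own sketch, with arbitrary illustrative values): from the equation above, the steady-state response $F(\omega) = Forcing(\omega)/(\omega_0^2 - \omega^2)$ is positive below resonance and negative above it, which is the phase reversal of the response relative to the forcing.

```python
def steady_state_response(omega, omega0, forcing=1.0):
    """F(omega) = Forcing / (omega0^2 - omega^2); the sign encodes the relative phase."""
    return forcing / (omega0 ** 2 - omega ** 2)

omega0 = 1.0
below = steady_state_response(0.8 * omega0, omega0)  # driven below resonance: in phase
above = steady_state_response(1.2 * omega0, omega0)  # driven above resonance: out of phase
print(below, above)
```

Adding a damping term replaces the denominator with $\omega_0^2-\omega^2+i\gamma\omega$, which turns the abrupt flip into a continuous 180-degree phase shift across the peak.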
http://www.ck12.org/book/CK-12-Basic-Algebra-Concepts/r17/section/2.9/
When to Use the Distributive Property | CK-12 Foundation

# 2.9: When to Use the Distributive Property

Created by: CK-12

David and Denise are having an argument. David says that you can't use the Distributive Property to simplify the expression $\frac{4x + 5}{8}$, while Denise says that you can. Who do you think is right? After completing this Concept, you'll know when to use the Distributive Property to simplify expressions so that you can settle arguments such as these.

### Guidance

Identifying Expressions Involving the Distributive Property

The Distributive Property often appears in expressions, and many times it does not involve parentheses as grouping symbols. In a previous Concept, we saw how the fraction bar acts as a grouping symbol. The following example involves using the Distributive Property with fractions.

#### Example A

Simplify $\frac{9-6y}{3}.$

Solution: The denominator needs to be distributed to each part of the expression in the numerator. We can rewrite the expression so that we can see how the Distributive Property should be used:

$\frac{9-6y}{3}=\frac{1}{3}(9-6y)=\frac{1}{3}(9)-\frac{1}{3}(6y)=\frac{9}{3}-\frac{6y}{3}=3-2y.$

#### Example B

Simplify $\frac{2x+4}{8}.$

Solution: Think of the denominator as $\frac{1}{8}$: $\frac{2x+4}{8}= \frac{1}{8} (2x+4).$ Now apply the Distributive Property: $\frac{1}{8} (2x)+ \frac{1}{8}(4) = \frac{2x}{8} + \frac{4}{8}.$ Simplified: $\frac{x}{4} + \frac{1}{2}.$

Solve Real-World Problems Using the Distributive Property

The Distributive Property is one of the most common mathematical properties seen in everyday life. It crops up in business and in geometry. Anytime we have two or more groups of objects, the Distributive Property can help us solve for an unknown.

#### Example C

An octagonal gazebo is to be built as shown below. Building code requires five-foot-long steel supports to be added along the base and four-foot-long steel supports to be added to the roof-line of the gazebo. What length of steel will be required to complete the project?

Solution: Each side will require two lengths, one of five feet and one of four feet. There are eight sides, so here is our equation.

Steel required $= 8(4 + 5)$ feet.

We can use the Distributive Property to find the total amount of steel.

Steel required $= 8 \times 4 + 8 \times 5 = 32 + 40$ feet.

A total of 72 feet of steel is needed for this project.

### Guided Practice

Simplify $\frac{10x+8y-1}{2}.$

Solution: First we rewrite the expression so we can see how to distribute the denominator:

$\frac{10x+8y-1}{2}=\frac{1}{2}(10x+8y-1)=\frac{1}{2}(10x)+\frac{1}{2}(8y)-\frac{1}{2}(1)= 5x+4y-\frac{1}{2}$

### Practice

Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both.

CK-12 Basic Algebra: Distributive Property (5:39)

Use the Distributive Property to simplify the following expressions.

1. $(2 - j)(-6)$
2. $(r + 3)(-5)$
3. $6 + (x - 5) + 7$

Use the Distributive Property to simplify the following fractions.

1. $\frac{8x + 12}{4}$
2. $\frac{9x + 12}{3}$
3. $\frac{11x + 12}{2}$
4. $\frac{3y + 2}{6}$
5. $- \frac{6z - 2}{3}$
6. $\frac{7 - 6p}{3}$

In 10 – 17, write an expression for each phrase.

1. $\frac{2}{3}$ times the quantity of $n$ plus 16
2. Twice the quantity of $m$ minus 3
3. $-4x$ times the quantity of $x$ plus 2
4. A bookshelf has five shelves, and each shelf contains seven poetry books and eleven novels. How many of each type of book does the bookcase contain?
5. Use the Distributive Property to show how to simplify $6(19.99)$ in your head.
6. A student rewrote $4(9x + 10)$ as $36x + 10$. Explain the student's error.
8. Amar is making giant holiday cookies for his friends at school. He makes each cookie with 6 oz of cookie dough and decorates each one with macadamia nuts. If Amar has 5 lbs of cookie dough $(1 \ lb = 16 \ oz)$ and 60 macadamia nuts, calculate the following.
   1. How many (full) cookies can he make?
   2. How many macadamia nuts can he put on each cookie if each is supposed to be identical?

Basic 8, 9
Feb 24, 2012
Aug 21, 2014
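Outside the lesson itself, the worked examples above can be spot-checked numerically; here is a short Python sketch (the test values chosen for the variables are arbitrary):

```python
def close(a, b, tol=1e-9):
    """Floating-point comparison helper."""
    return abs(a - b) < tol

# Example A: (9 - 6y)/3 = 3 - 2y
assert all(close((9 - 6 * y) / 3, 3 - 2 * y) for y in (-2.0, 0.0, 3.5))

# Example B: (2x + 4)/8 = x/4 + 1/2
assert all(close((2 * x + 4) / 8, x / 4 + 1 / 2) for x in (-1.0, 0.25, 10.0))

# Guided Practice: (10x + 8y - 1)/2 = 5x + 4y - 1/2
assert all(close((10 * x + 8 * y - 1) / 2, 5 * x + 4 * y - 1 / 2)
           for x, y in ((1.0, 2.0), (-3.0, 0.5)))

# Example C: 8(4 + 5) = 8*4 + 8*5 = 72 feet of steel
assert 8 * (4 + 5) == 8 * 4 + 8 * 5 == 72

print("all distributive identities check out")
```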
https://www.physicsforums.com/threads/distribution-of-protons-in-nucleus.728452/
# Distribution of protons in nucleus

1. Dec 14, 2013

### gildomar

Is the most stable/likely configuration of protons in heavy nuclei that of being evenly distributed throughout the nucleus? As opposed to something like a spherical distribution?

2. Dec 15, 2013

### ChrisVer

Although it is weird talking about where protons are in nuclei, I'd guess they would prefer being somewhat near the center, where the electromagnetic repulsion is less... Of course things are more complicated... Also, what is the difference between the distributions you proposed? The one means homogeneous, the other means spherical :P The one doesn't cancel the other out.

3. Dec 15, 2013

### Bill_K

The distribution of protons in the nucleus is directly related to its charge distribution. Generally the distribution is uniform throughout most of the nucleus, while near the surface it tapers off.

4. Dec 15, 2013

### gildomar

@ChrisVer - I was thinking that the protons would be scattered uniformly throughout the nucleus, as opposed to something like them being mainly near the surface, like a shell.

@Bill_K - I was thinking it was something like that. Is that mostly due to the electromagnetic interaction between the protons, given that the strong force is relatively blind to the difference between protons and neutrons?

5. Dec 16, 2013

### K^2

Actually, if you are comparing protons and neutrons, you are going to find that it's neutrons that dominate the exterior. The reason is that if a proton can become a neutron and reduce the overall energy of the nucleus, it's going to do so via $\beta^+$ decay. So the most energetic proton will have roughly the same total energy as the most energetic neutron. And because protons have repulsion energy added in, the particles with the highest kinetic energy are neutrons, and so they can be found a bit further from the center of the nucleus.

You should also keep in mind that while the distributions are fairly uniform in the interior, they are also correlated.
Protons and neutrons like to hang out in pairs in the interior, and there is very good evidence for larger clusters.

6. Dec 16, 2013

### ChrisVer

Still this conversation scares me :p It's like we are dealing with protons the way people dealt with atoms before QM... You can't say where the protons or the neutrons are in the nucleus... What you can say is where they'd prefer to be... For example, even if the protons would prefer to be a little bit closer to the center (to avoid EM repulsion), they are also subatomic particles and they obey Heisenberg's uncertainty principle - the more localized, the more energetic...

I guess the best way to think of the nucleus is as a nucleonic soup with a fine homogeneity... Protons turn to neutrons and vice versa by a continuous exchange of pi mesons (that's one explanation of why the neutron becomes stable within the nucleus, since it always changes to a proton and a proton changes to a neutron, on strong-interaction characteristic times and thus faster than its weak-interaction decay).

7. Dec 16, 2013

### K^2

The pion exchange can only switch a proton's and a neutron's places. It can't change the total number of either. And since protons and neutrons are distributed to begin with, and one proton cannot be distinguished from any other proton, you can just think of that as a contribution to the propagator. It's kind of like color switching in the strong interaction. It's there, but you don't have to think about it.

As for picturing particles vs picturing a homogeneous soup, the former has certain advantages. Like I said, the distributions are correlated. Properly, you need to describe this with a multi-dimensional wave function. If we forget about all of the nuances of multi-particle theory, and just think about the N valence nucleons, your wave function is 3N-dimensional. This is very hard to picture. Instead, you can picture various arrangements of point particles in the nucleus, and think about the state being a superposition of these.
Just makes your brain hurt less. But yeah, it's all quantum.

8. Dec 16, 2013

### gildomar

@K^2: Thanks for clearing that up; what I was reading didn't really explain how the two of them were distributed. As for the protons and neutrons being correlated, is that something like the weak bonding of Cooper pairs in superconductors?

@ChrisVer: I realize that the discussion sounds like we're talking about the neutrons and protons in a classical sense, but it's a little easier to talk about them that way for the time being. But I did make sure to phrase the question at the beginning as the most likely place to find them (not where they actually are), in acknowledgement of both the uncertainty principle and the probability densities of their wavefunctions.

9. Dec 16, 2013

### K^2

No, that's quite different. Cooper pairs form from identical fermions due to interaction with the lattice. Because they are fermions, they cannot be in the same exact state, and in fact experience Pauli repulsion. As a result, a Cooper pair is a fairly "spread out" object. And not just in the sense of being delocalized: the expectation value for the distance between the two particles in a pair is rather large.

The pn pairs in a nucleus are "tight". Again, they are still delocalized as a pair, but the expectation of the distance between the two particles is small. This is only possible because the proton and neutron are distinguishable, and Fermi statistics does not apply. They can be in the same state, and because of isospin symmetry, they basically are.

Pretty much, the only significant similarity is that in both cases the pair has integer spin, and so behaves as a boson. Whether that last bit has any critical impact on how these pairs behave in a nucleus, I just don't know.
https://physics.stackexchange.com/questions/289088/how-do-annihilation-and-creation-operators-act-on-fermions
# How do annihilation and creation operators act on fermions?

I'm taking an introductory course in QFT. During quantization of the Dirac field, my textbook gives a lot of information on how annihilation and creation operators act on the vacuum, but nothing about how they act on non-vacuum states. I need these to compute

$$\int \frac{\mathrm d^3 p}{(2\pi)^3} \sum_s \left( {a^s_ {{\vec{p}}}}^\dagger a^s_ {{\vec{p}}} - {b^s_ {{\vec{p}}}}^\dagger b^s_ {{\vec{p}}} \right) |\vec{k},s \rangle,$$

where ${a^s_ {{\vec{p}}}}^\dagger, {b^s_ {{\vec{p}}}}^\dagger$ are the creation operators for fermions and anti-fermions respectively, and ${a^s_ {{\vec{p}}}},{b^s_ {{\vec{p}}}}$ are the annihilation operators for fermions and anti-fermions respectively. I have searched Google, but I couldn't find anything after about an hour of searching. Are you able to tell me how ${a^s_ {{\vec{p}}}}^\dagger, {b^s_ {{\vec{p}}}}^\dagger, {a^s_ {{\vec{p}}}},{b^s_ {{\vec{p}}}}$ act on non-vacuum states?

If you need to compute

$$\int \frac{d^3 p}{(2\pi)^3} \sum_s ( {a^s_ {{\vec{p}}}}^\dagger a^s_ {{\vec{p}}} - {b^s_ {{\vec{p}}}}^\dagger b^s_ {{\vec{p}}} ) |\vec{k},r \rangle,$$

you will need ${a^s_ {{\vec{p}}}}^\dagger a^s_ {{\vec{p}}}|\vec{k},r \rangle$ and ${b^s_ {{\vec{p}}}}^\dagger b^s_ {{\vec{p}}} |\vec{k},r \rangle$. Since you are dealing with Dirac fields, you get these using the anti-commutation relations (with the proper normalization factors - and I don't know which convention you are using):

$$\{{a^s_ {{\vec{p}}}},{a^r_ {{\vec{q}}}}^\dagger\}=\delta_{sr}\delta(\vec{p}-\vec{q}),\\ \{{b^s_ {{\vec{p}}}},{b^r_ {{\vec{q}}}}^\dagger\}=\delta_{sr}\delta(\vec{p}-\vec{q}),\\ \{{a^s_ {{\vec{p}}}},{b^r_ {{\vec{q}}}}^\dagger\}=\{{b^s_ {{\vec{p}}}},{a^r_ {{\vec{q}}}}^\dagger\}=0,$$

and knowing that ${a^s_ {{\vec{p}}}}|0\rangle={b^s_ {{\vec{p}}}}|0\rangle=0$. The answer then follows by the same procedure @flippiefanus used.
The basic procedure is as follows:

$$a_r(\mathbf{k}_1) |\mathbf{k}_2,s\rangle = a_r(\mathbf{k}_1) a_s^{\dagger}(\mathbf{k}_2) |0\rangle = \{a_r(\mathbf{k}_1), a_s^{\dagger}(\mathbf{k}_2) \}|0\rangle = |0\rangle (2\pi)^3 2\omega_1 \delta(\mathbf{k}_1-\mathbf{k}_2) \delta_{rs} ,$$

where $|\mathbf{k}_2,s\rangle$ is assumed to be a fermion state. For an anti-fermion state one would use the $b$-operators instead. The reason why one can express this in terms of the anti-commutator is that $a_r(\mathbf{k}_1) |0\rangle = 0$. The details of the final expression depend on the particular anti-commutation relation that you use. Here I've used a Lorentz covariant version.

• Thanks, are you also able to explain how $b_r(\vec{k}_1)$, $b_r^\dagger(\vec{k}_1)$ and $a_r^\dagger(\vec{k}_1)$ act on $| \vec{k}_2,s\rangle$, or just state the result? Thanks – Mikkel Rev Oct 27 '16 at 11:55
• And add $b_r^\dagger(\vec{k}) | 0 \rangle$ for completeness? :) – Mikkel Rev Oct 27 '16 at 12:13
• Perhaps you can add in your question the definitions for $a_s(\mathbf{k})$, $b_s(\mathbf{k})$, etc. – flippiefanus Oct 27 '16 at 13:00
• Thank you for your response. I added in the definitions as requested. – Mikkel Rev Oct 30 '16 at 13:49
• I offer bounty +50 for the answer now – Mikkel Rev Oct 30 '16 at 14:00

All you need is the (anti-)commutation relations and the definitions of the states in terms of creation operators acting on the vacuum state, e.g. for a state $|\psi\rangle$ of two particles:

$$c_k|\psi\rangle =c_k\left(\sum_{i<j}\psi_{ij}|i,j\rangle\right)= \sum_{i<j}\psi_{ij}c_k c_i^{\dagger}c_j^{\dagger}|0\rangle$$

Then anticommute $c_k$ with $c_i^{\dagger}$ and $c_j^{\dagger}$ until it hits the vacuum state and annihilates it.
$$\sum_{i<j}\psi_{ij}\left(\left[ c_k ,\, c_i^{\dagger}\right]_+ - c_i^{\dagger}c_k\right) c_j^{\dagger}|0\rangle=\sum_{i<j}\psi_{ij}\left(\left[ c_k ,\, c_i^{\dagger}\right]_+c_j^{\dagger} - c_i^{\dagger} \left[ c_k ,\, c_j^{\dagger}\right]_+ \right) |0\rangle = \sum_{i<j}\psi_{ij}\left(\left[ c_k ,\, c_i^{\dagger}\right]_+|j\rangle - \left[ c_k ,\, c_j^{\dagger}\right]_+ |i\rangle \right)$$

Result: The only thing you'll really need for this calculation is the definition of one-(anti-)particle states (given below) and the application of annihilation operators on those, given by

$$a_{\vec p_1}^{s_1} |\vec p_2, s_2;0,0\rangle =\delta_{s_1, s_2} \delta^3\left(\vec p_1 - \vec p_2\right) |0\rangle,\\b_{\vec q_1}^{r_1} |0,0;\vec q_2, r_2\rangle=\delta_{r_1, r_2} \delta^3\left(\vec q_1 - \vec q_2\right) |0\rangle.$$

Derivation: You were asking for the action of creation and annihilation operators on one-particle states, given by

$$|\vec p, s; \vec 0, 0\rangle = a_{\vec{p}}^{s\dagger}|0\rangle\\ |0,0;\vec p, s\rangle = b_{\vec{p}}^{s\dagger}|0\rangle.$$

It makes sense to also define the following two-particle states, which are only non-zero if all the $(\vec p_i, s_i)$ and $(\vec q_j, r_j)$ are respectively distinct:

$$|\vec p, s; \vec q, r\rangle = \frac{1}{2}\left(a_{\vec{p}}^{s\dagger}b_{\vec{q}}^{r\dagger}-b_{\vec{q}}^{r\dagger}a_{\vec{p}}^{s\dagger}\right)|0\rangle\\ |\vec p_1, s_1, \vec p_2, s_2;\vec 0,0\rangle = \frac{1}{2}\left(a_{\vec{p}_1}^{s_1\dagger}a_{\vec{p}_2}^{s_2\dagger}-a_{\vec{p}_2}^{s_2\dagger}a_{\vec{p}_1}^{s_1\dagger}\right)|0\rangle\\|\vec 0,0;\vec q_1, r_1, \vec q_2, r_2\rangle = \frac{1}{2}\left(b_{\vec{q}_1}^{r_1\dagger}b_{\vec{q}_2}^{r_2\dagger}-b_{\vec{q}_2}^{r_2\dagger}b_{\vec{q}_1}^{r_1\dagger}\right)|0\rangle$$

where we just decided to use an (anti)symmetrical definition - it is clear that, using the appropriate anticommutation relations, all of those states can be written without the difference of two terms.
Now, to find the action of those operators we are going to use the mentioned anticommutation relations

$$\{a_{\vec p}^s, a_{\vec q}^r\}=0 \qquad \{a_{\vec p}^{s\dagger}, a_{\vec q}^{r\dagger}\}=0\\ \{a_{\vec p}^s, a_{\vec q}^{r\dagger}\}=\delta^{rs} \delta^3(\vec p - \vec q)$$

and similarly for the $b$-operators. Also, every $b$ anticommutes with every $a$.

Note that the above states are adequately normalized, provided the vacuum $|0\rangle$ is:

$$\langle \vec p, s; \vec 0, 0|\vec q, r; 0, 0\rangle = \langle 0| a_{\vec{q}}^{r}a_{\vec{p}}^{s\dagger}|0\rangle\\ = \langle 0|\{a_{\vec{q}}^{r},a_{\vec{p}}^{s\dagger}\}|0\rangle\\=\delta^{rs} \delta^3(\vec p-\vec q)$$

From the fact that all $b$'s and $a$'s anticommute we can immediately derive

$$b_{\vec p}^s |\vec q, r;0,0\rangle = 0, \\a_{\vec p}^s |0,0;\vec q, r\rangle = 0.$$

Also, because the creation operators anticommute with themselves, we have

$$\left(a_{\vec p}^{s\dagger}\right)^2 = 0 =\left(b_{\vec p}^{s\dagger}\right)^2,$$

so that

$$a_{\vec p}^{s\dagger} |\vec p, s; 0, 0\rangle = 0 = b_{\vec p}^{s\dagger} |0,0;\vec p, s\rangle.$$

Of course, if we act with creation operators with different momenta and/or spins on the one-particle states, we are going to create the above two-particle (and particle-antiparticle) states. We can combine this with the last formula in the following way:

$$a_{\vec p_1}^{s_1\dagger} |\vec p_2, s_2;0,0\rangle = (1-\delta_{s_1, s_2}\delta_{\vec p_1, \vec p_2})|\vec p_1, s_1, \vec p_2, s_2; 0,0\rangle\\ b_{\vec p_1}^{s_1\dagger} |0,0;\vec p_2, s_2\rangle = (1-\delta_{s_1, s_2}\delta_{\vec p_1, \vec p_2})|0,0;\vec p_1, s_1, \vec p_2, s_2\rangle\\ a_{\vec p}^{s\dagger} |0,0;\vec q, r\rangle = |\vec p, s; \vec q, r\rangle\\ b_{\vec q}^{r\dagger} |\vec p, s;0,0\rangle = -|\vec p, s; \vec q, r\rangle$$

Now, the really interesting$^{1}$ thing happens if we annihilate a particle from the one-particle state (or an anti-particle from the one-anti-particle state).
$$a_{\vec p_1}^{s_1} |\vec p_2, s_2;0,0\rangle = a_{\vec p_1}^{s_1}a_{\vec p_2}^{s_2\dagger}|0\rangle \\=\{a_{\vec p_1}^{s_1}, a_{\vec p_2}^{s_2\dagger}\}|0\rangle \\=\delta_{s_1, s_2} \delta^3\left(\vec p_1 - \vec p_2\right) |0\rangle$$ and analogously $$b_{\vec q_1}^{r_1} |0,0;\vec q_2, r_2\rangle=\delta_{r_1, r_2} \delta^3\left(\vec q_1 - \vec q_2\right) |0\rangle$$
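Putting these pieces together answers the computation in the original question; a worked sketch, assuming $|\vec k, r; 0, 0\rangle$ is a one-fermion state and using the plain delta-function anticommutator quoted in the question (no $(2\pi)^3$ or $2\omega$ factors):

```latex
$$\begin{aligned}
\int \frac{d^3 p}{(2\pi)^3} \sum_s
  \left( a_{\vec p}^{s\dagger} a_{\vec p}^{s} - b_{\vec p}^{s\dagger} b_{\vec p}^{s} \right)
  |\vec k, r; 0, 0\rangle
&= \int \frac{d^3 p}{(2\pi)^3} \sum_s
   a_{\vec p}^{s\dagger}\, \delta_{s r}\, \delta^3(\vec p - \vec k)\, |0\rangle
&& \text{(the } b \text{-term vanishes on a fermion state)} \\
&= \frac{1}{(2\pi)^3}\, a_{\vec k}^{r\dagger} |0\rangle
 = \frac{1}{(2\pi)^3}\, |\vec k, r; \vec 0, 0\rangle .
\end{aligned}$$
```

So the one-fermion state is an eigenstate of this charge-like operator; running the same steps on a one-anti-fermion state flips the sign. The leftover $1/(2\pi)^3$ is a convention artifact — with a $(2\pi)^3\delta^3$-normalized anticommutator the operator simply counts fermions minus anti-fermions.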
https://stats.stackexchange.com/questions/413874/changing-a-conditional-probability-to-a-deterministic-function
# Changing a conditional probability to a deterministic function

Suppose that we have a conditional density function $$p(y|x;\theta^*)$$, where $$\theta^*$$ represents distribution parameters and is assumed to be deterministic. Is it possible to write this conditional density as a deterministic function of $$x$$ and $$\theta$$, where $$\theta$$ is a random variable independent of $$x$$? In other words,

$$y|x \sim p(y|x;\theta^*)$$

is equivalent to

$$y = g(x, \theta)$$
$$\theta \sim p(\theta)$$

Furthermore, is this representation unique?

For example, if $$y$$ has a Gaussian distribution with mean $$x$$ and s.d. $$\sigma^*$$, we can write $$y = x + \varepsilon,$$ where $$\varepsilon$$ has a Gaussian distribution with mean zero and s.d. $$\sigma^*$$.

My question might be related to the question discussed here.

• It is hard to call your $p$ a conditional pdf, because there is no additional random component ($x$ and $\theta^*$ are fixed parameters). – user158565 Jun 20 '19 at 2:55
• "σ has a Gaussian distribution with mean zero and s.d. σ" -- please don't use the same symbol for two completely different things. – Glen_b Jun 20 '19 at 6:06
• @Glen_b Thanks for pointing that out. I changed the s.d. to $\sigma^*$. – KRL Jun 20 '19 at 21:27
• It would have been much better to stick with statistical convention and leave the s.d. as $\sigma$ and change the variable to a more conventional symbol in such a context (like $\varepsilon$ or $\eta$ or $\zeta$ or $\xi$), so that you had something like "For example, if $y$ has a Gaussian distribution with mean $x$ and s.d. $σ$, we can write $y=x+\varepsilon$, where $\varepsilon$ has a Gaussian distribution with mean zero and s.d. $σ$". – Glen_b Jun 20 '19 at 22:28
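For the Gaussian example, the equivalence of the two representations is just the reparameterization $y = x + \sigma^* \varepsilon$ with $\varepsilon \sim \mathcal N(0,1)$, which is easy to check empirically; a minimal sketch (the concrete values $x = 2$, $\sigma^* = 0.5$ are illustrative, not from the question):

```python
import random
import statistics

random.seed(0)
x, sigma_star = 2.0, 0.5   # illustrative mean and s.d.
n = 100_000

# Representation 1: sample y directly from the conditional density p(y|x; sigma*).
ys_direct = [random.gauss(x, sigma_star) for _ in range(n)]

# Representation 2: deterministic g(x, eps) = x + sigma* * eps, with eps ~ N(0, 1).
ys_reparam = [x + sigma_star * random.gauss(0.0, 1.0) for _ in range(n)]

# Both samples should agree in distribution (here: matching mean and s.d.).
print(statistics.fmean(ys_direct), statistics.stdev(ys_direct))
print(statistics.fmean(ys_reparam), statistics.stdev(ys_reparam))
```

This reparameterized form is the idea behind the "reparameterization trick" used e.g. in variational autoencoders. As to uniqueness: the representation is not unique in general, since any $g$ and noise distribution producing the same conditional law work — for instance $g(x,u) = x + \sigma^* \Phi^{-1}(u)$ with $u \sim \mathrm{Uniform}(0,1)$ gives the same Gaussian.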
https://cs.stackexchange.com/questions/38318/is-it-a-problem-to-be-a-programmer-with-no-knowledge-about-computational-complex/38329
# Is it a problem to be a programmer with no knowledge about computational complexity?

I was assigned an exercise at my university. I took it home and tried to program an algorithm to solve it; it was something related to graphs, finding connected components, I guess. I did the most trivial thing that came into my mind and then showed it to my lecturer. After a brief look, he saw that the runtime complexity of my solution was unviable and showed me something more efficient. There is a tradition of programmers who have no idea what computational complexity is (I was one of those), so is it a problem if a programmer has no idea what computational complexity is?

• Moderator notice: please do not use comments for extended discussion or to post pithy answers. You may use the chat room to discuss this question; previous comments have been moved there. – Gilles 'SO- stop being evil' Feb 13 '15 at 18:34
• Your title says programmer, but your question says student. Generally 'programmer' implies 'professional programmer' - so are you asking if it's a problem to be a professional programmer without knowledge of computational complexity? Or whether it's okay for a programming student to not have that knowledge? The two are different questions, even if it turns out they have the same answer. – corsiKa Feb 13 '15 at 21:38

Yes, I would say knowing something about computational complexity is a must for any serious programmer. So long as you are not dealing with huge data sets you will be fine not knowing complexity, but if you want to write a program that tackles serious problems you need it.

In your specific case, your example of finding connected components might have worked for graphs of up to, say, $100$ nodes.
However, if you tried a graph with $100{,}000$ nodes, then your lecturer's algorithm would probably have managed that in 1 second, while your algorithm would have (depending on how bad the complexity was) taken 1 hour, 1 day, or maybe even 1 eternity.

A somewhat common mistake students make in our algorithms course is to iterate over an array like this:

    while array not empty
        examine first element of array
        remove first element from array

This might not be the most beautiful code, but in a complicated program something like this might show up without the programmer being aware of it. Now, what is the problem with this program? Suppose we run it on a data set of $100{,}000$ elements. Compared to the following program, the former program will run $50{,}000$ times slower.

    while array not empty
        examine last element of array
        remove last element from array

I hope you agree that having the knowledge to make your program run $50{,}000$ times faster is probably an important thing for a programmer. Understanding the difference between the two programs requires some basic knowledge about complexity theory and some knowledge about the particulars of the language you are programming in.

In my pseudocode language, "removing an element from an array" shifts all the elements to the right of the element being removed one position to the left. This makes removing the last element an $O(1)$ operation, since in order to do that we only need to interact with 1 element. Removing the first element is $O(n)$, since in order to remove the first element we need to shift all the other $n-1$ elements one position to the left as well.

A very basic exercise in complexity is to prove that the first program will do $\frac{1}{2}n^2$ operations while the second program uses only $n$ operations. If you plug in $n=100{,}000$ you will see one program is drastically more efficient than the other.
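The operation counts in the answer above can be checked directly; a minimal Python sketch (the shift counter models the pseudocode's array, where removing an element shifts everything to its right one position left):

```python
def consume(n, from_front):
    """Remove all n elements one by one; return how many element shifts occur.

    Removing an element shifts every element to its right one position left,
    so front removal costs (remaining - 1) shifts and back removal costs 0.
    """
    shifts = 0
    remaining = n
    while remaining > 0:
        if from_front:
            shifts += remaining - 1  # everything after the first element moves left
        remaining -= 1
    return shifts

n = 1_000
print(consume(n, from_front=True))   # n*(n-1)/2 = 499500 shifts, i.e. O(n^2)
print(consume(n, from_front=False))  # 0 shifts: each removal is O(1)
```

In CPython this is exactly the difference between `list.pop(0)` and `list.pop()`, which is why the front-removal loop gets dramatically slower as the list grows.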
This is just a toy example, but it already requires a basic understanding of complexity to tell the difference between the two programs, and if you are actually trying to debug/optimize a more complicated program that has this mistake, it takes an even greater understanding to find out where the bug is, because a mistake like removing an element from an array in this fashion can be hidden very well by abstractions in the code.

Having a good understanding of complexity also helps when comparing two approaches to solving a problem. Suppose you had come up with two different approaches to solving the connected components problem on your own: in order to decide between them, it would be very useful if you could (quickly) estimate their complexity and pick the better one.

• "So long as you are not dealing with huge data sets you will be fine not knowing complexity" This is often true, but not always so. For instance, an O(n!) algorithm will not be viable even for relatively small data sets. If you use an O(n!) algorithm where you could have used O(n^2), your program will take 36,288 times longer to execute on a data size of 10. On a data size of 20, you're looking at 2.4 quintillion operations. – reirab Feb 13 '15 at 16:30
• I think @reirab's example should be included in the answer. It is more dramatic and proves your point more decisively. And I personally have been bitten by such algorithms before I learned computational complexity. – Siyuan Ren Feb 15 '15 at 3:03
• I think there's a greater issue at play. If you simply don't know, you self-select into tasks where this is not needed. So nearly every question of the form "do I need to know X" ends up with: it might be useful. So irrespective of whether it's critical, it's still good to know, or it might come back to bite you in the end. – joojaa Feb 15 '15 at 6:40
• "Understanding the difference between the two programs requires some basic knowledge about complexity theory" -- I think for this particular example it doesn't.
You could profile it, observe that all the time is taken in "remove element", know (without understanding complexity theory) that removing the last element is faster than removing the first, make the change, and therefore speed up the program. The advantage of understanding complexity theory is that it lets you loosely quantify such problems without profiling them, so you can "prematurely" optimize. – Steve Jessop Feb 16 '15 at 10:26
• .. and in general I suspect that all or almost all practical examples can be solved, one by one, without reference to complexity theory. In this case, knowing that copying a lot of data is slower than not doing so isn't "complexity theory". But of course it's still useful in programming (and any profession) to have a good mental model of principles that commonly come up, because you can analyse, discuss and solve such problems routinely by principle instead of one at a time by ad hoc means. – Steve Jessop Feb 16 '15 at 10:28

This is a rebuttal of Tom van der Zanden's answer, which states that this is a must.

The thing is, most of the time, 50,000 times slower is not relevant (unless you work at Google, of course). If the operation you do takes a microsecond, or if your N is never above a certain threshold (a high portion of the coding done nowadays), it will NEVER matter. In those cases thinking about computational complexity will only make you waste time (and most likely money). Computational complexity is a tool to understand why something might be slow or scale badly, and how to improve it, but most of the time it is complete overkill. I've been a professional programmer for more than five years now and I've never found the need to think about computational complexity when looping inside a loop, O(M * N), because the operation is always really fast or M and N are so small.
There are far more important, more generally used, and harder things to understand for anyone doing programming jobs (threading and profiling are good examples in the performance area).

Of course, there are some things that you will never be able to do without understanding computational complexity (for example: finding anagrams in a dictionary), but most of the time you don't need it.

• To expand on your point, there are cases where too much emphasis on computational complexity can lead you astray. For example, there may be situations where a "better" algorithm is actually slower for small inputs. The profiler is the ultimate source of truth. – Kevin Krumwiede Feb 13 '15 at 22:44
• @Kevin Krumwiede, I completely agree with you that optimizing a sort for a trivial data set is overkill. But it also illustrates that having at least an understanding of complexity is still important. The understanding is what will lead you to make the decision that a bubble sort is appropriate as opposed to some other, more complex, algorithm. – Kent A. Feb 14 '15 at 16:22
• When you know the data set is small in all cases you can get away with this sort of thing. You have to be very careful of excess complexity in stuff called within loops, though--not long ago I cut a minute runtime to a second this way. I've also encountered an O(n^8) problem once (data validation). Lots of care got it down to 12 hours. – Loren Pechtel Feb 14 '15 at 20:11
• I've never found the need to think about computational complexity when looping inside a loop O(M * N) because always the operation is really fast or M and N are so small. – Ironically, the argument you give shows that you did think about computational complexity. You decided that it's not a relevant issue for what you are doing, and possibly rightfully so, but you are still aware of the existence of this issue, and if it would ever pose a problem, you could react to it before serious consequences happen at the user level.
– Wrzlprmft Feb 15 '15 at 19:09
• Premature optimization is the root of all evil, but premature pessimization is the root of at least a good deal of annoyed users. You may not need to be able to solve a recurrence relation, but if you are, at the very least, not capable of telling the difference between O(1), O(N) and O(N^2), especially when you're nesting loops, someone is going to have to clean up the mess later. Source: the messes I had to clean up later. A factor of 50,000 is so big that you had better know if you can still afford it later, when your inputs have grown. – Jeroen Mostert Feb 15 '15 at 23:12

I've been developing software for about thirty years, working both as a contractor and an employee, and I've been pretty successful at it. My first language was BASIC, but I quickly taught myself machine language to get decent speed out of my underpowered box. I have spent a lot of time in profilers over the years and have learned a lot about producing fast, memory-efficient, optimized code. Needless to say, I'm self-taught. I never encountered the O notation until I started interviewing a few years ago. It's never come up in my professional work EXCEPT during interviews. So I've had to learn the basics just to handle that question in interviews.

I feel like the jazz musician who can't read sheet music. I can still play just fine. I know about hashtables (heck, I invented hashtables before I learned that they had already been invented) and other important data structures, and I might even know some tricks that they don't teach in school. But I think the truth is that if you want to succeed in this profession, you will either need to go indie or learn the answers to the questions that they will ask during interviews.

Incidentally, I most recently interviewed for a front end web developer role. They asked me a question where the answer required both a knowledge of computational complexity and logarithms.
I managed to remember enough math from twenty years ago to answer it more or less correctly, but it was a bit jarring. I've never had to use logarithms in any front end development. Good luck to you!

• So, your answer is "yes"? – Raphael Feb 13 '15 at 16:42
• TL;DR: "yes". However, in my experience you're not going to be talking about computational complexity in most jobs after you're hired. Yes, know your data structures and their performance, but just knowing that an algorithm is O(n) or whatever does not a good programmer make. It's much better to focus on writing good code quickly and then optimizing the hot spots later. Readability and maintainability are usually more important for most code than performance. – Scott Schafer Feb 13 '15 at 17:23
• I think it may happen that complexity comes up in a corporate setting, but the first real concern for companies is shipping: if it works, it's good enough until there's available budget to improve the app, or a customer comes back to complain about poor performance. In b2b situations for ad-hoc projects, it's probably quite uncommon. In b2c, or in highly competitive markets (off-the-shelf products), it would probably come up more often, with the direct effect of raising the entry bar for new hires. – didierc Feb 13 '15 at 17:46
• @didierc "Good enough" is also what breaks things all the time. – Raphael Feb 13 '15 at 18:26
• @didierc 1) Well, people with solid backgrounds in CS do (hopefully) have a good intuition for what is correct and what is not, whereas ad-hoc problem solvers may commit "simple" mistakes. Ensuring that the execution after multiple compilations is exactly what was specified is highly non-trivial and, afaik, an unsolved problem. 2) No. – Raphael Feb 15 '15 at 9:37

The question is quite subjective, so I think the answer is: it depends. It doesn't matter that much if you work with small amounts of data. In these cases, it is usually fine to use whatever, e.g.,
the standard library of your language offers. However, when you deal with large amounts of data, or for some other reason you insist that your program is fast, then you must understand computational complexity. If you don't, how do you know how a problem should be solved, or how quickly it is even possible to solve it? But understanding just the theory is not enough to be a really good programmer. To produce extremely fast code, I believe, you also have to understand how e.g. your machine works (caches, memory layout, the instruction set), and what your compiler does (compilers do their best, but are not perfect). In short, I think understanding complexity clearly makes you a better programmer.

• I think you generally have the right idea, but "subjective" doesn't describe this issue adequately; "circumstantial" would be a better word. Also, one can however write very slow programs that don't operate on a lot of data. I recently answered a question on math.se about polynomial representation/storage. That usually involves a pretty small amount of data, e.g. ~1000-term polynomials are typical; yet there are huge real-world differences in performance (hundreds or thousands of seconds vs. a few seconds for a multiplication) depending on the implementation. – Fizz Feb 13 '15 at 19:10

It is certainly a problem if someone who is developing significant algorithms does not understand algorithm complexity. Users of an algorithm generally rely on a good quality of implementation that has good performance characteristics. While complexity is not the only contributor to the performance characteristics of an algorithm, it is a significant one. Someone who does not understand algorithm complexity is less likely to develop algorithms with useful performance characteristics. It is less of a problem for users of an algorithm, assuming the algorithms available are of good quality.
This is true for developers who use languages that have a significant, well-specified, standard library - they just need to know how to pick an algorithm that meets their needs. The problem comes in where there are multiple algorithms of some type (say, sorting) available within a library, because complexity is often one of the criteria for picking between them. A developer who does not understand complexity then cannot understand the basis for picking an effective algorithm for their task at hand.

Then there are developers who focus on (for want of a better description) non-algorithmic concerns. For example, they may focus on developing intuitive user interfaces. Such developers will often not need to worry about algorithm complexity although, again, they may rely on libraries or other code being developed to a high quality.

It depends, not on the amount of data you're working with, but on the kind of work you do and the programs you develop. Let's call a programmer who doesn't know about conceptual complexity a noobish programmer. The noobish programmer can:

• develop big data databases - he doesn't have to know how they work inside; all he has to know are the rules about developing databases. He knows things like: what should be indexed,... where it is better to make redundancy in data, where it is not...

• make games - he just has to study how some game engine works and follow its paradigms. Games and computer graphics are quite big data problems. Consider 1920*1080*32bit = cca 7.9MB for a single picture/frame... @60 FPS it's at least 475MB/s. Consider that just one unnecessary copy of a fullscreen picture would waste around 500MB of memory throughput per second. But he doesn't need to care about that, because he only uses the engine!

The noobish programmer shouldn't:

• develop very frequently used complex programs, no matter the size of the data they work with,...
for example, small data won't show the impact of an improper solution during development, because the run will be shorter than the compilation time, etc. So 0.5 sec for one simple program isn't that much from a noobish programmer's perspective. But consider a server that runs this program twenty times per second: it would require 10 cores to be able to sustain that load!

• develop programs for embedded devices. Embedded devices work with small data, but they need to be as efficient as possible, because redundant operations cause unnecessary power consumption.

So, a noobish programmer is fine when you just want to use technologies. When it comes to the development of new solutions, custom technologies, etc., it's better to hire a non-noobish programmer. However, if a company doesn't develop new technologies and just uses already-made ones, it would be a waste of talent to hire a skilled and talented programmer. The same applies to you: if you don't want to work on new technologies and you're fine putting customers' ideas into designs and programs using already-made frameworks, then it's a waste of your time to learn something you won't ever need - except if it's your hobby and you like logical challenges.

• This answer could be improved if it used a more neutral label, or no label at all, much like the other reply that used the term "incompetent programmer." – Moby Disk Feb 13 '15 at 18:04

• I'm not sure what you mean by "conceptual complexity". My experience is that people who don't know enough about trees or hashtables can't make intelligent decisions regarding how to index (parts of) a big database. – Fizz Feb 13 '15 at 20:55

I'm somewhat hesitant to write an answer here but since I found myself nitpicking on several others' [some of my comments got moved to chat], here's how I see it...

There are levels/degrees of knowledge to a lot of things in computing (and by this term I mean roughly the union of computer science with information technology).
Computational complexity surely is a vast field (Do you know what OptP is? Or what the Abiteboul-Vianu theorem says?) and also admits a lot of depth: most people with a CS degree can't produce the expert proofs that go into research publications in computational complexity. The level of knowledge and skill/competence required in such matters depends a lot on what one works on. Completely clueless O($n^2$) sorting is sometimes said to be a major cause of slow programs[citation needed], but a 2003 SIGCSE paper noted "Insertion sort is used to sort small (sub) arrays in standard Java and C++ libraries." On the flip side, premature optimization coming from someone who doesn't understand what asymptotic means (computational complexity being such a measure) is sometimes a problem in programming practice. However, knowing at least when computational complexity matters is why you need to have some clue about it, at least at an undergraduate level.

I would honestly dare compare the situation of knowing when to apply computational complexity concepts (and knowing when you can safely ignore them) with the somewhat common practice (outside of the Java world) of implementing some performance-sensitive code in C and the performance-insensitive stuff in Python etc. (As an aside, a Julia talk called this the "standard compromise".) Knowing when you don't have to think about performance saves you programming time, which is a fairly valuable commodity too.

And one more point: knowing computational complexity won't automatically make you good at optimizing programs; you need to understand more architecture-related stuff like cache locality, [sometimes] pipelining, and nowadays parallel/multi-core programming too; the latter has both its own complexity theory and practical considerations as well. A taste of the latter, from a 2013 SOSP paper: "Every locking scheme has its fifteen minutes of fame.
None of the nine locking schemes we consider consistently outperforms any other one, on all target architectures or workloads. Strictly speaking, to seek optimality, a lock algorithm should thus be selected based on the hardware platform and the expected workload."

• In the long run, developing or finding a better algorithm is usually more beneficial than changing programming language for the performance-sensitive bits. I agree with you that there is a strong association between lack of understanding of complexity and premature optimisation - because they usually target the less performance-sensitive bits for optimisation. – Rob Feb 13 '15 at 21:50

• In practice, (inadvertent) Schlemiel the Painter's algorithms are much more frequent than O(n^2) sorting. – Peter Mortensen Feb 15 '15 at 16:17

If you don't know big-O you should learn it. It's not hard, and it's really useful. Start with searching and sorting.

I do notice that a lot of answers and comments recommend profiling, and they almost always mean use a profiling tool. The trouble is, profiling tools are all over the map in terms of how effective they are for finding what you need to speed up. Here I've listed and explained the misconceptions that profilers suffer from. The result is that programs, if they are larger than an academic exercise, can contain sleeping giants that even the best automatic profiler cannot expose. This post shows a few examples of how performance problems can hide from profilers. But they cannot hide from this technique.

• You claim "Big-Oh" is useful but then you advocate a different approach. Also, I don't see how learning "Big-Oh" (mathematics) can "start with searching and sorting" (algorithmic problems). – Raphael May 20 '15 at 14:30

• @Raphael: I do not advocate a different approach - it's orthogonal. Big-O is basic knowledge for understanding algorithms, whereas finding performance problems in non-toy software is something you do after the code is written and run, not before.
(Sometimes academics don't know this, so they continue teaching gprof, doing more harm than good.) In so doing, you may or may not find that the problem is the use of an O(n*n) algorithm, so you should be able to recognize that. (And big-O is just a mathematically defined property of algorithms, not a different subject.) – Mike Dunlavey May 20 '15 at 15:40

• "And big-O is just a mathematically defined property of algorithms, not a different subject." -- that's wrong, and dangerously so. "Big-Oh" defines classes of functions; per se, it has nothing to do with algorithms at all. – Raphael May 20 '15 at 16:11

• – Raphael May 20 '15 at 18:37
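The "Schlemiel the Painter" pattern mentioned in the comments is easy to demonstrate: building a string with repeated `+=` re-copies the whole prefix on every append, which is accidentally quadratic, while `StringBuilder` stays linear. A small illustrative Java sketch (not from any of the quoted posts; class and method names are invented):

```java
// Contrast between accidentally-quadratic and linear string building.
public class Concat {
    // O(n^2): each += copies the entire string built so far,
    // because Java strings are immutable.
    public static String slow(String[] parts) {
        String out = "";
        for (String p : parts) {
            out += p; // copies out.length() characters every iteration
        }
        return out;
    }

    // O(n): StringBuilder appends in amortized constant time per character.
    public static String fast(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}
```

Both methods produce the same result; profiling them on, say, 100,000 parts is what makes the asymptotic difference visible.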
https://math.stackexchange.com/questions/3267519/what-is-the-explicit-map-of-the-open-embedding-b-to-g-u
# What is the explicit map of the open embedding $B^- \to G/U$?

Let $$G=GL_n$$ and $$B^-$$ the set of lower triangular matrices in $$G$$. It is said that there is an open embedding $$B^- \to G/U$$. What is the explicit map $$B^- \to G/U$$? For example, in the case of $$GL_2$$, every element in $$B^-$$ is of the form $$\left( \begin{matrix} a & 0 \\ c & d \end{matrix} \right)$$. What are the images of elements under $$B^- \to G/U$$? Thank you very much.

• What's $U$ denote? – Randall Jun 19 '19 at 12:48

Assuming $$U$$ is the set of upper-triangular unipotent matrices. Think about $$B^-\to G\to G/U$$.

• Thank you very much. Yes, $U$ is the set of upper triangular matrices. But according to your proof, it seems that no matter what $U$ is, $B^- \to G/U$ is an open embedding, which is not true. Where do you use the condition that $U$ is upper triangular? – LJR Jun 19 '19 at 13:00

• You get an embedding, and for openness you need to use this $U$ to count dimensions (or recall the $LU$-factorisation of matrices). – user10354138 Jun 19 '19 at 13:04

• Thank you very much. Why does the LU-factorisation imply openness? For example, let $g=\left(\begin{array}{cc} a & b\\ c & d \end{array}\right)$. Then $g=b_- u = \left(\begin{array}{cc} a & 0\\ c & d - \frac{b\, c}{a} \end{array}\right) \left(\begin{array}{cc} 1 & \frac{b}{a}\\ 0 & 1 \end{array}\right)$. What is the map $B^- \to G/U$ in this case? – LJR Jun 19 '19 at 13:53

• You think of the map $G/U\to B^-$ instead, which is the "L" part of the LU factorization, and we know $g\mapsto(\ell,u)$ is a diffeomorphism, so projecting (universal property of quotient) gives the induced map $G/U\to B^-$ a diffeomorphism. The inverse map $B^-\to G/U$ maps a lower triangular matrix to its $U$-orbit in $G$. – user10354138 Jun 19 '19 at 14:29
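Assembling the comments into one place, the explicit maps in the $GL_2$ case can be summarized as follows (a sketch of the discussion above; the second formula only makes sense where $a \neq 0$):

```latex
% Explicit maps for GL_2, assembled from the LU factorisation in the comments.
\[
  B^- \longrightarrow G/U, \qquad
  \begin{pmatrix} a & 0 \\ c & d \end{pmatrix} \longmapsto
  \begin{pmatrix} a & 0 \\ c & d \end{pmatrix} U,
\]
\[
  G/U \longrightarrow B^-, \qquad
  \begin{pmatrix} a & b \\ c & d \end{pmatrix} U \longmapsto
  \begin{pmatrix} a & 0 \\ c & d - \frac{bc}{a} \end{pmatrix},
  \qquad a \neq 0.
\]
```

The second map is well defined precisely on the open subset of cosets with $a \neq 0$, which is where the LU factorisation exists; that is why the embedding is open.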
https://math.stackexchange.com/questions/1252026/proving-strong-stability-of-semigroup/1252368#1252368
# Proving strong stability of semigroup

$X$ is the Hilbert space $L^{2}(0,\infty)$ and let $T(t):X\to X$ with $t\ge 0$ be defined by $(T(t)f)(\zeta):=f(t+\zeta)$. I want to prove that the $C_{0}$-semigroup $(T(t))_{t\ge 0}$ is strongly stable, but not exponentially stable. We have that $(T(t))_{t\ge 0}$ is strongly stable if $T(t)x\to 0$ as $t\to \infty$, $\forall x\in X$. By Datko's lemma, if $(T(t))_{t\ge0}$ is exponentially stable, then $\displaystyle \int_{0}^{\infty}\|T(t)x\|^{2}dt<\infty$, $\forall x\in X$. Suppose $(T(t))_{t\ge 0}$ is exponentially stable, then $\displaystyle \int_{0}^{\infty}\|T(t)x\|^{2}dt=\int_{0}^{\infty}$... Am I going about this the right way, or could someone point me in the right direction?

By definition, $$\|T_t f\|^2 = \int_0^{\infty} |(T_t f)(\xi)|^2 \, d\xi = \int_0^{\infty} |f(t+\xi)|^2 \, d\xi = \int_t^{\infty} |f(\eta)|^2 \, d\eta. \tag{1}$$ For $f(x) := \min \left\{\frac{1}{x},1 \right\}$, we have $f \in X$ and, by $(1)$, $$\|T_t f\|^2 = \frac{1}{t} \qquad \text{for all } t \geq 1.$$ Thus, $$\int_0^{\infty} \|T_t f\|^2 \, dt \geq \int_1^{\infty} \frac{dt}{t} = \infty.$$ This shows that $(T_t)_{t \geq 0}$ cannot be exponentially stable.
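The question also asks for strong stability, which the answer above does not spell out; it follows from the same identity $(1)$:

```latex
% Strong stability via the tail of a convergent integral.
\[
  \|T_t f\|^2 \overset{(1)}{=} \int_t^{\infty} |f(\eta)|^2 \, d\eta
  \xrightarrow[t \to \infty]{} 0
  \qquad \text{for every } f \in L^2(0,\infty),
\]
```

since this is the tail of the convergent integral $\int_0^{\infty} |f(\eta)|^2 \, d\eta < \infty$. Hence $T(t)f \to 0$ for every $f \in X$, so the semigroup is strongly stable, while the example above shows it is not exponentially stable.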
https://www.hackmath.net/en/math-problem/154?tag_id=54
# Trio

56 children lined up in groups of three. How many children did not create a trio?

Result: n = 2

#### Solution:

$n = 56 \bmod 3 = 2$

Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephrasings of the example. Thank you!

## Next similar math problems:

1. Year 2018 - The product of the three positive numbers is 2018. What are the numbers?
2. Collection of stamps - Jano, Rado, and Fero have created a collection of stamps in a ratio of 5:6:9. Two of them had 429 stamps together. How many stamps did their shared collection have?
3. Dozen - What is the product of 26 and 5? Write the answer in Arabic numerals. Add up the digits. How many of this is in a dozen? Divide #114 by this
4. Evaluate - order of ops - Evaluate the expression: 32+2[5×(24-6)]-48÷24. Pay attention to the order of operations, including integers
5. Expression plus minus - Evaluate the expression: (-1)2 . 12 – 6 : 3 + (-3) . (-2) + 22 – (-3) . 2
6. Evaluate 5 - Evaluate the expression x2−7x+12x−4 when x=−1
7. In about 12 hours in North Dakota the temperature rose from -33 degrees Fahrenheit to 50 degrees Fahrenheit. By how much did the temperature change?
8. The temperature - The temperature at 1:00 was 10 F. Between 1:00 and 2:00, the temperature dropped 15 F. Between 2:00 and 3:00, the temperature rose 3 F. What is the temperature at 3:00?
9. Simplify - Simplify the expression - which expression is equivalent to: 3(m + 2) − 4(2m − 9)
10. Progression - 12, 60, -300, 1500: find the next 2 numbers of the pattern
11. Evaluate expression 2 - Evaluate the expression with negatives: (-3)+4+(-8)+(-6)+4+(-1)
12. Degrees 2 - The temperature was 3°F and falls four degrees Fahrenheit. What is the actual temperature?
13.
Integer - Find the integer whose distance on the numerical axis from number 1 is two times smaller than the distance from number 6.
14. Expression 6 - Evaluate the expression: -6-2(4-8)-9
15. Two integers - Two integers, a and b, have a product of 36. What is the least possible sum of a and b?
16. The difference - The difference of two numbers is 1375. If their exact quotient is 12, find the two numbers.
17. Equation - Solve the equation with negatives: X/(-5) + 2 = -9
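The Trio problem at the top of the page reduces to a single remainder operation; as an illustrative check (Java used here only for demonstration, names invented):

```java
// 56 children in groups of three: the leftover is the remainder 56 mod 3.
public class Trio {
    public static int leftover(int children, int groupSize) {
        return children % groupSize;
    }
}
```

`Trio.leftover(56, 3)` gives 2, matching the solution n = 56 mod 3 = 2.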
https://infinispan.org/docs/stable/titles/security/security.html
## 1. Infinispan Security

Infinispan provides security for components as well as data across different layers:

• Within the core library to provide role-based access control (RBAC) to CacheManagers, Cache instances, and stored data.
• Over remote protocols to authenticate client requests and encrypt network traffic.
• Across nodes in clusters to authenticate new cluster members and encrypt the cluster transport.

The Infinispan core library uses standard Java security libraries such as JAAS, JSSE, JCA, JCE, and SASL to ease integration and improve compatibility with custom applications and container environments. For this reason, the Infinispan core library provides only interfaces and a set of basic implementations. Infinispan servers support a wide range of security standards and mechanisms to readily integrate with enterprise-level security frameworks.

## 2. Configuring Infinispan Authorization

Authorization restricts the ability to perform operations with Infinispan and access data. You assign users roles that have different permission levels.

### 2.1. Infinispan Authorization

Infinispan lets you configure authorization to secure Cache Managers and cache instances. When user applications or clients attempt to perform an operation on secured Cache Managers and caches, they must provide an identity with a role that has sufficient permissions to perform that operation. For example, you configure authorization on a specific cache instance so that invoking `Cache.get()` requires an identity to be assigned a role with read permission while `Cache.put()` requires a role with write permission. In this scenario, if a user application or client with the `reader` role attempts to write an entry, Infinispan denies the request and throws a security exception. If a user application or client with the `writer` role sends a write request, Infinispan validates authorization and issues a token for subsequent operations.
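The reader/writer scenario above can be modeled with a small role-to-permission table. The class below is a standalone toy illustration of the RBAC idea only; it is not Infinispan's API, and the names `RbacSketch` and `allowed` are invented for this sketch:

```java
import java.util.Map;
import java.util.Set;

// Toy model of role-based permission checks, mirroring the reader/writer
// scenario above: "reader" may only READ, "writer" may only WRITE.
public class RbacSketch {
    public enum Permission { READ, WRITE }

    // Role-to-permission table (hypothetical; Infinispan derives this
    // from its authorization configuration).
    static final Map<String, Set<Permission>> ROLES = Map.of(
        "reader", Set.of(Permission.READ),
        "writer", Set.of(Permission.WRITE));

    // Returns true when the role grants the requested permission;
    // Infinispan instead throws a security exception on denial.
    public static boolean allowed(String role, Permission p) {
        return ROLES.getOrDefault(role, Set.of()).contains(p);
    }
}
```

With this table, a write attempt under the `reader` role is denied while the same attempt under the `writer` role succeeds, which is the behavior the paragraph above describes.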
Identity to Role Mapping

Identities are security Principals of type `java.security.Principal`. Subjects, implemented with the `javax.security.auth.Subject` class, represent a group of security Principals. In other words, a Subject represents a user and all groups to which it belongs. Infinispan uses role mappers so that security principals correspond to roles, which represent one or more permissions. The following image illustrates how security principals map to roles:

#### 2.1.1. Permissions

Permissions control access to Cache Managers and caches by restricting the actions that you can perform. Permissions can also apply to specific entities such as named caches.

Table 1. Cache Manager Permissions

| Permission | Function | Description |
|---|---|---|
| CONFIGURATION | `defineConfiguration` | Defines new cache configurations. |
| LISTEN | `addListener` | Registers listeners against a Cache Manager. |
| LIFECYCLE | `stop` | Stops the Cache Manager. |
| ALL | - | Includes all Cache Manager permissions. |

Table 2. Cache Permissions

| Permission | Function | Description |
|---|---|---|
| READ | `get`, `contains` | Retrieves entries from a cache. |
| WRITE | `put`, `putIfAbsent`, `replace`, `remove`, `evict` | Writes, replaces, removes, evicts data in a cache. |
| EXEC | `distexec`, `streams` | Allows code execution against a cache. |
| LISTEN | `addListener` | Registers listeners against a cache. |
| BULK_READ | `keySet`, `values`, `entrySet`, `query` | Executes bulk retrieve operations. |
| BULK_WRITE | `clear`, `putAll` | Executes bulk write operations. |
| LIFECYCLE | `start`, `stop` | Starts and stops a cache. |
| ADMIN | `getVersion`, `addInterceptor*`, `removeInterceptor`, `getInterceptorChain`, `getEvictionManager`, `getComponentRegistry`, `getDistributionManager`, `getAuthorizationManager`, `evict`, `getRpcManager`, `getCacheConfiguration`, `getCacheManager`, `getInvocationContextContainer`, `setAvailability`, `getDataContainer`, `getStats`, `getXAResource` | Allows access to underlying components and internal structures. |
| ALL | - | Includes all cache permissions. |
| ALL_READ | - | Combines the READ and BULK_READ permissions. |
| ALL_WRITE | - | Combines the WRITE and BULK_WRITE permissions. |
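The principal-to-role mapping described above can be sketched standalone. The helper below applies the common-name rule (extract the CN when the principal name is a Distinguished Name, otherwise fall back to using the principal name itself, as an identity mapper would). The class and method names are invented for illustration; this is not the actual Infinispan implementation:

```java
// Standalone sketch of a DN-to-role mapping rule: the DN
// "cn=managers,ou=people,dc=example,dc=com" maps to the role "managers",
// while a plain principal name maps to itself.
public class CommonNameSketch {
    public static String roleFor(String principalName) {
        for (String rdn : principalName.split(",")) {
            String part = rdn.trim();
            if (part.toLowerCase().startsWith("cn=")) {
                return part.substring(3);
            }
        }
        // Not a DN: identity mapping (principal name as role name).
        return principalName;
    }
}
```

A production implementation would parse the DN properly (for example with `javax.naming.ldap.LdapName`) rather than splitting on commas, which breaks on escaped separators.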
Combining permissions

You might need to combine permissions so that they are useful. For example, to allow "supervisors" to run stream operations but restrict "standard" users to puts and gets only, you can define the following mappings:

```xml
<role name="standard" permissions="READ WRITE" />
<role name="supervisors" permissions="READ WRITE EXEC BULK_READ"/>
```

#### 2.1.2. Role Mappers

Infinispan includes a `PrincipalRoleMapper` API that maps security Principals in a Subject to authorization roles. There are two role mappers available by default:

IdentityRoleMapper

Uses the Principal name as the role name.

• Java class: `org.infinispan.security.mappers.IdentityRoleMapper`
• Declarative configuration: `<identity-role-mapper />`

CommonNameRoleMapper

Uses the Common Name (CN) as the role name if the Principal name is a Distinguished Name (DN). For example, the `cn=managers,ou=people,dc=example,dc=com` DN maps to the `managers` role.

• Java class: `org.infinispan.security.mappers.CommonNameRoleMapper`
• Declarative configuration: `<common-name-role-mapper />`

You can also use custom role mappers that implement the `org.infinispan.security.PrincipalRoleMapper` interface. To configure custom role mappers declaratively, use:

`<custom-role-mapper class="my.custom.RoleMapper" />`

### 2.2. Programmatically Configuring Authorization

When using Infinispan as an embedded library, you can configure authorization with the `GlobalSecurityConfigurationBuilder` and `ConfigurationBuilder` classes.

Procedure

1. Construct a `GlobalConfigurationBuilder` that enables authorization, specifies a role mapper, and defines a set of roles and permissions.
```java
GlobalConfigurationBuilder global = new GlobalConfigurationBuilder();
global
   .security()
      .authorization().enable() (1)
         .principalRoleMapper(new IdentityRoleMapper()) (2)
         .role("admin") (3)
            .permission(AuthorizationPermission.ALL)
         .role("writer")
            .permission(AuthorizationPermission.WRITE)
         .role("supervisor")
            .permission(AuthorizationPermission.WRITE)
            .permission(AuthorizationPermission.EXEC);
```

1 Enables Infinispan authorization for the Cache Manager.
2 Specifies an implementation of `PrincipalRoleMapper` that maps Principals to roles.
3 Defines roles and their associated permissions.

2. Enable authorization in the `ConfigurationBuilder` for caches to restrict access based on user roles.

```java
ConfigurationBuilder config = new ConfigurationBuilder();
config
   .security()
      .authorization()
         .enable(); (1)
```

1 Implicitly adds all roles from the global configuration.

If you do not want to apply all roles to a cache, explicitly define the roles that are authorized for caches as follows:

```java
ConfigurationBuilder config = new ConfigurationBuilder();
config
   .security()
      .authorization()
         .enable()
         .role("admin")
         .role("supervisor"); (1)
```

1 Defines authorized roles for the cache. In this example, users who have the `writer` role only are not authorized for the "secured" cache. Infinispan denies any access requests from those users.
```xml
<infinispan>
   <cache-container default-cache="secured" name="secured">
      <security>
         <authorization> (1)
            <identity-role-mapper /> (2)
            <role name="writer" permissions="WRITE" /> (3)
         </authorization>
      </security>
      <local-cache name="secured">
         <security>
            <authorization/> (4)
         </security>
      </local-cache>
   </cache-container>
</infinispan>
```

1 Enables Infinispan authorization for the Cache Manager.
2 Specifies an implementation of `PrincipalRoleMapper` that maps Principals to roles.
3 Defines roles and their associated permissions.
4 Implicitly adds all roles from the global configuration.

If you do not want to apply all roles to a cache, explicitly define the roles that are authorized for caches as follows:

```xml
<infinispan>
   <cache-container default-cache="secured" name="secured">
      <security>
         <authorization>
            <identity-role-mapper />
            <role name="writer" permissions="WRITE" />
         </authorization>
      </security>
      <local-cache name="secured">
         <security>
            <authorization roles="admin supervisor"/> (1)
         </security>
      </local-cache>
   </cache-container>
</infinispan>
```

1 Defines authorized roles for the cache. In this example, users who have the `writer` role only are not authorized for the "secured" cache. Infinispan denies any access requests from those users.

### 2.4. Code Execution with Secure Caches

When you configure Infinispan authorization and then construct a `DefaultCacheManager`, it returns a `SecureCache` that checks the security context before invoking any operations on the underlying caches. A `SecureCache` also ensures that applications cannot retrieve lower-level insecure objects such as `DataContainer`. For this reason, you must execute code with an identity that has the required authorization.
In Java, executing code with a specific identity usually means wrapping the code to be executed within a `PrivilegedAction` as follows:

```java
import org.infinispan.security.Security;

Security.doAs(subject, new PrivilegedExceptionAction<Void>() {
   public Void run() throws Exception {
      cache.put("key", "value");
      return null;
   }
});
```

With Java 8, you can simplify the preceding call as follows:

```java
Security.doAs(mySubject, (PrivilegedAction<String>) () -> cache.put("key", "value"));
```

The preceding call uses the `Security.doAs()` method instead of `Subject.doAs()`. You can use either method with Infinispan, however `Security.doAs()` provides better performance.

If you need the current Subject, use the following call to retrieve it from the Infinispan context or from the AccessControlContext:

```java
Security.getSubject();
```

## 3. Encrypting Cluster Transport

Secure cluster transport so that nodes communicate with encrypted messages. You can also configure Infinispan clusters to perform certificate authentication so that only nodes with valid identities can join.

### 3.1. Infinispan Cluster Security

To secure cluster traffic, you configure Infinispan nodes to encrypt JGroups message payloads with secret keys. Infinispan nodes can obtain secret keys from either:

• The coordinator node (asymmetric encryption).
• A shared keystore (symmetric encryption).

Retrieving secret keys from coordinator nodes

You configure asymmetric encryption by adding the `ASYM_ENCRYPT` protocol to a JGroups stack in your Infinispan configuration. This allows Infinispan clusters to generate and distribute secret keys. When using asymmetric encryption, you should also provide keystores so that nodes can perform certificate authentication and securely exchange secret keys. This protects your cluster from man-in-the-middle (MitM) attacks. Asymmetric encryption secures cluster traffic as follows:

1. The first node in the Infinispan cluster, the coordinator node, generates a secret key.
2.
A joining node performs certificate authentication with the coordinator to mutually verify identity.
3. The joining node requests the secret key from the coordinator node. That request includes the public key for the joining node.
4. The coordinator node encrypts the secret key with the public key and returns it to the joining node.
5. The joining node decrypts and installs the secret key.
6. The node joins the cluster, encrypting and decrypting messages with the secret key.

Retrieving secret keys from shared keystores

You configure symmetric encryption by adding the `SYM_ENCRYPT` protocol to a JGroups stack in your Infinispan configuration. This allows Infinispan clusters to obtain secret keys from keystores that you provide.

1. Nodes install the secret key from a keystore on the Infinispan classpath at startup.
2. Nodes join clusters, encrypting and decrypting messages with the secret key.

Comparison of asymmetric and symmetric encryption

`ASYM_ENCRYPT` with certificate authentication provides an additional layer of encryption in comparison with `SYM_ENCRYPT`. You provide keystores that encrypt the requests to coordinator nodes for the secret key. Infinispan automatically generates that secret key and handles cluster traffic, while letting you specify when to generate secret keys. For example, you can configure clusters to generate new secret keys when nodes leave. This ensures that nodes cannot bypass certificate authentication and join with old keys.

`SYM_ENCRYPT`, on the other hand, is faster than `ASYM_ENCRYPT` because nodes do not need to exchange keys with the cluster coordinator. A potential drawback to `SYM_ENCRYPT` is that there is no configuration to automatically generate new secret keys when cluster membership changes. Users are responsible for generating and distributing the secret keys that nodes use to encrypt cluster traffic.

### 3.2. Configuring Cluster Transport with Asymmetric Encryption

Configure Infinispan clusters to generate and distribute secret keys that encrypt JGroups messages.

Procedure

1. Create a keystore with certificate chains that enables Infinispan to verify node identity.
2. Place the keystore on the classpath for each node in the cluster. For Infinispan Server, you put the keystore in the $ISPN_HOME directory.
3. Add the `SSL_KEY_EXCHANGE` and `ASYM_ENCRYPT` protocols to a JGroups stack in your Infinispan configuration, as in the following example:

```xml
<infinispan>
   <jgroups>
      <stack name="encrypt-tcp" extends="tcp"> (1)
         <SSL_KEY_EXCHANGE keystore_name="mykeystore.jks" (2)
                           keystore_password="changeit" (3)
                           stack.combine="INSERT_AFTER"
                           stack.position="VERIFY_SUSPECT"/> (4)
         <ASYM_ENCRYPT asym_keylength="2048" (5)
                       asym_algorithm="RSA" (6)
                       change_key_on_coord_leave = "false" (7)
                       change_key_on_leave = "false" (8)
                       use_external_key_exchange = "true" (9)
                       stack.combine="INSERT_BEFORE"
                       stack.position="pbcast.NAKACK2"/> (10)
      </stack>
   </jgroups>
   <cache-container name="default" statistics="true">
      <transport cluster="${infinispan.cluster.name}" stack="encrypt-tcp" (11)
                 node-name="${infinispan.node.name:}"/>
   </cache-container>
</infinispan>
```

1 Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack for Infinispan.
2 Names the keystore that nodes use to perform certificate authentication.
3 Specifies the keystore password.
4 Uses the `stack.combine` and `stack.position` attributes to insert `SSL_KEY_EXCHANGE` into the default TCP stack after the `VERIFY_SUSPECT` protocol.
5 Specifies the length of the secret key that the coordinator node generates. The default value is `2048`.
6 Specifies the cipher engine the coordinator node uses to generate secret keys. The default value is `RSA`.
7 Configures Infinispan to generate and distribute a new secret key when the coordinator node changes.
8 Configures Infinispan to generate and distribute a new secret key when nodes leave. 9 Configures Infinispan nodes to use the `SSL_KEY_EXCHANGE` protocol for certificate authentication. 10 Uses the `stack.combine` and `stack.position` attributes to insert `ASYM_ENCRYPT` into the default TCP stack before the `pbcast.NAKACK2` protocol. 11 Configures the Infinispan cluster to use the secure JGroups stack. Verification When you start your Infinispan cluster, the following log message indicates that the cluster is using the secure JGroups stack: ``[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>`` Infinispan nodes can join the cluster only if they use `ASYM_ENCRYPT` and can obtain the secret key from the coordinator node. Otherwise the following message is written to Infinispan logs: `[org.jgroups.protocols.ASYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it` Reference The example `ASYM_ENCRYPT` configuration in this procedure shows commonly used parameters. Refer to JGroups documentation for the full set of available parameters. ### 3.3. Configuring Cluster Transport with Symmetric Encryption Configure Infinispan clusters to encrypt JGroups messages with secret keys from keystores that you provide. Procedure 1. Create a keystore that contains a secret key. 2. Place the keystore on the classpath for each node in the cluster. For Infinispan Server, you put the keystore in the$ISPN_HOME directory. 3. 
Add the `SYM_ENCRYPT` protocol to a JGroups stack in your Infinispan configuration, as in the following example:

```xml
<infinispan>
   <jgroups>
      <stack name="encrypt-tcp" extends="tcp"> (1)
         <SYM_ENCRYPT keystore_name="myKeystore.p12" (2)
                      keystore_type="PKCS12" (3)
                      store_password="changeit" (4)
                      key_password="changeit" (5)
                      alias="myKey" (6)
                      stack.combine="INSERT_AFTER"
                      stack.position="VERIFY_SUSPECT"/> (7)
      </stack>
   </jgroups>
   <cache-container name="default" statistics="true">
      <transport cluster="${infinispan.cluster.name}"
                 stack="encrypt-tcp" (8)
                 node-name="${infinispan.node.name:}"/>
   </cache-container>
</infinispan>
```

1 Creates a secure JGroups stack named "encrypt-tcp" that extends the default TCP stack for Infinispan.
2 Names the keystore from which nodes obtain secret keys.
3 Specifies the keystore type. JGroups uses JCEKS by default.
4 Specifies the keystore password.
5 Specifies the secret key password.
6 Specifies the secret key alias.
7 Uses the `stack.combine` and `stack.position` attributes to insert `SYM_ENCRYPT` into the default TCP stack after the `VERIFY_SUSPECT` protocol.
8 Configures the Infinispan cluster to use the secure JGroups stack.

Verification

When you start your Infinispan cluster, the following log message indicates that the cluster is using the secure JGroups stack:

```
[org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel cluster with stack <encrypted_stack_name>
```

Infinispan nodes can join the cluster only if they use `SYM_ENCRYPT` and can obtain the secret key from the shared keystore. Otherwise the following message is written to Infinispan logs:

```
[org.jgroups.protocols.SYM_ENCRYPT] <hostname>: received message without encrypt header from <hostname>; dropping it
```

Reference

The example `SYM_ENCRYPT` configuration in this procedure shows commonly used parameters. Refer to the JGroups documentation for the full set of available parameters.

## 4. Infinispan Ports and Protocols

Because Infinispan distributes data across your network and can establish connections for external client requests, you should be aware of the ports and protocols that Infinispan uses to handle network traffic.

If you run Infinispan as a remote server, you might need to allow remote clients through your firewall. Likewise, you should adjust the ports that Infinispan nodes use for cluster communication to prevent conflicts or network issues.

### 4.1. Infinispan Server Ports and Protocols

Infinispan Server exposes endpoints on your network for remote client access.

| Port    | Protocol | Description                                      |
|---------|----------|--------------------------------------------------|
| `11222` | TCP      | Hot Rod and REST endpoint                        |
| `11221` | TCP      | Memcached endpoint, which is disabled by default |

#### 4.1.1. Configuring Network Firewalls for Remote Connections

Adjust any firewall rules to allow traffic between the server and external clients.

Procedure

On Red Hat Enterprise Linux (RHEL) workstations, for example, you can allow traffic to port `11222` with firewalld as follows:

```
# firewall-cmd --add-port=11222/tcp --permanent
success
# firewall-cmd --list-ports | grep 11222
11222/tcp
```

To configure firewall rules that apply across a network, you can use the nftables utility.

### 4.2. TCP and UDP Ports for Cluster Traffic

Infinispan uses the following ports by default:

| Default Port | Protocol | Description                              |
|--------------|----------|------------------------------------------|
| `7800`       | TCP/UDP  | JGroups cluster bind port                |
| `46655`      | UDP      | JGroups multicast                        |
| `7200`       | TCP      | JGroups RELAY2 for cross-site replication |
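After adjusting firewall rules, you can sanity-check connectivity by probing the endpoint port from a client machine. The following is a minimal sketch, not part of the Infinispan API; the host name `infinispan.example.com` is a placeholder, and `11222` is the default Hot Rod and REST port from the table above:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {

    // Returns true if a TCP connection to host:port succeeds within timeoutMillis.
    static boolean isReachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            // Covers connection refused, timeouts, and unresolvable hosts.
            return false;
        }
    }

    public static void main(String[] args) {
        // "infinispan.example.com" is a placeholder; substitute your server's address.
        System.out.println(isReachable("infinispan.example.com", 11222, 2000));
    }
}
```

Note that a successful TCP connection only confirms that the firewall permits traffic to the port; it does not verify that the endpoint accepts your client's protocol or credentials.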