url — stringlengths 14 to 2.42k
text — stringlengths 100 to 1.02M
date — stringlengths 19 to 19
metadata — stringlengths 1.06k to 1.1k
https://questions.examside.com/past-years/gate/question/dirty-bit-for-a-page-in-a-page-table-gate-cse-1997-marks-1-fulo1ecsb5bddtj2.htm
1 ### GATE CSE 1997
Dirty bit for a page in a page table
A Helps avoid unnecessary writes on a paging device
B Helps maintain $$LRU$$ information
C Allows only read on a page
D None of the above.
2 ### GATE CSE 1997
Thrashing
A Reduces page $$I/O$$
B Decreases the degree of multiprogramming
C Implies excessive page $$I/O$$
D Improves the system performance
3 ### GATE CSE 1997
Locality of reference implies that the page reference being made by a process
A Will always be to the page used in the previous page reference
B Is likely to be to one of the pages used in the last few page references
C Will always be to one of the pages existing in memory
D Will always lead to a page fault
4 ### GATE CSE 1996
A ROM is used to store the table for multiplication of two $$8$$-bit unsigned integers. The size of ROM required is
A $$256 \times 16$$
B $$64\,K \times 8$$
C $$4\,K \times 16$$
D $$64\,K \times 16$$
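A quick sizing check for question 4 (an added note, not part of the original page): the ROM needs one entry for every pair of operands, giving $$2^8 \times 2^8 = 2^{16} = 64\,K$$ addresses, and the product of two $$8$$-bit unsigned integers can be up to $$16$$ bits wide, so the required ROM is $$64\,K \times 16$$.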
2022-06-26 11:43:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7543201446533203, "perplexity": 9307.249776962686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00638.warc.gz"}
https://en.wikipedia.org/wiki/User_talk:Freenaulij
# User talk:Freenaulij ## Welcome Welcome to WikiProject Chess and thanks for your work on Man vs Machine World Team Championship and related articles. Your work to make sure that these tournaments are mentioned in the participants articles is a very good idea. I and others have been trying to add these sorts of back-links to articles in Category:Chess national championships and others where appropriate. Quale 05:56, 7 November 2007 (UTC) ## Welcome to the Orphanage Welcome! Thanks for joining The Orphanage. As we're a new project, we need all the help we can get! De-orphaning articles can be tricky, so please drop me a line if you would like any help =) Lex Kitten 22:35, 8 November 2007 (UTC) ## Image source problem with Image:PanoramaFall07.jpg Thanks for uploading Image:PanoramaFall07.jpg. I noticed that the file's description page currently doesn't specify who created the content, so the copyright status is unclear. If you did not create this file yourself, you will need to specify the owner of the copyright. If you obtained it from a website, then a link to the website from which it was taken, together with a restatement of that website's terms of use of its content, is usually sufficient information. However, if the copyright holder is different from the website's publisher, their copyright should also be acknowledged. As well as adding the source, please add a proper copyright licensing tag if the file doesn't have one already. If you created/took the picture, audio, or video then the {{GFDL-self}} tag can be used to release it under the GFDL. If you believe the media meets the criteria at Wikipedia:Non-free content, use a tag such as {{non-free fair use in|article name}} or one of the other tags listed at Wikipedia:Image copyright tags#Fair use. See Wikipedia:Image copyright tags for the full list of copyright tags that you can use. If you have uploaded other files, consider checking that you have specified their source and tagged them, too. You can find a list of files you have uploaded by following this link. Unsourced and untagged images may be deleted one week after they have been tagged, as described on criteria for speedy deletion. If the image is copyrighted under a non-free license (per Wikipedia:Fair use) then the image will be deleted 48 hours after 04:40, 11 November 2007 (UTC). If you have any questions please ask them at the Media copyright questions page. Thank you. Kkmurray 04:40, 11 November 2007 (UTC) ## Welcome to WikiProject Catholicism! Hello, Freenaulij, and welcome to Wikiproject Catholicism! Thank you for your generous offer to help contribute. I'm sure your input will be much appreciated. I hope you enjoy contributing here and being a Catholic Project Wikipedian! If you have any questions, feel free to discuss anything on the project talk page, or to leave a message on my own talk page. Please remember to sign all your comments, and be bold with your edits. Again, welcome, and happy editing! --Thw1309 08:59, 14 November 2007 (UTC) ## Division by zero Hi, I've commented at Talk:Division by zero#Division by Zero. -- Meni Rosenfeld (talk) 08:59, 4 December 2007 (UTC) I have a few comments as well. When mathematicians say "proof", they mean that a certain well-defined proposition is justified by an argument that shows that the way the concepts involved have been defined makes it necessary that the proposition is true. 
Therefore, a mathematician would not regard your calculations as "proofs": you want to demonstrate that 1/0=infinity, but you have not defined infinity, nor what division means. What you do is make a calculation using the ordinary rules of algebra. But how do you know that these rules are valid? They have been checked under the condition that the variables range over real numbers, but if you divide by zero you are certainly not dealing with real numbers and division in the ordinary sense. So you are left with the mission: you must define what division means, what infinity means, et cetera, and then check that the rules are still valid with these new definitions. Finally, you can try to prove 1/0=infinity, using the checked rules. I think you describe nicely why it is annoying that people continue to state that it is impossible to divide by zero: we have heard that kind of statement before. On the other hand, it is fairly annoying also that other people continue to state that it is indeed possible. We have to agree on the sense in which it is possible or impossible. As Meni Rosenfeld says, it is clear that it is impossible to introduce division by zero and still maintain all algebraic properties of the real or complex number fields. On the other hand, we have had to give up rules before as well. When adding 0 to the positive integers, we had to give up the rule $ab=ac\Rightarrow b=c$; when we added negative numbers, we had to give up the rule $a+b=c\Rightarrow a\leq c$; when we added imaginary numbers, we had to give up the rule $a^{2}\geq 0$; et cetera. So isn't it possible to give up some rule, learn to live with that, and begin dividing by zero (using a different notion of division of course, just as we had to change the notion of subtraction when passing to negative numbers)? Well, I did that once, in my first mathematical paper. It worked, but the suggestion is that we give up a lot, so it might very well be too high a price to pay. You can have a quick look at Wheel theory, where some of this is described. One example of a wheel is similar to your experiments on lines through the origin. If you pick any point (x,y) in the plane it defines a line through the origin, unless (x,y)=(0,0), when it defines only a point. If we identify points that define the same lines, and write (x:y) for such classes of points, so that e.g. (1:2)=(2:4)=(1.5:3), we can define addition by (a:b)+(c:d)=(ad+bc:bd), multiplication by (a:b)(c:d)=(ac:bd), negation by -(a:b)=(-a:b), and inversion by /(a:b)=(b:a). We can also identify every real number r with (r:1). In particular, 0=(0:1) and 1=(1:1). We also define infinity to be (1:0). In particular, 1/0 means (1:1)/(0:1)=(1:0), which is infinity. (Division here is multiplication by the inverse, so for instance 0/0=(0:1)(1:0)=(0:0), a new element that is neither a real number nor infinity.) Now this structure is a wheel, which means that the algebraic rules of wheel theory are valid. Jesper Carlstrom 09:34, 4 December 2007 (UTC)
## Notability of The Woodlands College Park Band
Hello, this is a message from an automated bot. A tag has been placed on The Woodlands College Park Band, by another Wikipedia user, requesting that it be speedily deleted from Wikipedia. The tag claims that it should be speedily deleted because The Woodlands College Park Band seems to be about a person, group of people, band, club, company, or web content, but it does not indicate how or why the subject is notable: that is, why an article about that subject should be included in an encyclopedia.
Under the criteria for speedy deletion, articles that do not assert the subject's importance or significance may be deleted at any time. Please see the guidelines for what is generally accepted as notable. To contest the tagging and request that administrators wait before possibly deleting The Woodlands College Park Band, please affix the template {{hangon}} to the page, and put a note on its talk page. If the article has already been deleted, see the advice and instructions at WP:WMD. Feel free to contact the bot operator if you have any questions about this or any problems with this bot, bearing in mind that this bot is only informing you of the nomination for speedy deletion; it does not perform any nominations or deletions itself. CSDWarnBot (talk) 20:32, 10 December 2007 (UTC)
## Unified orphan/de-orphan process
You might be interested in this discussion. --Aervanath's signature is boring 22:24, 24 May 2008 (UTC)
## Hope you come back
Hey, we noticed you haven't been around en.wiki for a while, so you've been moved to the WikiProject Orphanage Inactive list. We hope you'll come back, move your name back to the active list, and get back to de-orphaning real soon! Aervanath (talk) 18:26, 31 January 2009 (UTC)
## Ichthus: January 2012
2017-01-18 12:01:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5362201929092407, "perplexity": 1052.8102298599954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280280.6/warc/CC-MAIN-20170116095120-00077-ip-10-171-10-70.ec2.internal.warc.gz"}
http://mathhelpforum.com/statistics/222690-need-help-question-very-tricky.html
# Thread: Need help with this question, very tricky
3. ## Re: Need help with this question, very tricky
So, for the 73rd percentile, it's 0.6128, and yeah, I'm using the z chart.
5. ## Re: Need help with this question, very tricky
That's the chart I'm using.
6. ## Re: Need help with this question, very tricky
Then I gave you your answer. Just solve for x in the two equations I posted.
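Since the earlier posts in this thread were lost, here is the generic method being referred to, as a hedged reconstruction (the original problem statement is missing): for a normal random variable $$X$$ with mean $$\mu$$ and standard deviation $$\sigma$$, the $$p$$-th percentile $$x_p$$ satisfies $$\frac{x_p-\mu}{\sigma}=z_p$$, where $$z_p$$ is read off the z chart (e.g. $$z_{0.73}\approx 0.6128$$, matching the value quoted above). Given two known percentiles, the two linear equations $$\mu+z_{p_1}\sigma=x_{p_1}$$ and $$\mu+z_{p_2}\sigma=x_{p_2}$$ can be solved for the unknowns.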
2018-04-19 14:22:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7653842568397522, "perplexity": 2009.8598077402294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936969.10/warc/CC-MAIN-20180419130550-20180419150550-00051.warc.gz"}
https://math.stackexchange.com/questions/3245733/product-of-transcendental-numbers-is-not-transcendental-or-is-it
# Product of transcendental numbers is not transcendental, or is it?
The transcendental numbers form a field, or so I thought. I'm familiar with the fact that the algebraic numbers form a field, which implies that reciprocals of transcendental numbers must again be transcendental (if the reciprocal were not transcendental, then the reciprocal of the reciprocal, the transcendental element itself, would be algebraic...). But I was wondering about sums and products of transcendental numbers, which are covered in numerous threads here on MSE. However, I came across an awful contradiction after combining certain proofs from here. Let's be clear. Let $$L/K$$ be a field extension with $$\alpha,\beta\in L$$. Then obviously, it is true that $$\alpha$$ and $$\beta$$ are algebraic iff $$\alpha+\beta$$ and $$\alpha\beta$$ are algebraic; a simple proof of this is given using the polynomial $$f=x^2-(\alpha+\beta)x+\alpha\beta=(x-\alpha)(x-\beta)$$ in combination with the tower rule. I want to prove and disprove that $$\alpha\beta$$ is transcendental when $$\alpha$$ and $$\beta$$ are both transcendental. Let's assume that $$\alpha$$ and $$\beta$$ are transcendental. First for the proof: if $$\alpha\beta$$ is not transcendental, then it must be algebraic and hence $$\alpha$$ and $$\beta$$ must be algebraic, but they were assumed to be transcendental. Hence, a contradiction, and $$\alpha\beta$$ must be transcendental. The "result" above is easily disproven: we know by the reasoning from earlier that $$\frac{1}{\alpha}$$ must also be transcendental; we take this reciprocal as our transcendental $$\beta$$. Now $$\alpha\beta=1$$, which is algebraic. Where did I go wrong? Thanks for the time. In addition: if we take the case $$\beta\neq\frac{1}{\gamma\alpha}$$ where $$\gamma$$ is algebraic, is it then the case that $$\alpha\beta$$ is always transcendental given $$\alpha$$ and $$\beta$$ transcendental? EDIT: Thanks to the people from the comment section below, I now know what went wrong in my (incorrect) argument. The answer here tells the story quite well, and the part that is wrong in my text is that I also assumed that $$\alpha$$ and $$\beta$$ are algebraic iff $$\alpha\beta$$ is algebraic, which is false. I'm going to leave this open so that anyone having the same issue in the future will find more (summarised) info here. • Your error is here: “if $\alpha\beta$ is not transcendental, then it must be algebraic and hence $\alpha$ and $\beta$ must be algebraic” – Martin R May 30 '19 at 20:03 • A reciprocal of a transcendental is transcendental, yes? So $\pi\cdot\frac 1\pi = 1$ is an algebraic product of two transcendental numbers. Similarly for addition: $e + (4-e) = 4.$ – CiaPan May 30 '19 at 20:06 • Are $0$ and $1$ transcendental? They are certainly elements of any field. – Mark Bennet May 30 '19 at 20:09 • @Algebear: That answer says that if both $\alpha+\beta$ and $\alpha \beta$ are algebraic then $\alpha$ and $\beta$ are algebraic. – Martin R May 30 '19 at 20:14 • Likewise, rational numbers form a field, but irrational numbers do not (the sum or product of irrational numbers can be rational) – J. W. Tanner May 30 '19 at 20:57 The main question was already answered in several comments: as the example $$T\cdot\frac 1T=1$$ shows, a product of two transcendental numbers needn't be transcendental.
To answer an additional question by the OP from the comments, let's consider: $$\begin{cases}\alpha\beta=K \\ \alpha+\beta=L \end{cases}$$ Plugging $$\beta=L-\alpha$$ from the second equation into the first one yields: $$\alpha^2-L\alpha+K=0.$$ This results in $$\alpha=\frac{L \pm \sqrt{L^2 - 4K}}2$$ which is algebraic for algebraic $$K,L.$$ So, yes: if both the sum and the product of two real numbers $$\alpha,\beta$$ are algebraic, then both $$\alpha$$ and $$\beta$$ are also algebraic. • Also a nice alternative! Can you also tell me more about my additional question? I wonder, if we exclude this simple counterexample case, whether $\alpha\beta$ might be transcendental when $\alpha$ and $\beta\neq\frac{1}{\gamma\alpha}$ are transcendental, where $\gamma$ is algebraic. Shall I open a different question for this? – Algebear May 31 '19 at 12:21
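A concrete illustration of the result above (an added note, not part of the original thread): take $$\alpha = e$$ and $$\beta = 4 - e$$, both transcendental. Their sum $$\alpha+\beta=4$$ is algebraic, so if the product $$\alpha\beta = 4e - e^2$$ were also algebraic, the quadratic argument above would force $$e$$ to be algebraic, a contradiction. Hence $$4e-e^2$$ is transcendental even though the sum $$4$$ is not.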
2021-06-23 17:27:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 36, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8660465478897095, "perplexity": 263.7519816204485}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539764.83/warc/CC-MAIN-20210623165014-20210623195014-00577.warc.gz"}
http://realitycommons.media.mit.edu/CallCenterDynamics.html
# Dynamics and Performance of an IT Call Center
Contact Wen Dong or Todd Reid for technical support
# Introduction
The data contain the performance, behavior, and interpersonal interactions of participating employees at a Chicago-area data server configuration firm for one month. It is the first data set that contains the performance and dynamics of a real-world organization with a temporal resolution of a few seconds. Performance data include the assigning time, closing time, difficulty level, assigned-to, closed-by, and number of follow-ups of each task completed during that one-month period. Behavior data include the locations of the employees estimated from Zigbee RSSI recorded by the badges worn by each employee, representing to whom and to which key locations (printer, warehouse, and so on) he went. Behavior data also include the recordings of a 3-axis accelerometer on the badge, from which we estimate the postures and activities of its wearer. Interaction data include IR scanning by each badge of the badges worn by other employees, indicating that the latter are within a 1-meter distance and a 30-degree cone in front of the badge, most likely indicating face-to-face communication. The badges also record audio intensity from an on-badge microphone, from which we estimate verbal behavior and verbal interactions. All sensor data are time-stamped.
There were 28 employees at the firm, of which 23 participated in the study. Nineteen hundred hours of data were collected, with a median of 80 hours per employee. The resulting data document the performance of computer system configuration tasks assigned to employees on a first-come, first-served basis. These configurations were rated at one of three levels of difficulty (basic, complex, or advanced) based on the configuration characteristics. At the conclusion of the task, the employee submitted the completed configuration as well as the price back to the salesman, after which the employee moved to the back of the queue for task assignment.
The layout of the workspace is shown in the following figure. The base stations on yellow squares were placed at fixed positions throughout the workspace in order to locate the badges and time-stamp the data collected by them. Participating employees are indicated at their cubicles by their badge IDs; different colors behind the IDs represent different departmental branches at the firm. Non-participating employees have the letter “N” at their cubicles. Employees fetched their badges from the room containing base station 1 (located at the lower left corner) at approximately 9am each weekday morning, and returned the badges to this room at around 6pm in the evening. The RSSI regions were manually assigned to identify different regions in the workspace, and do not correspond to any particular sensors deployed in this experiment.
The employees indicated that their configuration tasks were information-intensive, and therefore required them to talk to one another to fully understand the various specifications. As such, we would expect a positive correlation between the rate of problem-solving by an employee and the number of places visited by that employee. Further, from who visited whose work cubicle we can determine interpersonal information flow and expertise in problem-solving.
suppressMessages(require(grImport))
PostScriptTrace("chicagomap1page1.eps")  # imports the floor-plan figure shown above
# Data Description
The data set contains the following tables, and each table contains the following fields:
• Transactions.csv: tasks assigned to whom (assigned.to) and when (assign.date), closed by whom (closed.by) and when (close.date), complexity (basic, advanced and complex), how many follow-ups (n.follow.ups) and what errors employees made until task-closing, and what roles these tasks required (pricing, configuration).
• BadgeAssignment.csv: how badges (identified by unique BIDs) were assigned either to track the behavior of the employees or to serve as anchor nodes, the locations (x, y) of employees' cubicles and anchor nodes, and the roles of employees (pricing, configuration, and coordination). The anchor nodes served to stamp employees' behavioral data with times and indoor locations (c.f. floor plan).
• Zigbee.csv: proximity ($$\le$$ 10 meters) of employees to one another and to anchor nodes. Zigbee messages were sent by a badge that was either an anchor node at a fixed location or worn by an employee to track him (sender.id) at a rate of 1 message per sender per 10 seconds. Messages were received by a badge worn by another employee (local.id) inside the range of the Zigbee signal from the sender at a specific time (date.time), with a received signal strength indicator (RSSI) indicating how far the sender badge is from the receiver badge.
• IR.csv: observations when an employee is face-to-face with another employee or an anchor node and is less than 3 meters from the latter. IR messages were sent by a badge that was either an anchor node at a fixed location or worn by an employee to track him (sender.id) at a rate of 1 message per sender per 10 seconds. Messages were received by a badge worn by another employee (local.id), inside the range of the IR signal from the sender, oriented towards the sender badge, at a specific time (date.time).
• LocationTrackingEvery1Minute.csv: the 10 locations with the longest stays (x,y) per employee (identified by the id of the badge assigned to him) per minute (time), estimated from Zigbee RSSI to anchor nodes. The indoor locationing algorithm is described in [8].
badge.assignment = read.csv("BadgeAssignment.csv")
trans = read.csv("Transactions.csv")  # assumed: this read was dropped in extraction
trans$assign.date = as.POSIXct(trans$assign.date, tz = "America/Chicago")
trans$close.date = as.POSIXct(trans$close.date, tz = "America/Chicago")
zz = bzfile("LocationTrackingEvery1Minute.csv.bz2", open = "rt")
hdc.xy = read.csv(zz)  # assumed, as above
close(zz)
hdc.xy$time = as.POSIXct(hdc.xy$time, tz = "America/Chicago")
zz = bzfile("IR.csv.bz2", open = "rt")
IR.aggr = read.csv(zz)  # assumed, as above
close(zz)
IR.aggr$date.time = as.POSIXct(IR.aggr$date.time, tz = "America/Chicago")
zz = bzfile("Zigbee.csv.bz2", open = "rt")
net.aggr = read.csv(zz)  # assumed, as above
close(zz)
net.aggr$date.time = as.POSIXct(net.aggr$date.time, tz = "America/Chicago")
The following figure shows how often two employees were located within the distance of one cubicle (that is, co-located); rows and columns are indexed by employees. The brightness of the table cell indexed by row $$i$$ and column $$j$$ represents the amount of time employee $$i$$ and employee $$j$$ were co-located; the whiter the color, the more total time they were co-located. The dendrograms to the left and top of the heat map represent how employees were grouped according to their co-location relationship. A leaf of the dendrogram corresponds to the same employee that indexes a row and a column of the heat map, while the colors on the leaves of the dendrogram represent different branches in the firm – red is the configuration branch, green the coordination branch, and purple the pricing branch.
The numbers at the right and bottom sides of the heat map show the IDs of the employee tracking badges. We constructed the dendrogram by expressing the amounts of time that an employee was co-located with other employees as an observation vector of real numbers regarding this employee, defining the distance between two employees $$i$$ and $$j$$ to be $$\sqrt{1-\rho_{ij}}$$, where $$\rho_{ij}$$ is the correlation coefficient between $$i$$'s times of co-location with other employees and $$j$$'s times of co-location with other employees. We use Ward's minimum variance method in hierarchical clustering to find compact, spherical clusters in constructing the dendrogram.
badge.prox = with(net.aggr[net.aggr$sender.id %in% unique(net.aggr$local.id), ], table(sender.id, local.id))
badge.prox = sweep(badge.prox, 2, tapply(trunc(as.numeric(net.aggr$date.time)/3600), net.aggr$local.id, function(x) length(unique(x))), "/")
## The heatmap call itself was lost in extraction; reconstructed here by
## analogy with the IR heatmap below (the dendrogram details are an assumption).
badge.prox.hclust = hclust(as.dist(sqrt(1 - cor(asinh(badge.prox)))), method = "ward")
heatmap(asinh(badge.prox * 10), Rowv = as.dendrogram(badge.prox.hclust), Colv = as.dendrogram(badge.prox.hclust),
    scale = "none", RowSideColors = c("yellow", "red", "green", "purple", "gray")[badge.assignment$role[match(rownames(badge.prox), badge.assignment$BID)]],
    ColSideColors = c("yellow", "red", "green", "purple", "gray")[badge.assignment$role[match(rownames(badge.prox), badge.assignment$BID)]])
According to the theory of structural holes [2], people talk more often to those with the same expertise/roles, and the less frequent interactions among people with different expertise/roles can be more important when they happen. This is confirmed by how often people engaged in face-to-face communications in the call center, as indicated by the IR messages logged by the employees' badges (c.f. figure below), and by how visiting other employees' cubicles could contribute to higher productivity per unit time, to be discussed later. Employees are more likely to have face-to-face discussions when their cubicles are closer, and this indicates a way of engineering the communication structures within the call center by adjusting the cubicles.
IR.aggr2 = IR.aggr[IR.aggr$sender.id %in% unique(hdc.xy$id) & IR.aggr$local.id %in% unique(hdc.xy$id), ]
ir.prox = table(unique(IR.aggr2)[, c("sender.id", "local.id")])
ir.prox = ir.prox[rownames(ir.prox) %in% colnames(ir.prox), colnames(ir.prox) %in% rownames(ir.prox)]
ir.prox.hclust = hclust(as.dist(sqrt(1 - cor(asinh(ir.prox)))), method = "ward")
heatmap(asinh(ir.prox * 10), Rowv = as.dendrogram(ir.prox.hclust), Colv = as.dendrogram(ir.prox.hclust),
    scale = "none", RowSideColors = c("yellow", "red", "green", "purple", "gray")[badge.assignment$role[match(rownames(ir.prox), badge.assignment$BID)]],
    ColSideColors = c("yellow", "red", "green", "purple", "gray")[badge.assignment$role[match(rownames(ir.prox), badge.assignment$BID)]])
The following figure shows the positive correlation between the number of tasks assigned and where an employee went while working on a task. The employee with the highest number of assignments (badge ID 293) received 132 tasks during one month. His entropy of going to different places to finish these assignments was 5.75, and he typically went to exp(5.75)=315 grid points in the workspace (out of 502 in total), or 19 cubicles of the 28 non-empty cubicles. The employee with the least number of assignments received only one task. His entropy was 4.19, and he typically went to exp(4.19)=66 grid points, or 6 cubicles. The following figure also shows that employees in the pricing branch and in the configuration branch received and finished assignments very differently.
In terms of overall tasks assigned, a pricing employee received on average nine times as many basic assignments, and three times as many complex assignments, as a configuration employee. Pricing employees also finished these assignments in parallel, and went to many people to solve them. Configuration employees, on the other hand, solved advanced assignments exclusively, worked serially, and went to fewer people to solve their assignments. Interpreting the log-linear relationship between rate of completion and entropy in terms of survival analysis, we write rate of completion $$\propto \exp(-\sum_{(\tilde{x}_{m},\tilde{y}_{n})}p(\tilde{x}_{m},\tilde{y}_{n})\log p(\tilde{x}_{m},\tilde{y}_{n}))$$, where $$(\tilde{x}_{m},\tilde{y}_{n})$$ ranges over the set of location grids onto which we map RSSI, $$p(\tilde{x}_{m},\tilde{y}_{n})$$ is the probability that the grid was visited, and the exponent is the entropy of the employee's location-visiting behavior when he had a task; the visits to each location $$(\tilde{x}_{m},\tilde{y}_{n})$$ make task completion $$\exp(-p(\tilde{x}_{m},\tilde{y}_{n})\log p(\tilde{x}_{m},\tilde{y}_{n}))$$ times faster. The “survival” time of a task is an exponential function of the negative rate of task completion, which in turn is the sum of the contributions from all locations that this employee visited, weighted by the frequencies with which this employee visited them. The contribution of a specific location per visit, $$-\log p(\tilde{x}_{m},\tilde{y}_{n})$$, is more critical when the location is less visited; however, over all visits, the more-frequently-visited locations contributed more to task completion than the less-visited locations, because $$p\log p$$ decreases to 0 as $$p$$ decreases to 0.
hdc.entropy = sapply(split(hdc.xy, hdc.xy$id), function(x) {
    p = table(paste(x$x, x$y))
    p = p/sum(p)
    sum(p * log(p))  # note: this is the negative entropy
})
hdc.accomplishment = c(table(as.character(trans$assigned.to)))
hdc.accomplishment = hdc.accomplishment[intersect(names(hdc.accomplishment), names(hdc.entropy))]
hdc.entropy = hdc.entropy[intersect(names(hdc.accomplishment), names(hdc.entropy))]
plot(-hdc.entropy, hdc.accomplishment, xlab = "entropy", ylab = "# of tasks assigned to")
suppressMessages(require(maptools))
pointLabel(-hdc.entropy, hdc.accomplishment, names(hdc.entropy),
    col = sapply(as.character(badge.assignment$role[match(names(hdc.entropy), badge.assignment$BID)]),
        function(x) switch(x, Pricing = "purple", "Base station" = "orange", Coordinator = "green", Configuration = "red", RSSI = "gray")))
legend("topleft", text.col = c("red", "purple"), legend = c("configuration", "pricing"))
We show with a quantile-quantile plot (c.f. figure below) that the distance between two persons was closer within 1 minute of a face-to-face discussion than within 1 hour of the face-to-face discussion, as a sanity test of the time stamps estimated from “jiffy” counts of the badges and of the indoor locations estimated from Zigbee RSSI from employees' badges to anchor nodes: we randomly take 200 records of IR proximity from the data set, randomly take 10 locations within 1 minute of the IR proximity from the sender badge and 10 locations from the receiver badge for each record, sort the 20 thousand pairwise distances (200 records $$\times10\times10$$ pairwise distances per record), and plot them against another 20 thousand sorted distances within 1 hour of IR proximity.
We find that with 90% probability two persons were within the distance of 1 cubicle in the 1 minute window of their face-to-face discussion, as compared to 70% probability in the 1 hour window. We would not find this structure if either the estimated time stamps had an error bigger than 1 minute or the estimated indoor locations had an error bigger than the distance of 1 cubicle. We can similarly check that two persons were closer to each other at the time of IR-proximity than of Zigbee-proximity, and that two persons had more IR-proximity and Zigbee-proximity records when their cubicles were closer.
IR.aggr2 = IR.aggr[IR.aggr$sender.id %in% unique(hdc.xy$id) & IR.aggr$local.id %in% unique(hdc.xy$id), ]
IR.aggr2$ndx.local = match(paste(IR.aggr2$local.id, strftime(IR.aggr2$date.time, "%Y-%m-%d %H:%M:00")),
    paste(hdc.xy$id, strftime(hdc.xy$time, "%Y-%m-%d %H:%M:00")))
IR.aggr2$ndx.sender = match(paste(IR.aggr2$sender.id, strftime(IR.aggr2$date.time, "%Y-%m-%d %H:%M:00")),
    paste(hdc.xy$id, strftime(hdc.xy$time, "%Y-%m-%d %H:%M:00")))
IR.dist = unlist(lapply(sample(which(!is.na(IR.aggr2$ndx.local) & !is.na(IR.aggr2$ndx.sender)), 200), function(n) {
    a = hdc.xy[IR.aggr2$ndx.local[n] + 0:9, c("x", "y")]
    b = hdc.xy[IR.aggr2$ndx.sender[n] + 0:9, c("x", "y")]
    round((outer(a$x[1:10], b$x[1:10], function(u, v) (u - v))^2 + outer(a$y[1:10], b$y[1:10], function(u, v) (u - v))^2)^0.5)
}))
w = as.numeric(hdc.xy$time)
IR.dist2 = unlist(lapply(sample(which(!is.na(IR.aggr2$ndx.local) & !is.na(IR.aggr2$ndx.sender)), 200), function(n) {
    ndx = which(IR.aggr2$local.id[n] == hdc.xy$id)
    ndx.local = ndx[abs(w[ndx] - as.numeric(IR.aggr2$date.time[n])) < 60 * 60]
    ndx = which(IR.aggr2$sender.id[n] == hdc.xy$id)
    ndx.sender = ndx[abs(w[ndx] - as.numeric(IR.aggr2$date.time[n])) < 60 * 60]
    a = hdc.xy[sample(ndx.local, 10, replace = TRUE), c("x", "y")]
    b = hdc.xy[sample(ndx.sender, 10, replace = TRUE), c("x", "y")]
    round((outer(a$x[1:10], b$x[1:10], function(u, v) (u - v))^2 + outer(a$y[1:10], b$y[1:10], function(u, v) (u - v))^2)^0.5)
}))
qqplot(IR.dist2, IR.dist, pch = ".", xlab = "distance distribution less than 1 hour from IR proximity",
    ylab = "distance distribution less than 1 minute from IR proximity", main = "Q-Q plot")
abline(coef = c(0, 1), col = "red")
pointLabel(quantile(IR.dist2, 1:9/10), quantile(IR.dist, 1:9/10), paste("", 1:9, sep = "."), col = "red")
# More Data
We repackaged the raw sensor data for investigators to inspect the call-center dynamics from more perspectives. The time stamps of the raw sensor data (directly from the badge hardware) were badge CPU clock counts, and started from 0 each time the badges were powered on to collect data.
• ChunkOffset.csv: the times (offset, YYYY-mm-dd HH:MM:SS in the local time of the call center) when the badges were powered on to collect a chunk of data. Hence the time stamp of each sensor record in a chunk is chunk offset + badge CPU clock / 374400 Hz.
• Accelerometer.bz2: readings from the 3-axis accelerometers (x,y,z) on the badges (local.id), timestamped with badge CPU clocks (local.time), tagged with the chunk identifier to recover the global time stamps, and sampled at 100Hz. The accelerometer data enable us to estimate the activities of the employees (walking, standing, sitting) and their interactions.
• AudioFeatures.bz2: audio features as described in [16].
• IR-raw.csv: same as IR.csv, containing no date.time field (YYYY-mm-dd HH:MM:SS in the local time of the call center), but containing the badge CPU clock (local.time) and tagged with the chunk identifier.
• Zigbee-raw.csv: same as Zigbee.csv, containing no date.time field, but containing the badge CPU clock of the sender badge (sender.time) and the receiver badge (receiver.time), as well as the chunk identifier of the receiver badge. The time of the call center is chunk offset + local.time/374400. (A minimal code sketch of this conversion appears after the reference list below.)
We estimate the time of the call center (YYYY-mm-dd HH:MM:SS) corresponding to badge power-on based on the following two facts: (1) The anchor nodes were never rebooted. Hence the CPU clock of each anchor node was non-decreasing over time, and sender.time in Zigbee-raw.csv is non-decreasing in each chunk and consistent across different chunks if sender.id is the ID of an anchor node. (2) The time when the data from the badge hardware were downloaded to a computer must be later than the times of the sensor records. For example, suppose the CPU clock range of one badge is from 0 to 3600*374400 (the CPU clock rate), corresponding to the CPU clock range from 3600*374400 to 3600*2*374400 of anchor node A in the Zigbee records and to the CPU clock range from 1800*374400 + 3600*374400 to 1800*374400 + 3600*2*374400 of anchor node B, and that the data on the badge were dumped at noon on 2007/03/30. We can infer that anchor node B started slightly earlier than 10:30am on 2007/03/30, and half an hour earlier than anchor node A. After we average over all chunks of sensor data and all anchor nodes, we estimate that our mapping from CPU clock to the time of the call center has less than 1 second of error. Indoor localization from Zigbee RSSI is based on the fact that employees were at their cubicles more than elsewhere, and compares the RSSI to anchor nodes per minute per employee with the signature RSSIs to anchor nodes recorded when employees were in their cubicles [8].
# Literature Review
Researchers have been using multi-agent models to simulate organizational dynamics and organizational performance based on simple generative rules [1, 3, 17] since long before the availability of sensors to accurately track the whole population of an organization. In particular, Carley proposed that organizational dynamics center around three components (tasks, resources, and individuals) and five relationships (temporal ordering of tasks, resource prerequisites of tasks, assignment of personnel to tasks, interpersonal relationships, and accessibility of resources to individuals). Previous successes suggest the strong potential to verify these generative rules with sensor data: fitting multi-agent models to real-world sensor data that track organizational dynamics, and even providing real-time interventions to organizations by combining multi-agent models and sensor data. A key psychological hypothesis behind organizational theory is transactive memory: an organization coping with complex tasks often needs a knowledge repertoire far beyond the memory capacity and reliability of any individual in the organization. Individuals collaborate to store this total repertoire by identifying the expertise of one another and distributing the repertoire among themselves. In the end, each individual has a subset of the repertoire, and an index of who knows what and how credible that source is.
The longer group members work with one another, the more they understand this distribution of expertise and weakness, so the more precise their communications become and the more productively they retrieve information and complete tasks. The face-to-face interaction network is important in understanding how individuals completed tasks in the data set's server configuration firm, because information flow and task solutions result from this face-to-face network. Location tracking is critical for pinpointing the direction of information flow. If A visits B, this means that information flows from B to A; if many people visited A, this is very different from A visiting many people. We can use the following generative multi-agent process, compatible with organizational dynamics theory, to model the dynamics and performance of the IT firm. An individual iterates among four states during his work: working on his assignment by himself, asking for help from another individual, giving help to another individual, or idling. This individual enters and exits different states with different probabilities, proportional to the rates of different events: how often tasks come, how he and his counterparts make choices, and how effective these choices are towards assignment closing. Hence the number of tasks closed by an individual is inversely proportional to the average "survival time" of a task (the time for this individual to finish a task), and the average survival time of a task is an exponential function of the negative rate with which this individual finishes tasks in his different states [12]. Going to the cubicle of an individual with the right piece of knowledge will increase productivity by a certain factor, depending on how often this right piece of knowledge is needed and how effective the communication is. Findings and literature reviews on this data set include [8, 19]. The background on tracking organizational dynamics with badge hardware can be found in [6, 18, 15, 11]. Models of organizational dynamics are described in, for example, [4, 5, 9, 14].
# References
1. Robert Axelrod. The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration. Princeton University Press, 1997.
2. Ronald S. Burt. Structural Holes: The Social Structure of Competition. Harvard University Press, Cambridge, Mass., 1st Harvard University Press pbk. ed., 1995.
3. Kathleen M. Carley. Computational organizational science and organizational engineering. Simulation Modeling Practice and Theory, 10(5-7):253-269, 2002.
4. Claudio Castellano, Santo Fortunato, and Vittorio Loreto. Statistical physics of social dynamics. Reviews of Modern Physics, 81(2):591-646, 2009.
5. Christophe P. Chamley. Rational Herds: Economic Models of Social Learning. Cambridge University Press, 2003.
6. Wen Dong. Modeling the Structure of Collective Intelligence. PhD thesis, MIT, 2010.
7. Wen Dong, Bruno Lepri, and Alex Pentland. Modeling the co-evolution of behaviors and social relationships using mobile phone data. In MUM, pages 134-143, 2011.
8. Wen Dong, Daniel Olguín Olguín, Benjamin N. Waber, Taemie Kim, and Alex Pentland. Mapping organizational dynamics with body sensor networks. In Guang-Zhong Yang, Eric M. Yeatman, and Chris McLeod, editors, BSN, pages 130-135. IEEE, 2012.
9. Joshua M. Epstein. Generative Social Science: Studies in Agent-Based Computational Modeling (Princeton Studies in Complexity). Princeton University Press, 2007.
10. Joe H. Ward, Jr.
Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58(301):236-244, 1963.
11. Taemie Kim. Enhancing distributed collaboration using sociometric feedback. PhD thesis, MIT, 2011.
12. Jerald F. Lawless. Statistical Models and Methods for Lifetime Data (2nd ed.). John Wiley and Sons, 2003.
13. Leah Lievrouw and Sonia Livingstone, editors. Smart agents and organizations of the future, chapter 12, pages 206-220. Handbook of New Media. Sage Publications, Inc., 2002.
14. Nigel Gilbert. A generic model of collectivities. In ABModSim 2006, International Symposium on Agent Based Modeling and Simulation. University of Vienna: European Meeting on Cybernetics Science and Systems Research, 2006.
15. Daniel Olguín Olguín. Sensor-based organizational design and engineering. PhD thesis, MIT, 2011.
16. Alex Pentland. Honest Signals. MIT Press, 2008.
17. Ron Sun. Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation. Cambridge University Press, 2006.
18. Ben Waber. Understanding the link between changes in social support and changes in outcomes with the sociometric badge. PhD thesis, MIT, 2011.
19. Lynn Wu, Benjamin N. Waber, Sinan Aral, Erik Brynjolfsson, and Alex Pentland. Mining face-to-face interaction networks using sociometric badges: Predicting productivity in an IT configuration task. In ICIS, page 127, 2008.
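The time-stamp reconstruction described under “More Data” boils down to one line of arithmetic per record. Here is a minimal sketch (not code from the original page; the column names chunk and offset in ChunkOffset.csv are assumptions):

```r
## Convert badge CPU clock counts to call-center wall-clock time:
## wall-clock time = chunk power-on offset + CPU clock / 374400 Hz.
chunk.offsets = read.csv("ChunkOffset.csv")
chunk.offsets$offset = as.POSIXct(chunk.offsets$offset, tz = "America/Chicago")

to.wall.clock = function(chunk.id, local.time) {
    ## look up when this chunk's badge was powered on, then add the elapsed seconds
    chunk.offsets$offset[match(chunk.id, chunk.offsets$chunk)] + local.time / 374400
}

## e.g. a record 1800 * 374400 clock ticks into a chunk falls exactly
## 30 minutes after that chunk's power-on time
```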
2017-11-25 09:20:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48907455801963806, "perplexity": 4248.008782265693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809746.91/warc/CC-MAIN-20171125090503-20171125110503-00084.warc.gz"}
https://tex.stackexchange.com/questions/117812/decrease-top-margin-of-some-chapters
# Decrease top margin of some chapters
The problem is that I cannot keep all the lines from spilling onto the next page without making the abstract page a bit weird (too small a margin between title and text, too small a line stretch...), and I think the easiest way to achieve that is to decrease the margin before the title. Any idea? Thank you!
MWE:
\documentclass{book}
% Some dummy text
\usepackage{lipsum}
% Default line stretch of the whole document
\renewcommand{\baselinestretch}{1.5}
% Starting document
\begin{document}
% The chapter I'd like to fit in one page
\renewcommand{\baselinestretch}{0.5} % Reduce line stretch
% \vspace*{-1.1cm} % doesn't work properly
\chapter*{Abstract}
\vspace*{-1.1cm} % Reduce space between title and text
\lipsum[2-4] % Dummy text
\lipsum[13]
\renewcommand{\baselinestretch}{1.5} % Restore line stretch
% A normal chapter whose behaviour I don't want to change
\chapter{Introduction}
\lipsum[6-20] % Dummy text
\end{document}
Peaso
• Welcome to TeX.SX! Please add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – mafp Jun 5 '13 at 21:53
• See Chapter formatting or Space above chapter with titlesec. I'm sure one of these will solve your problem. Once you've had some time to look at them, respond with some feedback as to their usefulness and why/if you need more help. – Werner Jun 5 '13 at 22:54
• @mafp: I've added an MWE. As you can see, there are two chapters: a special one (the Abstract) and a normal one (Introduction). I'd like to decrease the top margin ONLY for the abstract. – Peaso Jun 6 '13 at 15:04
• @Werner: thank you for your indications. The problem is that I only want to change the usual behaviour of a certain chapter, not all chapters. I'll take a look at titlesec. – Peaso Jun 6 '13 at 15:06
• I still have no idea how to modify just the margins of some chapters. All the examples of the titlesec package I've found apply to the whole document. Any help? – Peaso Jun 8 '13 at 19:46
Standard LaTeX scoping applies. Don't want it global? Limit its scope by fencing it, put it in a cage, or as we like to say, put it in a group. A group can be as simple as a pair of braces. I defined a helper command that makes life a bit easier. Space before chapters and contents give us the solution on how to change the spacing. Disclaimer: I cannot advise any human being to do stuff like that. If you want to save space to save the rain forest, don't print it at all.
\documentclass{book}
\usepackage{etoolbox}
\tracingpatches
\makeatletter
\newcommand{\makeCondensedChap}{%
  % The body of this helper was lost in extraction; what follows is a
  % plausible reconstruction: patch book.cls's un-numbered chapter head
  % to use less vertical space. The exact lengths are guesses.
  \patchcmd{\@makeschapterhead}{\vspace*{50\p@}}{\vspace*{-20\p@}}{}{}%
  \patchcmd{\@makeschapterhead}{\vskip 40\p@}{\vskip 20\p@}{}{}%
}
\makeatother
\usepackage{lipsum}
\renewcommand{\baselinestretch}{1.5}
\begin{document}
{%Open the group
\makeCondensedChap
\chapter*{Abstract}
}%closing the group
\lipsum[2-4]
\lipsum[13]\par
\chapter*{Introduction}
\lipsum[6-20]
\end{document}
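For completeness, here is a titlesec-based sketch of the same idea, since the comments point to that package. This is an added illustration, not part of the original answer; the negative length is a guess, and {0pt}{50pt}{40pt} is what the titlesec manual lists as the book-class defaults for \chapter:

```latex
\documentclass{book}
\usepackage{titlesec}
\usepackage{lipsum}
\begin{document}
% Shrink the space above and below the chapter title for the abstract only...
\titlespacing*{\chapter}{0pt}{-30pt}{20pt}% {left}{before-sep}{after-sep}
\chapter*{Abstract}
\lipsum[2-4]
% ...then restore the documented defaults for every later chapter.
\titlespacing*{\chapter}{0pt}{50pt}{40pt}
\chapter{Introduction}
\lipsum[6-20]
\end{document}
```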
2019-11-12 03:48:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.745251476764679, "perplexity": 1858.1190976182509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664567.4/warc/CC-MAIN-20191112024224-20191112052224-00212.warc.gz"}
http://mikesiers.com/book-review-text-mining-with-r
Michael Siers
Book Review - Text Mining with R
Text Mining with R is very similar to the book R for Data Science (of which I'm a big fan). Whereas the latter is written by the creators of the tidyverse package, the former is written by the creators of the tidytext package.
We thus define the tidy text format as being a table with one token per row. - Text Mining with R, Julia Silge & David Robinson
The book starts off strong with captivating and easy-to-follow examples of creating tidy text dataframes from unstructured text documents, sentiment analysis, tf-idf, & n-grams. The chapters which follow on transformation to-and-from the tidy text format were not as exciting but provide a solid reference for projects. The book concludes with 3 case studies which give the reader an idea of what might constitute an end-to-end text analysis project. I bought this book amongst several others on the subject of R. I love books with beautiful coloured visualisations and highlighted syntax, such as my copies of R for Data Science and Forecasting: Principles and Practice. Unfortunately my copy of Text Mining with R was in black and white, which made some of the visualisations hard to read. I gladly would have paid $20-$30 more just to have coloured print. I'd recommend this book to data scientists who are new to text analysis, especially if they are familiar with the tidyverse. I have already begun sharing my newly gained knowledge at work on the matter and have a project in the pipeline.
My Ratings
Value for Money: ★★★★☆
Usefulness for Work: ★★★★☆
Expertise of the Authors: ★★★★★
Difficulty to Understand: ★★★☆☆
Printed Aesthetics: ★★☆☆☆
Average Score: 3.6 / 5.0
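To make the quoted one-token-per-row definition concrete, here is a tiny illustration of my own (not an excerpt from the book) using the tidytext package the authors wrote:

```r
library(dplyr)
library(tidytext)

text_df <- tibble(line = 1:2,
                  text = c("Text Mining with R is very similar",
                           "to the book R for Data Science"))

## unnest_tokens() splits the text column into words, producing the
## tidy text format: a table with one token (word) per row,
## with the originating line number carried along.
text_df %>% unnest_tokens(word, text)
```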
2019-11-15 23:57:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42115288972854614, "perplexity": 2669.600983770567}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.22/warc/CC-MAIN-20191115222436-20191116010436-00156.warc.gz"}
http://www.azimuthproject.org/azimuth/show/Milankovitch%20cycle
# Contents
## Idea
The Milankovitch cycles are periodic or quasiperiodic changes in the parameters of the Earth’s orbit and tilt, which in turn affect the climate. The three major types of Milankovitch cycle are:
• changes in the eccentricity of the Earth’s orbit (changes greatly exaggerated in the original figure):
• changes in the obliquity, or tilt of the Earth’s axis:
• precession, meaning changes in the direction of the Earth’s axis relative to the fixed stars:
These changes do not affect the overall annual amount of solar radiation hitting the Earth, but they affect the strength of the seasons in different ways at different latitudes. It is widely believed that they are partially responsible for the glacial cycles. However, the details of how this occurs are complex and poorly understood, as the following graph makes clear. Some form of amplification, e.g. stochastic resonance, may be involved.
## History
A brief history of Milankovitch cycles and their effect on glaciation is quoted below:
Joseph Adhémar (1842) formulated the conjecture that the precession of Earth’s orbit caused ice ages, because of its effects on the seasonal distribution of incoming solar radiation (insolation). He realised that one season had to be critical given that the changes in winter and summer insolation caused by precession cancel each other (Herschell, 1830). James Croll (1875) further developed the theory and appreciated the importance of the three Earth orbital elements determining insolation: eccentricity, obliquity (angle between the equator and the ecliptic) and the longitude of the perihelion. Milutin Milankovitch (1920, 1941) is quoted as the first one to have established a self-contained mathematical theory linking orbital elements to insolation, and insolation to climate changes. On a suggestion of V. Köppen, he calculated the secular evolution of the summer radiation of the northern latitude (published in Köppen and Wegener, 1925), following the hypothesis that glacial cycles are driven by the effects of changes in summer insolation on snow ablation rate during this season (the conjecture had in fact been earlier expressed by Murphy, 1876, and it contradicted Croll’s views). Summer insolation is largest when obliquity is large and/or when summer in the northern hemisphere corresponds to the time of perihelion passing. Milankovitch later substantiated the hypothesis by explicitly calculating the effects of radiation changes on the position of the snow line. By the mid-seventies the Earth’s orbital parameters were known over the last million years to a good accuracy thanks to the work of several generations of astronomers who based themselves on foundations laid by Laplace and Le Verrier. A decisive step was made by Berger (1978), who expressed in an analytical form the Fourier decomposition of the Earth’s orbital parameters relevant for the astronomical theory of palaeoclimates. This work constitutes the first demonstration that the spectrum of climatic precession is dominated by periods of 19, 22 and 24 kyr, that of obliquity by a period of 41 kyr, and that eccentricity has periods of 400 kyr, 125 kyr and 96 kyr. The most accurate solution to date is La04 (Laskar et al., 2004).
## Details
There are three major forms of Milankovitch cycle:
Eccentricity: The Earth’s orbit is an ellipse, and the eccentricity of this ellipse says how far it is from being circular.
But the Earth’s orbit slowly changes shape: it varies from being nearly circular (eccentricity of 0.005) to being more strongly elliptical (eccentricity of 0.058), with a mean eccentricity of 0.028. There are several periodic components to these variations. The strongest occurs with a period of 413,000 years, and changes the eccentricity by ±0.012. Two other components have periods of 95,000 and 125,000 years. The eccentricity affects the percentage difference in incoming solar radiation between the perihelion, when the Earth is closest to the Sun, and the aphelion, when it is farthest from the Sun. This works as follows. The percentage difference between the Earth’s distance from the Sun at perihelion and aphelion is twice the eccentricity, and the percentage change in incoming solar radiation is about twice that. The first fact follows from the definition of eccentricity, while the second follows from differentiating the inverse-square relationship between brightness and distance. Right now the eccentricity is 0.0167, or 1.67%. Thus, the distance from the Earth to the Sun varies 3.34% over the course of a year. This in turn gives an annual variation in incoming solar radiation of about 6.68%. Note that this is not the cause of the seasons, which arise due to the Earth’s tilt, and occur at different times in the northern and southern hemispheres.
Obliquity: The angle of the Earth’s axial tilt with respect to the plane of its orbit, called the obliquity, varies between 22.1° and 24.5° in a roughly periodic way, with a period of 41,000 years. When the obliquity is high, the strength of seasonal variations is stronger. Right now the obliquity is 23.44°, roughly halfway between its extreme values. It is decreasing, and will reach its minimum value around the year 10,000 CE.
Precession: The slow turning in the direction of the Earth’s axis of rotation relative to the fixed stars, called precession, has a period of roughly 26,000 years. As precession occurs, the seasons drift in and out of phase with the perihelion and aphelion of the Earth’s orbit. Right now the perihelion occurs during the southern hemisphere’s summer, while the aphelion is reached during the southern winter. This tends to make the southern hemisphere seasons more extreme than the northern hemisphere seasons. The gradual precession of the Earth is not due to the same physical mechanism as the wobbling of a top. That wobbling does occur, but it has a period of only 427 days. The 26,000-year precession is due to tidal interactions between the Earth, Sun and Moon.
## Glacial cycles
The relation between the Milankovitch cycles and glacial cycles is complex, fascinating, but also poorly understood and controversial. One can see why here:
• The blue curve on top shows the obliquity, $\epsilon$.
• The green curve is the eccentricity, $e$.
• The purple curve is $\sin \varpi$, where $\varpi$ is the longitude of the perihelion, that is, the angle between the vernal equinox and the perihelion, with the Sun as the angle vertex. This goes from 0 to $2 \pi$ as precession takes place.
• The red curve is the precession index $e \sin \varpi$, which plays an important role (see Insolation for details).
• The black curve summarizes the combined effects of all Milankovitch cycles: $\overline{Q}^{day}$ is the daily-averaged incoming solar radiation at the top of the atmosphere on the day of the summer solstice at 65° north latitude, measured in watts/meter$^2$.
This says how much solar energy the Earth gets on midsummer’s day at this latitude. To see how this is calculated from the previous quantities, see Insolation.

• The benthic foram data comes from Lorraine E. Lisiecki and M. E. Raymo, A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records, Paleoceanography 20 (2005), PA1003. Benthic forams are little shells found in ancient sediments on the sea floor. Much of the change in oxygen-18 in these shells is thought to be caused by sequestration of the more common lighter isotope of oxygen, oxygen-16, in large continental ice sheets.

• The Vostok ice core data comes from J. R. Petit et al., Vostok ice core data for 420,000 years, IGBP PAGES/World Data Center for Paleoclimatology Data Contribution Series #2001-076, NOAA/NGDC Paleoclimatology Program. It uses isotopic analysis from an ice core sample taken from Vostok, Antarctica: the temperature is inferred from the concentration of deuterium in the ice.

The above graph was taken from Wikimedia Commons. The Milankovitch cycles were computed using Fortran programs available online; at that website you can create your own table of the eccentricity, obliquity and longitude of perihelion for selected years by entering the minimum year, maximum year, and yearly increment.

## Stochastic resonance

There have been interesting attempts to argue that stochastic resonance is important in amplifying the small effects of Milankovitch cycles to create glacial cycles. Work along these lines began with Benzi, Parisi, Sutera, and Vulpiani:

• R. Benzi, A. Sutera, and A. Vulpiani, The mechanism of stochastic resonance, J. Phys. A 14 (1981), L453.
• R. Benzi, G. Parisi, A. Sutera, and A. Vulpiani, Stochastic resonance in climatic change, Tellus 34 (1982), 10–16.

For reviews of this line of work, a fascinating but highly mathematical treatment using energy balance models, and more, see Stochastic resonance.

## The effect of changes in eccentricity

Changes in obliquity clearly have very little effect on the global, annual average insolation, since the Earth is nearly spherical. It is more tricky to determine how changes in the eccentricity of the Earth’s orbit affect this quantity. Let us work this out in detail, following a calculation presented by Greg Egan on the Azimuth blog. While the result is surely not new, Egan’s approach makes nice use of the fact that both gravity and solar radiation obey an inverse-square law!

Here is his calculation. The angular velocity of the planet is $\frac{d \theta}{d t} = \frac{J}{m r^2}$, where $J$ is the constant orbital angular momentum of the planet and $m$ is its mass, so if the radiant energy delivered per unit time to the planet is $\frac{d U}{d t} = \frac{C}{r^2}$ for some constant $C$, the energy delivered per unit of angular progress around the orbit is

$\frac{d U}{d \theta} = \frac{C}{r^2} \frac{d t}{d \theta} = \frac{C m}{J}.$

So, the total energy delivered in one period will be $U=\frac{2\pi C m}{J}$.

How can we relate the orbital angular momentum $J$ to the shape of the orbit? If you equate the total energy of the planet, kinetic $\frac{1}{2}m v^2$ plus potential $-\frac{G M m}{r}$, at its aphelion $r_1$ and perihelion $r_2$, and use $J$ to get the velocity in the kinetic energy term from its distance, $v=\frac{J}{m r}$, when we solve for $J$ we get:

$J = m \sqrt{\frac{2 G M r_1 r_2}{r_1+r_2}} = m b \sqrt{\frac{G M}{a}}$

where $a=\frac{1}{2} (r_1+r_2)$ is the semi-major axis of the orbit and $b=\sqrt{r_1 r_2}$ is the semi-minor axis.
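In more detail (spelling out the algebra just described): energy conservation at the two apsides says

$\frac{J^2}{2 m r_1^2} - \frac{G M m}{r_1} = \frac{J^2}{2 m r_2^2} - \frac{G M m}{r_2},$

and solving this for $J^2$ gives $J^2 = \frac{2 G M m^2 r_1 r_2}{r_1 + r_2}$, which is the formula above.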
But we can also relate $J$ to the period of the orbit, $T$, by integrating the rate at which orbital area is swept out by the planet,

$\frac{1}{2} r^2 \frac{d \theta}{d t} = \frac{J}{2 m}$

over one orbit. Since the area of an ellipse is $\pi a b$, this gives us:

$J = \frac{2 \pi a b m}{T}.$

Equating these two expressions for $J$ shows that the period is:

$T = 2 \pi \sqrt{\frac{a^3}{G M}}$

So the period depends only on the semi-major axis; for a fixed value of $a$, it’s independent of the eccentricity.

If we agree to hold the orbital period $T$, and hence the semi-major axis $a$, constant and only vary the eccentricity of the orbit, we have:

$U=\frac{2\pi C m}{J} = \frac{2\pi C}{b} \sqrt{\frac{a}{G M}}$

Expressing the semi-minor axis in terms of the semi-major axis and the eccentricity, $b^2 = a^2 (1-e^2)$, we get:

$U=\frac{2\pi C}{\sqrt{G M a (1-e^2)}}$

So to second order in $e$, we have:

$U = \frac{\pi C}{\sqrt{G M a}} (2+e^2)$

The expressions simplify if we consider the average rate of energy delivery over an orbit, which makes all the grungy constants related to gravitational dynamics go away:

$\frac{U}{T} = \frac{C}{a^2 \sqrt{1-e^2}}$

or to second order in $e$:

$\frac{U}{T} = \frac{C}{a^2} (1+\frac{1}{2} e^2)$

We can now work out how much the actual changes in the Earth’s orbit affect the amount of solar radiation it gets! In what follows we assume that the semi-major axis is indeed held constant while eccentricity changes; from the work of Laskar (see below) this is approximately correct. According to Wikipedia:

The shape of the Earth’s orbit varies in time between nearly circular (low eccentricity of 0.005) and mildly elliptical (high eccentricity of 0.058) with the mean eccentricity of 0.028.

The total energy the Earth gets each year from solar radiation is proportional to

$\frac{1}{\sqrt{1-e^2}}$

where $e$ is the eccentricity. When the eccentricity is at its lowest value, $e = 0.005$, we get

$\frac{1}{\sqrt{1-e^2}} = 1.0000125$

When the eccentricity is at its highest value, $e = 0.058$, we get

$\frac{1}{\sqrt{1-e^2}} = 1.00168626$

1.00168626/1.0000125 = 1.00167373

In other words, a change of merely 0.167%. That’s very small. And the effect on the Earth’s temperature would be even less!

Naively, we can treat the Earth as a greybody. Since the temperature of a greybody is proportional to the fourth root of the power it receives, a 0.167% change in solar energy received per year corresponds to a percentage change in temperature roughly one fourth as big. That’s a 0.042% change in temperature. If we imagine starting with an Earth like ours, with an average temperature of roughly 290 kelvin, that’s a change of just 0.12 kelvin!

The upshot seems to be this: in a naive model without any amplifying effects, changes in the eccentricity of the Earth’s orbit would cause temperature changes of just 0.12 °C! This is much less than the roughly 5 °C change we see between glacial and interglacial periods. So, if changes in eccentricity are important in glacial cycles, we have some explaining to do. Possible explanations include season-dependent phenomena and climate feedback effects.
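For the record, here is the greybody step as a formula (standard Stefan–Boltzmann reasoning, added for clarity): in radiative equilibrium $T \propto P^{1/4}$, so for small changes

$\frac{\Delta T}{T} \approx \frac{1}{4} \frac{\Delta P}{P}, \qquad \Delta T \approx 290\,\mathrm{K} \times \frac{0.00167}{4} \approx 0.12\,\mathrm{K}.$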
## References

### News articles and overviews

One news item surveys 17 papers in an issue of Paleoceanography entitled ‘Milankovitch climate cycles through geologic time’. Another news item provides a quick summary of John Imbrie’s work on the SPECMAP project, which uses spectral analysis to look for effects of the Milankovitch cycles on different aspects of our climate.

The following abstract is from a paper by James Hansen and Makiko Sato on what Milankovitch oscillations tell us about climate sensitivity:

Abstract: Milankovic climate oscillations help define climate sensitivity and assess potential human-made climate effects. We conclude that Earth in the warmest interglacial periods was less than 1°C warmer than in the Holocene and that goals of limiting human-made warming to 2°C and CO2 to 450 ppm are prescriptions for disaster. Polar warmth in prior interglacials and the Pliocene does not imply that a significant cushion remains between today’s climate and dangerous warming, rather that Earth today is poised to experience strong amplifying polar feedbacks in response to moderate additional warming. Deglaciation, disintegration of ice sheets, is nonlinear, spurred by amplifying feedbacks. If warming reaches a level that forces deglaciation, the rate of sea level rise will depend on the doubling time for ice sheet mass loss. Gravity satellite data, although too brief to be conclusive, are consistent with a doubling time of 10 years or less, implying the possibility of multi-meter sea level rise this century. The emerging shift to accelerating ice sheet mass loss supports our conclusion that Earth’s temperature has returned to at least the Holocene maximum. Rapid reduction of fossil fuel emissions is required for humanity to succeed in preserving a planet resembling the one on which civilization developed.

### Technical papers

#### Hays, Imbrie and Shackleton 1976

• J. D. Hays, J. Imbrie and N. J. Shackleton, Variations in the Earth’s orbit: pacemaker of the ice ages, Science 194 (1976), 1121–1132.

This was a major analysis that applied Fourier analysis to climate proxies. From several hundred sediment cores studied stratigraphically by the CLIMAP project, they selected two (RC11-120 and E49-18) from the Indian Ocean and measured:

1) $\delta^{18}O$, the oxygen isotopic composition of planktonic foraminifera,
2) Ts, an estimate of summer sea-surface temperatures at the core site, derived from a statistical analysis of radiolarian assemblages,
3) percentage of Cycladophora davisiana, the relative abundance of a radiolarian species not used in the estimation of Ts.

Identical samples were analyzed for the three variables at 10-cm intervals through each core. In their findings for RC11-120 and E49-18, and in the power spectra obtained after some processing described in the paper, there seem to be peaks at periods of roughly 100 ka, 42 ka, 23 ka, and 19 ka. (The figures showing the raw data and the power spectra are not reproduced here.)

#### Berger 1989

This piece of work reviews evidence for the Milankovitch explanation of glacial cycles, at least up to 1989:

• A. Berger, Pleistocene climatic variability at astronomical frequencies, Quaternary International 2 (1989), 1–14.

Abstract: Spectral analysis of geological records show periodicities corresponding to those calculated for the eccentricity (400 and 100 ka), the obliquity (41 ka) and the climatic precession (23 and 19 ka). It is precisely the geological observations of this bi-partition of the precessional peak, confirmed to be real in astronomical computations, which was one of the most impressive of all tests for the Milankovitch theory.
Concerning the question of whether or not the observed cycles account for most of the climatic variability having periods in the range predicted by the astronomical theory, substantial evidence (from cross-spectral analysis, coherency analysis, and modelling) is provided that, at least near frequencies of variation in obliquity and precession, a considerable fraction of the climate variance is driven in some way by insolation changes accompanying changes in the earth’s orbit. The variance components centered near a 100 ka cycle dominate most ice volume records and seem in phase with the eccentricity cycle, although the exceptional strength of this cycle needs a non-linear amplification by the glacial ice sheets themselves and associated feedbacks. As the insolation spectra change with latitudes and with the type of parameters considered, the diversity of the spectra of different climatic proxy data recorded in different places of the world over different periods is used to better understand how the climate system responds to the insolation forcing. This study of the physical mechanisms involved is also achieved through the analysis of the log-log shape of the geological records and through the comparison, in the frequency domain, between simulated climatic time series and proxy data. The evidence, both in the frequency and in the time domain, that orbital influences are felt by the climate system, implies that the astronomical theory may provide a clock with which to date Quaternary sediments with a precision several times greater than is now possible.

Berger cites the aforementioned paper of Hays, Imbrie and Shackleton as important in finding empirical evidence for long-term climate cycles. He writes:

By the 1970s, the grounds upon which the first strong test of the Milankovitch theory was going to be based, were settled. Judicious use of radiometric dating and other techniques gradually clarified the details of the time scale (Broecker et al., 1968); better instrumental methods came on the scene for using oxygen isotope data as ice age relics (Shackleton and Opdyke, 1973); ecological methods of core interpretation were perfected (Imbrie and Kipp, 1971); global climates in the past were reconstructed (CLIMAP, 1976); and astronomical calculations were checked and refined (Vernekar, 1972; Berger, 1976b). Unfortunately it is not possible in this paper to give credit to all scientists responsible for the new picture of ice ages and their cause (refer to Imbrie and Imbrie, 1979; Berger, 1980a, 1988; Imbrie, 1982).

Scientists were thus ready to show that the climatic features in the geological record go through precisely the same rhythms as do the orbital parameters of the earth, with sufficiently close links in time (in phase) to confirm a relationship of cause and effect. In August 1975, at the climate conference of the World Meteorological Organization in Norwich, the spectral characteristics of two time series independently built – one in geology and one in astronomy – were compared, for the first time in the history of climate investigation. The joint efforts of Hays, Imbrie, and Shackleton (published in full in Science, December 1976) demonstrated that geologic spectra contain substantial variance components at many frequencies similar to those obtained by Berger from astronomical computations (published in Nature, January 1977).
The principal feature of the variance spectra obtained from a composite deep-sea core from the Indian Ocean (RC11-120 and E49-18), spanning the past 468 ka, was a characteristic red noise signal over periods ranging from about 100 ka to some thousand years. Superimposed on this signal were several peaks representing significant responses at orbital frequency (Fig. 1). On both isotopic (ice volume) and temperature radiolarian spectra, peaks appeared for cycles roughly 100 ka, 42 ka, 23 ka, and 19 ka long.

The obliquity signal with a period of about 42 ka was gratifyingly consistent and detectable for the past 300 ka. The related climatic variations lagged about 9 ka behind the rhythm of the changing tilt of the earth. This consistency was less, however, in the older part of the record, because of a presumed less accurate time scale. This led Hays et al. (1976) to adjust the time scale slightly (the ‘tune-up’ being well within the uncertainties of the dating) to obtain the same phase relationship going back to 425 ka BP.

A 23 ka cycle, corresponding with the precession of the equinoxes, also appeared strongly in the geological record. Although the uncertainties about dating produced relatively greater uncertainties in the phase relationships for this higher frequency, and the earth’s orbit was almost circular between 350 ka and 450 ka, the tune-up adjustments still showed that the climatic factors change closely in step with the wobble over the whole period. The most delicate and impressive of all tests for the Milankovitch theory, however, came with a closer examination of the astronomical solution. The precession of the equinoxes was split into two frequency components with periods of 23 ka and 19 ka. This was calculated both by Berger (1977) from his theoretical investigation of the astronomical theory and observed by Imbrie in the Hays and Shackleton records (1976).

Surprisingly, however, the dominant cycle in Hays et al.’s data had a consistent period of 100 ka, with the coldest periods of ice age activity coinciding dramatically with the periods of near-circularity of the earth’s orbit. This is exactly the opposite of what Milankovitch claimed, because in his theory eccentricity could only modulate the size of the precession effect, but the conclusion is compatible as far as the total energy effect is concerned (lower eccentricity, lower total energy received from the sun). As the 100 ka cycle is extremely weak in the insolation data set, with obliquity and precession sharing almost entirely the total variance, one of the mysteries which remained when trying to relate paleoclimates to astronomical insolations was to find how this cycle could be enhanced to become the dominant cycle observed in geological records.

#### Laskar 1990–2004

Laskar seems to be an authority on celestial mechanics. Relevant papers of his include:

1. J. Laskar, The chaotic motion of the solar system. A numerical estimate of the chaotic zones, Icarus 88 (1990), 266-291.
2. J. Laskar, F. Joutel and F. Boudin, Orbital, precessional and insolation quantities for the Earth from −20 Myr to +10 Myr, Astronomy and Astrophysics 270 (1993), 522-533.
3. J. Laskar, M. Gastineau, F. Joutel, P. Robutel, B. Levrard, and A.C.M. Correia, A long term numerical solution for the insolation quantities of Earth, Astronomy and Astrophysics 428 (2004), 261-285.

In (2) section 4.1, they state that for the purpose of computing insolation, they neglect the secular variations of the semi-major axis.
There is a diagram for the variation of the semi-major axis in (2) (fig. 11) which seems to justify this. In (3) section 7, they state that an early version of this fact was known to Laplace and Lagrange! In these papers they describe a rather complete computation of orbit elements, obliquity and precession for several tens of millions of years. There are some uncertainties, for instance due to the tides on Earth. In (2) section 7.2 it is taken into account that the magnitude of the tides might change during an ice age. This is further discussed in (3) section 4.2, but I’m not entirely convinced that they really know by how much. They do seem pretty convinced that their calculations are correct from −20 My to the present. If you try to extend the calculations to earlier times, you will eventually be up against chaos. In (3) section 11 they state that it is hopeless to try to get to the Mesozoic, before −65 My. The best place to start reading these papers is probably (3), especially because of the interesting diagrams and tables.

#### Bennett 1990

Here is a curious paper on Milankovitch cycles and evolutionary biology:

Abstract: The Quaternary ice ages were paced by astronomical cycles with periodicities of 20-100 ky (Milankovitch cycles). These cycles have been present throughout earth history. The Quaternary fossil record, marine and terrestrial, near to and remote from centers of glaciation, shows that communities of plants and animals are temporary, lasting only a few thousand years at the most. Response of populations to the climatic changes of Quaternary Milankovitch cycles can be taken as typical of the way populations have behaved throughout earth history. Milankovitch cycles thus force an instability of climate and other aspects of the biotic and abiotic environment on time scales much less than typical species durations (1-30 m.y.). Any microevolutionary change that accumulates on a time scale of thousands of years is likely to be lost as communities are reorganized following climatic changes. A four-tier hierarchy of time scales for evolutionary processes can be constructed as follows: ecological time (thousands of years), Milankovitch cycles (20-100 k.y.), geological time (millions of years), mass extinctions (approximately 26 m.y.). “Ecological time” and “geological time” are defined temporally as the intervals between events of the second and fourth tiers, respectively. Gould’s (1985) “paradox of the first tier” can be resolved, at least in part, through the undoing of Darwinian natural selection at the first tier by Milankovitch cycles at the second tier.

#### Gallée et al 1991-1992

These papers present a model that simulates the last glacial cycle starting from insolation data over the last 122 kiloyears. Ice albedo feedback plays a crucial role in amplifying the Milankovitch cycles. The importance of many other effects is studied, with many interesting conclusions. The effect of changing CO2 concentrations is neglected except in Section 6 of the second paper, and even here the feedback cycle involving temperature and CO2 concentration is not discussed. There are many interesting figures; one of the most basic is a graph of June insolation as a function of latitude for the last 130,000 years (not reproduced here).

#### Huybers 2009

In this paper, Huybers proposes the following: The eccentricity period does not matter, that’s a red herring. Throw it away, don’t think about it. It’s all about tilt (obliquity). The glaciation 40 ky period indeed corresponds to the dominant obliquity period.
Sometimes, this period gets doubled or tripled, giving periods of 80 ky or 120 ky. These two cases he subsumes under the common name $\sim100$ ky. The problems he wants to solve are: why does doubling and tripling occur, and why do we get sequences of single periods (as in the early Pleistocene), and sequences of repeated periods (as in the late Pleistocene and Holocene)? That is, following a single period, we will most likely get a single period, and following a $\sim 100$ ky period, we will most likely get a $\sim 100$ ky period.

As a preliminary, and as supporting evidence, he makes one strange observation about the $\delta^{18}O$ data. The curve $\gamma$ describing this, which presumably also measures the total amount of ice on the planet, has the following strange property: the derivative $\gamma'$ is negatively correlated to $\gamma$, with a time lag of 9 ky. The long time lag is curious, and he does suggest several possible physical explanations for this correlation, without choosing his favourite. It seems (though it’s not stated absolutely clearly in the article) that this correlation is mainly valid in the ablation phase, that is, during deglaciation. An interesting fact is that the correlation persists through the change from 40 ky to $\sim100$ ky periods. This seems to imply that the physics of ablation is the same in both cases. I think that Huybers takes this as evidence that the forcing is also the same. Since eccentricity can’t be blamed when the period is 40 ky, eccentricity can’t be guilty even when the period is 100 ky.

Based on this observation, Huybers writes down a time-dependent difference equation, whose time dependence codifies a periodic forcing with period 40 ky. He states that the difference equation has chaotic behaviour, but also that there exists a periodic orbit with period 40 ky. Close to this periodic orbit, there are solutions with periods of 80 ky. These solutions alternate between 40 ky of mild glaciation and 40 ky of severe glaciation. There are also cycles of period 120 ky (I don’t know if they are also supposed to be close to the basic periodic orbit), passing through 40 ky of interglacial, 40 ky of mild glacial and finally 40 ky of full glacial. So it seems that we can produce sequences of cycles close to these periodic orbits which reproduce the geological record. Unfortunately I did not entirely understand the mathematics of this part of the article. The system is chaotic, so we should expect random shifts between the various periodic orbits. There is no particular external reason for the shift from 40 ky to 100 ky.

### Books

Let’s list books chronologically and comment on them. Here are some:

• Milutin Milanković’s original book is available in German under the title Kanon der Erdbestrahlung und seine Anwendung auf das Eiszeitenproblem. It was also translated into English in 1969 under the title Canon of Insolation and the Ice-Age Problem by the Israel Program for Scientific Translations, and published by the U.S. Department of Commerce and the National Science Foundation, Washington, D.C.

• J. Imbrie and K. P. Imbrie, Ice Ages, Solving the Mystery, Enslow Publishers, Short Hills, New Jersey, 1979.

• A. Berger, J. Imbrie, J. Hays, G. Kukla, and B. Saltzman, editors, Milankovitch and Climate, D. Reidel Publ. Company, Dordrecht, Netherlands, 1984.

category: climate
https://cds.cern.ch/collection/Theses?ln=sk&as=1
Theses

Recently added:

2021-05-10 12:24
Calibrating the ATLAS Muon Spectrometer for a Search for Charged Stable Massive Particles / Habedank, Martin
Many theories extending the Standard Model predict charged stable massive particles in reach of the LHC [...]
CERN-THESIS-2018-488 - 139 p. Full text

2021-05-10 12:05
Background Processes Affecting the Machine-Detector Interface at FCC-ee With Focus on Synchrotron Radiation at 182.5 GeV Beam Energy / Luckhof, Marian
Synchrotron radiation is a significant background source at circular lepton colliders [...]
CERN-THESIS-2020-335 - 170 p. Full text

2021-05-08 18:18
Beam Loss Reduction by Barrier Buckets in the CERN Accelerator Complex / Vadai, Mihaly
For a future intensity increase of the fixed-target beam in the accelerator complex at CERN, new techniques to reduce beam loss are required [...]
CERN-THESIS-2021-043 - Full text

2021-05-07 17:02
A Model Independent Search for New Bosons Decaying into Muons at LHC / Shi, Wei
A model independent search for pair production of new bosons in a parameter space of mass, 0.21 < m < 60 GeV, and lifetime, 0 < ctau < 100 mm, is reported using events with four muons [...]
CERN-THESIS-2021-042 - Full text

2021-05-07 10:33
Reducing Noisy Clusters in the NA61/SHINE Project with Help of Machine Learning / Pawlowski, Janik
We live in an era where huge amounts of data are being generated in all sectors of science and industry [...]
CERN-THESIS-2021-041 - 47 p. Full text

2021-05-06 16:03
Search for pentaquark candidates in $B^0_{(s)}\to J/\psi p \bar{p}$ decays at LHCb / Spadaro Norella, Elisabetta
This thesis presents the analysis of the $B^0_{(s)}\to J/\psi p \bar{p}$ decays, which has been performed on the data collected by the LHCb experiment at CERN during Run 1 and Run 2 [...]
CERN-THESIS-2021-040 - 199 p. Full text

2021-05-06 15:47
Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems / Felsberger, Lukas
Particle accelerators, such as the Large Hadron Collider at CERN, are among the largest and most complex engineered systems to date [...]
CERN-THESIS-2020-334 - München : Universitätsbibliothek LMU München, 2021-04-19. - 163 p. Full text

2021-05-06 11:38
Search for flavour-changing neutral-current top quark decays to c-quark and Z boson using the ATLAS detector at the LHC / Marcoccia, Lorenzo
The main focus of this thesis is the search for the $t\rightarrow Z c$ process in the proton–proton collision data collected by the ATLAS detector at the Large Hadron Collider located at CERN. The flavour-changing neutral-current (FCNC) processes are forbidden at tree-level and highly suppressed [...]
CERN-THESIS-2021-039 - 225 p. Full text

2021-05-05 21:20
Optimization of the Selection of Hidden Particles in the SHiP Experiment / Machado Santos Soares, Guilherme
Although the Standard Model (SM) is one of the biggest achievements in physics, it cannot explain several outstanding phenomena [...]
CERN-THESIS-2021-038 - 83 p. Full text

2021-05-04 17:49
Simulation Analysis and Machine Learning Based Detection of Beam-Induced Heating in Particle Accelerator at CERN / Giordano, Francesco
A method for a first-order approximate estimation of the longitudinal impedance of a synchrotron component, starting from power loss measurements on the device, is proposed [...]
CERN-THESIS-2020-332 - Full text
https://brilliant.org/problems/new-year-problems/
# New year problems

Number Theory Level 3

$\large A=\underbrace{20162016\dots 2016}_{\text{2016 (2016)'s}}$

A number $$A$$ is made by concatenating the integer 2016 with itself 2016 times. What is the remainder when $$A$$ is divided by 999?
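One way to see the answer (a solution sketch, not part of the original problem page): since $1000 \equiv 1 \pmod{999}$, a number is congruent mod 999 to the sum of its three-digit blocks, read from the right. $A$ has $4 \times 2016 = 8064$ digits, hence $2688$ such blocks, and from the right they repeat with period four:

$016 + 162 + 620 + 201 = 999 \equiv 0 \pmod{999}.$

Since $2688 = 4 \times 672$, the block sum is $672 \times 999$, so the remainder is $0$.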
https://docs.perl6.org/perl6.html
Welcome to the official documentation of the Perl 6 programming language! Besides online browsing and searching, you can also view everything in one file or contribute by reporting mistakes or sending patches.

Language Reference & Tutorials
A collection of documents describing, in detail, the various conceptual parts of the language.

Type Reference
Index of built-in classes and roles.

Routine Reference
Index of built-in subroutines and methods.

Perl 6 Programs
A collection of documents describing how to run the Perl 6 executable program and other utilities, how to debug Perl 6 programs, and how to hack on Perl 6 source code.

The Perl 6 homepage offers a comprehensive list of Perl 6 resources, including tutorials, how-tos and FAQs (Frequently Asked Questions). Perl 6 compiler developers may also be interested in The Perl 6 Specification. Documentation for the different but related Perl 5 language can be found on the Perl 5 documentation website.

2 Brief introduction

Using Perl 6 official documentation

Documenting a large language like Perl 6 has to balance several contradictory goals, such as being brief whilst being comprehensive, and catering to professional developers with wide experience whilst also being accessible to newcomers to the language.

For a quick hands-on introduction, there is a short annotated programming example. For programmers with experience in other languages, there are a number of Migration guides that compare and contrast the features of Perl 6 with other languages. A number of Tutorials cover several areas in which Perl 6 is particularly innovative. The section headers should help navigate the remaining documents.

There are a number of useful resources listed elsewhere on the perl6.org site. These include articles, books, slide presentations, and videos.

It has been found that newcomers to Perl 6 often ask questions that indicate assumptions carried over from other programming paradigms. It is suggested that the following sections in the Fundamental topics section should be reviewed first.

• Signatures - each routine, which includes subroutines and methods, has a signature. Understanding the information given in the signature of a sub or method provides a quick way to grasp the operation and effect of the routine.

• Containers - variables, which are like the nouns of a computer language, are containers in which information is stored. The first letter in the formal name of a container, such as the '$' of $my-variable, or '@' of @an-array-of-things, or '%' of %the-scores-in-the-competition, conveys information about the container. However, Perl 6 is more abstract than other languages about what can be stored in a container. So, for example, a $scalar container can contain an object that is in fact an array.

• Classes and Roles - Perl 6 is fundamentally based on objects, which are described in terms of classes and roles. Perl 6, unlike some languages, does not impose object-oriented programming practices, and useful programs can be written as if Perl 6 were purely procedural in nature. However, complex software, such as the Rakudo compiler of Perl 6, is made much simpler by writing in object-oriented idioms, which is why the Perl 6 documentation is more easily understood by reviewing what a class is and what a role is. Without understanding Classes and Roles, it would be difficult to understand Types, to which a whole section of the documentation is devoted.

• Traps to Avoid - Several common assumptions lead to code that does not work as the programmer intended.
This section identifies some. It is worth reviewing when something doesn't quite work out.

3 Perl 6 by example P6-101

A basic introductory example of a Perl 6 program

Suppose that you host a table tennis tournament. The referees tell you the results of each game in the format Player1 Player2 | 3:2, which means that Player1 won against Player2 by 3 to 2 sets. You need a script that sums up how many matches and sets each player has won to determine the overall winner.

The input data (stored in a file called scores.txt) looks like this:

Beth Ana Charlie Dave
Ana Dave | 3:0
Charlie Beth | 3:1
Ana Beth | 2:3
Dave Charlie | 3:0
Ana Charlie | 3:1
Beth Dave | 0:3

The first line is the list of players. Every subsequent line records a result of a match. Here's one way to solve that problem in Perl 6:

use v6;

my $file = open 'scores.txt';
my @names = $file.get.words;

my %matches;
my %sets;

for $file.lines -> $line {
    next unless $line; # ignore any empty lines

    my ($pairing, $result) = $line.split(' | ');
    my ($p1, $p2) = $pairing.words;
    my ($r1, $r2) = $result.split(':');

    %sets{$p1} += $r1;
    %sets{$p2} += $r2;

    if $r1 > $r2 {
        %matches{$p1}++;
    } else {
        %matches{$p2}++;
    }
}

my @sorted = @names.sort({ %sets{$_} }).sort({ %matches{$_} }).reverse;

for @sorted -> $n {
    say "$n has won %matches{$n} matches and %sets{$n} sets";
}

lexical and block
my declares a lexical variable, which is visible only in the current block from the point of declaration to the end of the block. If there's no enclosing block, it's visible throughout the remainder of the file (which would effectively be the enclosing block). A block is any part of the code enclosed between curly braces { }.

string literal
'scores.txt' is a string literal. A string is a piece of text, and a string literal is a string which appears directly in the program. In this line, it's the argument provided to open.

my @names = $file.get.words;

array, method and invocant
The right-hand side calls a method --a named group of behavior-- named get on the filehandle stored in $file. The get method reads and returns one line from the file, removing the line ending. If you print the contents of $file after calling get, you will see that the first line is no longer in there. words is also a method, called on the string returned from get. words decomposes its invocant--the string on which it operates--into a list of words, which here means strings separated by whitespace. It turns the single string 'Beth Ana Charlie Dave' into the list of strings 'Beth', 'Ana', 'Charlie', 'Dave'. Finally, this list gets stored in the Array @names. The @ sigil marks the declared variable as an Array. Arrays store ordered lists.

my %matches;
my %sets;

hash
These two lines of code declare two hashes. The % sigil marks each variable as a Hash. A Hash is an unordered collection of key-value pairs. Other programming languages call that a hash table, dictionary, or map. You can query a hash table for the value that corresponds to a certain $key with %hash{$key}. In the score counting program, %matches stores the number of matches each player has won. %sets stores the number of sets each player has won. Both of these hashes are indexed by the player's name.

for $file.lines -> $line { ... }

for and block
for produces a loop that runs the block delimited by curly braces once for each item of the list, setting the variable $line to the current value of each iteration. $file.lines produces a list of the lines read from the file scores.txt, starting with the second line of the file since we already called $file.get once, and going all the way to the end of the file.
During the first iteration, $line will contain the string Ana Dave | 3:0; during the second, Charlie Beth | 3:1, and so on.

my ($pairing, $result) = $line.split(' | ');

my can declare multiple variables simultaneously. The right-hand side of the assignment is a call to a method named split, passing along the string ' | ' as an argument. split decomposes its invocant into a list of strings, so that joining the list items with the separator ' | ' produces the original string. $pairing gets the first item of the returned list, and $result the second. After processing the first line, $pairing will hold the string Ana Dave and $result 3:0.

The next two lines follow the same pattern:

my ($p1, $p2) = $pairing.words;
my ($r1, $r2) = $result.split(':');

The first extracts and stores the names of the two players in the variables $p1 and $p2. The second extracts the results for each player and stores them in $r1 and $r2. After processing the first line of the file, the variables contain the values:

Variable  Contents
$line     'Ana Dave | 3:0'
$pairing  'Ana Dave'
$result   '3:0'
$p1       'Ana'
$p2       'Dave'
$r1       '3'
$r2       '0'

The program then counts the number of sets each player has won:

%sets{$p1} += $r1;
%sets{$p2} += $r2;

The += assignment operator is a shortcut for:

%sets{$p1} = %sets{$p1} + $r1;

fat arrow, pair and autovivification
Before these two lines execute, %sets is empty. Adding to an entry that is not in the hash yet will cause that entry to spring into existence just-in-time, with a value starting at zero. (This is autovivification.) After these two lines have run for the first time, %sets contains 'Ana' => 3, 'Dave' => 0. (The fat arrow => separates key and value in a Pair.)

if $r1 > $r2 {
    %matches{$p1}++;
} else {
    %matches{$p2}++;
}

If $r1 is numerically larger than $r2, %matches{$p1} increments by one. If $r1 is not larger than $r2, %matches{$p2} increments. Just as in the case of +=, if either hash value did not exist previously, it is autovivified by the increment operation.

stable sort
When two array items have the same value, sort leaves them in the same order as it found them. Computer scientists call this a stable sort. The program takes advantage of this property of Perl 6's sort to achieve the goal by sorting twice: first by the number of sets won (the secondary criterion), then by the number of matches won.

After the first sorting step, the names are in the order Beth Charlie Dave Ana. After the second sorting step, it's still the same, because no one has won fewer matches but more sets than someone else. Such a situation is entirely possible, especially at larger tournaments.

sort sorts in ascending order, from smallest to largest. This is the opposite of the desired order. Therefore, the code calls the .reverse method on the result of the second sort, and stores the final list in @sorted.

for @sorted -> $n {
    say "$n has won %matches{$n} matches and %sets{$n} sets";
}

say, print and put
To print out the players and their scores, the code loops over @sorted, setting $n to the name of each player in turn. Read this code as "For each element of @sorted, set $n to the element, then execute the contents of the following block." say prints its arguments to the standard output (the screen, normally), followed by a newline. (Use print if you don't want the newline at the end.) Note that say will truncate certain data structures by calling the .gist method, so put is safer if you want exact output.

When you run the program, you'll see that say doesn't print the contents of that string verbatim.
In place of $n it prints the contents of the variable $n -- the names of players stored in $n. This automatic substitution of code with its contents is interpolation. This interpolation happens only in strings delimited by double quotes "...". Single quoted strings '...' do not interpolate:

double-quoted strings and single-quoted strings

my $names = 'things';
say 'Do not call me $names'; # OUTPUT: «Do not call me $names␤»
say "Do not call me $names"; # OUTPUT: «Do not call me things␤»

Double quoted strings in Perl 6 can interpolate variables with the $ sigil as well as blocks of code in curly braces. Since any arbitrary Perl code can appear within curly braces, Arrays and Hashes may be interpolated by placing them within curly braces.

Arrays within curly braces are interpolated with a single space character between each item. Hashes within curly braces are interpolated as a series of lines. Each line will contain a key, followed by a tab character, then the value associated with that key, and finally a newline.

Let's see an example of this now. In this example, you will see some special syntax that makes it easier to make a list of strings. This is the <...> quote-words construct. When you put words in between the < and > they are all assumed to be strings, so you do not need to wrap them each in double quotes "...".

say "Math: { 1 + 2 }"; # OUTPUT: «Math: 3␤»
my @people = <Luke Matthew Mark>;
say "The synoptics are: {@people}"; # OUTPUT: «The synoptics are: Luke Matthew Mark␤»

say "{%sets}␤"; # From the table tennis tournament
# Charlie 4
# Dave 6
# Ana 8
# Beth 4

When array and hash variables appear directly in a double-quoted string (and not inside curly braces), they are only interpolated if their name is followed by a postcircumfix -- a bracketing pair that follows a statement. It's also ok to have a method call between the variable name and the postcircumfix.

Zen slice

my @flavors = <vanilla peach>;

say "we have @flavors"; # OUTPUT: «we have @flavors␤»
say "we have @flavors[0]"; # OUTPUT: «we have vanilla␤»
# so-called "Zen slice"
say "we have @flavors[]"; # OUTPUT: «we have vanilla peach␤»
# method calls ending in postcircumfix
say "we have @flavors.sort()"; # OUTPUT: «we have peach vanilla␤»
# chained method calls:
say "we have @flavors.sort.join(', ')"; # OUTPUT: «we have peach, vanilla␤»

Exercises

1. The input format of the example program is redundant: the first line containing the names of all players is not necessary, because you can find out which players participated in the tournament by looking at their names in the subsequent rows. How can you make the program run if you do not use the @names variable?

Hint: %hash.keys returns a list of all keys stored in %hash.

Answer: Remove the line my @names = $file.get.words;, and change:

my @sorted = @names.sort({ %sets{$_} }).sort({ %matches{$_} }).reverse;

... into:

my @sorted = %sets.keys.sort({ %sets{$_} }).sort({ %matches{$_} }).reverse;

2. Instead of deleting the redundant @names variable, you can also use it to warn if a player appears that wasn't mentioned in the first line, for example due to a typo. How would you modify your program to achieve that?

Hint: Try using membership operators.

Answer: Change @names to @valid-players. When looping through the lines of the file, check to see that $p1 and $p2 are in @valid-players. Note that for membership operators you can also use (elem) and !(elem).
...;
my @valid-players = $file.get.words;
...;

for $file.lines -> $line {
    my ($pairing, $result) = $line.split(' | ');
    my ($p1, $p2) = $pairing.split(' ');
    if $p1 ∉ @valid-players {
        say "Warning: '$p1' is not on our list!";
    }
    if $p2 ∉ @valid-players {
        say "Warning: '$p2' is not on our list!";
    }
    ...
}

5 Perl 5 to Perl 6 guide - in a nutshell

How do I do what I used to do? (Perl 6 in a nutshell)

This page attempts to provide a fast-path to the changes in syntax and semantics from Perl 5 to Perl 6. Whatever worked in Perl 5 and must be written differently in Perl 6, should be listed here (whereas many new Perl 6 features and idioms need not). Hence this should not be mistaken for a beginner tutorial or a promotional overview of Perl 6; it is intended as a technical reference for Perl 6 learners with a strong Perl 5 background and for anyone porting Perl 5 code to Perl 6 (though note that #Automated translation might be more convenient).

A note on semantics: when we say "now" in this document, we mostly just mean "now that you are trying out Perl 6." We don't mean to imply that Perl 5 is now suddenly obsolete. Quite the contrary, most of us love Perl 5, and we expect Perl 5 to continue in use for a good many years. Indeed, one of our more important goals has been to make interaction between Perl 5 and Perl 6 run smoothly. However, we do also like the design decisions in Perl 6, which are certainly newer and arguably better integrated than many of the historical design decisions in Perl 5. So many of us do hope that over the next decade or two, Perl 6 will become the more dominant language. If you want to take "now" in that future sense, that's okay too. But we're not at all interested in the either/or thinking that leads to fights.

CPAN

If the module that you were using has not been converted to Perl 6, and no alternative is listed in this document, then its use under Perl 6 may not have been addressed yet. The Inline::Perl5 project makes it possible to use Perl 5 modules directly from Perl 6 code by using an embedded instance of the perl interpreter to run Perl 5 code. This is as simple as:

# the :from<Perl5> makes Perl 6 load Inline::Perl5 first (if installed)
# and then load the Scalar::Util module from Perl 5
use Scalar::Util:from<Perl5> <looks_like_number>;
say looks_like_number "foo"; # 0
say looks_like_number "42"; # 1

A number of Perl 5 modules have been ported to Perl 6, trying to maintain the API of these modules as much as possible, as part of the CPAN Butterfly Plan. These can be found at https://modules.perl6.org/t/CPAN5.

Many Perl 5 built-in functions (about a 100 so far) have been ported to Perl 6 with the same semantics. Think about the shift function in Perl 5 having magic shifting from @_ or @ARGV by default, depending on context. These can be found at https://modules.perl6.org/t/Perl5 as separately loadable modules, and in the P5built-ins bundle to get them all at once.

Syntax

There are a few differences in syntax between the two languages, starting with how identifiers are defined.

Identifiers

Perl 6 allows the use of dashes (-), underscores (_), apostrophes ('), and alphanumerics in identifiers:

sub test-doesn't-hang { ... }
my $ความสงบ = 42;
my \Δ = 72; say 72 - Δ;

-> Method calls

If you've read any Perl 6 code at all, it's immediately obvious that method call syntax now uses a dot instead of an arrow:

$person->name # Perl 5
$person.name # Perl 6

The dot notation is both easier to type and more of an industry standard.
But we also wanted to steal the arrow for something else. (Concatenation is now done with the ~ operator, if you were wondering.) To call a method whose name is not known until runtime:

$object->$methodname(@args); # Perl 5
$object."$methodname"(@args); # Perl 6

If you leave out the quotes, then Perl 6 expects $methodname to contain a Method object, rather than the simple string name of the method. Yes, everything in Perl 6 can be considered an object.

Whitespace

Perl 5 allows a surprising amount of flexibility in the use of whitespace, even with strict mode and warnings turned on:

# unidiomatic but valid Perl 5
say"Hello ".ucfirst ($people [$ i] -> name)."!"if$greeted[$i]<1;

Perl 6 also endorses programmer freedom and creativity, but balanced syntactic flexibility against its design goal of having a consistent, deterministic, extensible grammar that supports single-pass parsing and helpful error messages, integrates features like custom operators cleanly, and doesn't lead programmers to accidentally misstate their intent. Also, the practice of "code golf" is slightly de-emphasized; Perl 6 is designed to be more concise in concepts than in keystrokes.

As a result, there are various places in the syntax where whitespace is optional in Perl 5, but is either mandatory or forbidden in Perl 6. Many of those restrictions are unlikely to concern much real-life Perl code (e.g., whitespace being disallowed between the sigil and name of a variable), but there are a few that will unfortunately conflict with some Perl hackers' habitual coding styles:

• No space allowed before the opening parenthesis of an argument list.

substr ($s, 4, 1); # Perl 5 (in Perl 6 this would try to pass a single
                   # argument of type List to substr)
substr($s, 4, 1); # Perl 6
substr $s, 4, 1; # Perl 6 - alternative parentheses-less style

Should this really be a problem for you, then you might want to have a look at the Slang::Tuxic module in the Perl 6 ecosystem: it changes the grammar of Perl 6 in such a way that you can have a space before the opening parenthesis of an argument list.

• Space is required immediately after keywords

my($alpha, $beta); # Perl 5, tries to call my() sub in Perl 6
my ($alpha, $beta); # Perl 6

if($a < 0) { ... } # Perl 5, dies in Perl 6
if ($a < 0) { ... } # Perl 6
if $a < 0 { ... } # Perl 6, more idiomatic

while($x-- > 5) { ... } # Perl 5, dies in Perl 6
while ($x-- > 5) { ... } # Perl 6
while $x-- > 5 { ... } # Perl 6, more idiomatic

• No space allowed after a prefix operator, or before a postfix/postcircumfix operator (including array/hash subscripts).

$seen {$_} ++; # Perl 5
%seen{$_}++; # Perl 6

• Space required before an infix operator if it would conflict with an existing postfix/postcircumfix operator.

$n<1; # Perl 5 (in Perl 6 this would conflict with postcircumfix < >)
$n < 1; # Perl 6

• However, whitespace is allowed before the period of a method call!

# Perl 5
my @books = $xml
    ->parse_file($file) # some comment
    ->findnodes("/library/book");
# Perl 6
my @books = $xml
    .parse-file($file) # some comment
    .findnodes("/library/book");

However, note that you can use unspace to add whitespace in Perl 6 code in places where it is otherwise not allowed.

Sigils

In Perl 5, arrays and hashes use changing sigils depending on how they are being accessed. In Perl 6 the sigils are invariant, no matter how the variable is being used - you can think of them as part of the variable's name.

@ Array

The @ sigil is now always used with "array" variables (e.g.
@months, @months[2], @months[2, 4]), and no longer for value-slicing hashes.

% Hash

The % sigil is now always used with "hash" variables (e.g. %calories, %calories<apple>, %calories<pear plum>), and no longer for key/value-slicing arrays.

& Sub

The & sigil is now used consistently (and without the help of a backslash) to refer to the function object of a named subroutine/operator without invoking it, i.e. to use the name as a "noun" instead of a "verb":

my $sub = \&foo; # Perl 5
my $sub = &foo; # Perl 6

callback => sub { say @_ } # Perl 5 - can't pass built-in sub directly
callback => &say # Perl 6 - & gives "noun" form of any sub

Since Perl 6 does not allow adding/removing symbols in a lexical scope once it has finished compiling, there is no equivalent to Perl 5's undef &foo;, and the closest equivalent to Perl 5's defined &foo would be defined ::('&foo') (which uses the "dynamic symbol lookup" syntax). However, you can declare a mutable named subroutine with my &foo; and then change its meaning at runtime by assigning to &foo.

In Perl 5, the ampersand sigil can additionally be used to call subroutines in special ways with subtly different behavior compared to normal sub calls. In Perl 6 those special forms are no longer available:

• &foo(...) for circumventing a function prototype

In Perl 6 there are no prototypes, and it no longer makes a difference whether you, say, pass a literal code block or a variable holding a code object as an argument:

# Perl 5:
first_index { $_ > 5 } @values;
&first_index($coderef, @values); # (disabling the prototype that parses a
                                 # literal block as the first argument)
# Perl 6:
first { $_ > 5 }, @values, :k; # the :k makes first return an index
first $coderef, @values, :k;

• &foo; and goto &foo; for re-using the caller's argument list / replacing the caller in the call stack.

Perl 6 can use either callsame for re-dispatching or nextsame and nextwith, which have no exact equivalent in Perl 5.

sub foo { say "before"; &bar; say "after" } # Perl 5
sub foo { say "before"; bar(|@_); say "after" } # Perl 6 - have to be explicit

sub foo { say "before"; goto &bar } # Perl 5

proto foo (|) {*};
multi foo ( Any $n ) { say "Any"; say $n; };
multi foo ( Int $n ) { say "Int"; callsame; };
foo(3); # see the callsame documentation under /language/functions

* Glob

TODO: Research what exact use-cases still need typeglobs in Perl 5 today, and refactor this section to list them (with translations).

In Perl 5, the * sigil referred to the GLOB structure that Perl uses to store non-lexical variables, filehandles, subs, and formats. This should not be confused with the Perl 5 built-in glob() function, which reads filenames from a directory.

You are most likely to encounter a GLOB in code written on an early Perl version that does not support lexical filehandles, when a filehandle needed to be passed into a sub.

# Perl 5 - ancient method
sub read_2 {
    local (*H) = @_;
    return scalar(<H>), scalar(<H>);
}
open FILE, '<', $path or die;
my ($line1, $line2) = read_2(*FILE);

You should refactor your Perl 5 code to remove the need for the GLOB, before translating into Perl 6.
# Perl 5 - modern use of lexical filehandles
sub read_2 {
    my ($fh) = @_;
    return scalar(<$fh>), scalar(<$fh>);
}
open my $in_file, '<', $path or die;
my ($line1, $line2) = read_2($in_file);

And here's just one possible Perl 6 translation:

# Perl 6
sub read-n($fh, $n) {
    return $fh.get xx $n;
}
my $in-file = open $path or die;
my ($line1, $line2) = read-n($in-file, 2);

[] Array indexing/slicing

Index and slice operations on arrays no longer inflect the variable's sigil, and adverbs can be used to control the type of slice:

• Indexing

say $months[2]; # Perl 5
say @months[2]; # Perl 6 - @ instead of $

• Value-slicing

say join ',', @months[6, 8..11]; # Perl 5 and Perl 6

• Key/value-slicing

say join ',', %months[6, 8..11]; # Perl 5
say join ',', @months[6, 8..11]:kv; # Perl 6 - @ instead of %; use :kv adverb

Also note that the subscripting square brackets are now a normal postcircumfix operator rather than a special syntactic form, and thus checking for existence of elements and unsetting elements is done with adverbs.

{} Hash indexing/slicing

Index and slice operations on hashes no longer inflect the variable's sigil, and adverbs can be used to control the type of slice. Also, single-word subscripts are no longer magically autoquoted inside the curly braces; instead, the new angle brackets version is available which always autoquotes its contents (using the same rules as the qw// quoting construct):

• Indexing

say $calories{"apple"}; # Perl 5
say %calories{"apple"}; # Perl 6 - % instead of $

say $calories{apple}; # Perl 5
say %calories<apple>; # Perl 6 - angle brackets; % instead of $
say %calories«"$key"»; # Perl 6 - double angles interpolate as a list of Str

• Value-slicing

say join ',', @calories{'pear', 'plum'}; # Perl 5
say join ',', %calories{'pear', 'plum'}; # Perl 6 - % instead of @
say join ',', %calories<pear plum>; # Perl 6 (prettier version)
my $keys = 'pear plum';
say join ',', %calories«$keys»; # Perl 6 - the split is done after interpolation

• Key/value-slicing

say join ',', %calories{'pear', 'plum'}; # Perl 5
say join ',', %calories{'pear', 'plum'}:kv; # Perl 6 - use :kv adverb
say join ',', %calories<pear plum>:kv; # Perl 6 (prettier version)

Also note that the subscripting curly braces are now a normal postcircumfix operator rather than a special syntactic form, and thus checking for existence of keys and removing keys is done with adverbs.

Creating references and using them

In Perl 5, references to anonymous arrays, hashes and subs are returned during their creation. References to existing named variables and subs were generated with the \ operator. The "referencing/dereferencing" metaphor does not map cleanly to the actual Perl 6 container system, so we will have to focus on the intent of the reference operators instead of the actual syntax.

my $aref = \@aaa; # Perl 5

This might be used for passing a reference to a routine, for instance. But in Perl 6, the (single) underlying object is passed (which you could consider to be a sort of pass by reference).

my @array = 4,8,15;
{ $_[0] = 66 }(@array); # run the block with @array aliased to $_
say @array; # OUTPUT: «[66 8 15]␤»

The underlying Array object of @array is passed, and its first value modified inside the declared routine.

In Perl 5, the syntax for dereferencing an entire reference is the type-sigil and curly braces, with the reference inside the curly braces. In Perl 6, this concept simply does not apply, since the reference metaphor does not really apply.

In Perl 5, the arrow operator, ->, is used for single access to a composite's reference or to call a sub through its reference. In Perl 6, the dot operator . is always used for object methods, but the rest does not really apply.
# Perl 5
say $arrayref->[7];
say $hashref->{'fire bad'};
say $subref->($foo, $bar);

In relatively recent versions of Perl 5 (5.20 and later), a new feature allows the use of the arrow operator for dereferencing: see Postfix Dereferencing. This can be used to create an array from a scalar. This operation is usually called decont, as in decontainerization, and in Perl 6 methods such as .list and .hash are used:

# Perl 5.20
use experimental qw< postderef >;
my @a = $arrayref->@*;
my %h = $hashref->%*;
my @slice = $arrayref->@[3..7];

# Perl 6
my @a = $contains-an-array.list; # or @($arrayref)
my %h = $contains-a-hash.hash;   # or %($hashref)

The "Zen" slice does the same thing:

# Perl 6
my @a = $contains-an-array[];
my %h = $contains-a-hash{};

See the "Containers" section of the documentation for more information.

Operators

See the documentation for operators for full details on all operators.

Unchanged:

• + Numeric Addition
• - Numeric Subtraction
• * Numeric Multiplication
• / Numeric Division
• % Numeric Modulus
• ** Numeric Exponentiation
• ++ Numeric Increment
• -- Numeric Decrement
• ! && || ^ Booleans, high-precedence
• not and or xor Booleans, low-precedence
• == != < > <= >= Numeric comparisons
• eq ne lt gt le ge String comparisons

, (Comma) List separator

Unchanged, but note that in order to flatten an array variable to a list (in order to append or prefix more items) one should use the | operator (see also Slip). For instance:

my @numbers = 100, 200, 300;
my @more_numbers = 500, 600, 700;
my @all_numbers = |@numbers, 400, |@more_numbers;

That way one can concatenate arrays. Note that one does not need to have any parentheses on the right-hand side: the List Separator takes care of creating the list, not the parentheses!

<=> cmp Three-way comparisons

In Perl 5, these operators returned -1, 0, or 1. In Perl 6, they return Order::Less, Order::Same, or Order::More. cmp is now named leg; it forces string context for the comparison. <=> still forces numeric context. cmp in Perl 6 does either <=> or leg, depending on the existing type of its arguments.

~~ Smartmatch operator

While the operator has not changed, the rules for what exactly is matched depend on the types of both arguments, and those rules are far from identical in Perl 5 and Perl 6. See ~~ and the smartmatch operator.

& | ^ String bitwise ops
& | ^ Numeric bitwise ops
& | ^ Boolean ops

In Perl 5, & | ^ were invoked according to the contents of their arguments. For example, 31 | 33 returns a different result than "31" | "33". In Perl 6, those single-character ops have been removed, and replaced by two-character ops which coerce their arguments to the needed context.

# Infix ops (two arguments; one on each side of the op)
+&  +|  +^   And Or Xor: Numeric
~&  ~|  ~^   And Or Xor: String
?&  ?|  ?^   And Or Xor: Boolean

# Prefix ops (one argument, after the op)
+^  Not: Numeric
~^  Not: String
?^  Not: Boolean (same as the ! op)

<< >> Numeric shift left|right ops

Replaced by +< and +> .

say 42 << 3; # Perl 5
say 42 +< 3; # Perl 6

=> Fat comma

In Perl 5, => acted just like a comma, but also quoted its left-hand side. In Perl 6, => is the Pair operator, which is quite different in principle, but works the same in many situations. If you were using => in hash initialization, or in passing arguments to a sub that expects a hashref, then the usage is likely identical.

sub get_the_loot { ...
}; # Perl 6 stub

# Works in Perl 5 and Perl 6
my %hash = ( AAA => 1, BBB => 2 );

get_the_loot( 'diamonds', { quiet_level => 'very', quantity => 9 }); # Note the curly braces

If you were using => as a convenient shortcut to not have to quote part of a list, or in passing arguments to a sub that expects a flat list of KEY, VALUE, KEY, VALUE, then continuing to use => may break your code. The easiest workaround is to change that fat arrow to a regular comma, and manually add quotes to its left-hand side. Or, you can change the sub's API to slurp a hash. A better long-term solution is to change the sub's API to expect Pairs; however, this requires you to change all sub calls at once.

# Perl 5
sub get_the_loot {
    my $loot = shift;
    my %options = @_;
    # ...
}
# Note: no curly braces in this sub call
get_the_loot( 'diamonds', quiet_level => 'very', quantity => 9 );

# Perl 6, original API
sub get_the_loot( $loot, *%options ) { # The * means to slurp everything
    ...
}
get_the_loot( 'diamonds', quiet_level => 'very', quantity => 9 ); # Note: no curly braces in this API

# Perl 6, API changed to specify valid options
# The colon before the sigils means to expect a named variable,
# with the key having the same name as the variable.
sub get_the_loot( $loot, :$quiet_level?, :$quantity = 1 ) {
    # This version will check for unexpected arguments!
    ...
}
get_the_loot( 'diamonds', quietlevel => 'very' ); # Throws error for misspelled parameter name

? : Ternary operator

The conditional operator ? : has been replaced by ?? !!:

my $result = $score > 60 ? 'Pass' : 'Fail';   # Perl 5
my $result = $score > 60 ?? 'Pass' !! 'Fail'; # Perl 6

. (Dot) String concatenation

Replaced by the tilde. Mnemonic: think of "stitching" together the two strings with needle and thread.

$food = 'grape' . 'fruit'; # Perl 5
$food = 'grape' ~ 'fruit'; # Perl 6

x List repetition or string repetition operator

In Perl 5, x is the Repetition operator, which behaves differently in scalar or list contexts:

• in scalar context x repeats a string;
• in list context x repeats a list, but only if the left argument is parenthesized!

Perl 6 uses two different Repetition operators to achieve the above:

• x for string repetitions (in any context);
• xx for list repetitions (in any context).

Mnemonic: x is short and xx is long, so xx is the one used for lists.

# Perl 5
print '-' x 80;      # Print row of dashes
@ones = (1) x 80;    # A list of 80 1's
@ones = (5) x @ones; # Set all elements to 5

# Perl 6
print '-' x 80;     # Unchanged
@ones = 1 xx 80;    # Parentheses no longer needed
@ones = 5 xx @ones; # Parentheses no longer needed

.. ... Two dots or three dots, range op or flipflop op

In Perl 5, .. was one of two completely different operators, depending on context. In list context, .. is the familiar range operator. Ranges from Perl 5 code should not require translation. In scalar context, .. and ... were the little-known Flipflop operators. They have been replaced by ff and fff.

String interpolation

In Perl 5, "${foo}s" delimits a variable name from regular text next to it. In Perl 6, simply extend the curly braces to include the sigil too: "{$foo}s". This is in fact a very simple case of interpolating an expression.

Compound statements

These statements include conditionals and loops.

Conditionals

if / elsif / else / unless

Mostly unchanged; parentheses around the conditions are now optional, but if used, must not immediately follow the keyword, or it will be taken as a function call instead.
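As a small illustrative sketch (not from the original guide; $x is a made-up variable), the whitespace rule looks like this:

# Perl 6
my $x = 10;
if $x > 5   { say "big" }  # parentheses are optional
if ($x > 5) { say "big" }  # fine: whitespace between keyword and parenthesis
if($x > 5)  { say "big" }  # WRONG: parsed as a call to a subroutine named "if"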
Binding the conditional expression to a variable is also a little different:

if (my $x = dostuff()) {...} # Perl 5
if dostuff() -> $x {...}     # Perl 6

(You can still use the my form in Perl 6, but it will scope to the outer block, not the inner.)

The unless conditional only allows for a single block in Perl 6; it does not allow for an elsif or else clause.

given-when

The given-when construct is like a chain of if-elsif-else statements or like the switch-case construct in e.g. C. It has the general structure:

given EXPR {
    when EXPR { ... }
    when EXPR { ... }
    default { ... }
}

In its simplest form, the construct is as follows:

given $value {             # assigns $_
    when "a match" {       # if $_ ~~ "a match"
        # do-something();
    }
    when "another match" { # elsif $_ ~~ "another match"
        # do-something-else();
    }
    default {              # else
        # do-default-thing();
    }
}

This is simple in the sense that a scalar value is matched in the when statements against $_, which was set by the given. More generally, the matches are actually smartmatches on $_ such that lookups using more complex entities such as regexps can be used instead of scalar values.

Loops

while / until

Mostly unchanged; parentheses around the conditions are now optional, but if used, must not immediately follow the keyword, or it will be taken as a function call instead. Binding the conditional expression to a variable is also a little different:

while (my $x = dostuff()) {...} # Perl 5
while dostuff() -> $x {...}     # Perl 6

(You can still use the my form in Perl 6, but it will scope to the outer block, not the inner.)

Note that reading line-by-line from a filehandle has changed. In Perl 5, it was done in a while loop using the diamond operator. Using for instead of while was a common bug, because the for causes the whole file to be sucked in at once, swamping the program's memory usage. In Perl 6, the for statement is lazy, so we read line-by-line in a for loop using the .lines method.

while (<IN_FH>)  { } # Perl 5
for $IN_FH.lines { } # Perl 6

Also note that in Perl 6, lines are chomped by default.

do while / until

# Perl 5
do {
    ...
} while $x < 10;

do {
    ...
} until $x >= 10;

The construct is still present, but do was renamed to repeat, to better represent what the construct does:

# Perl 6
repeat {
    ...
} while $x < 10;

repeat {
    ...
} until $x >= 10;

for / foreach

Note first this common misunderstanding about the for and foreach keywords: Many programmers think that they distinguish between the C-style three-expression form and the list-iterator form; they do not! In fact, the keywords are interchangeable; the Perl 5 compiler looks for the semicolons within the parentheses to determine which type of loop to parse.

The C-style three-expression form now uses the loop keyword, and is otherwise unchanged. The parentheses are still required.

for ( my $i = 1; $i <= 10; $i++ ) { ... }  # Perl 5
loop ( my $i = 1; $i <= 10; $i++ ) { ... } # Perl 6

The list-iterator form is named for in Perl 6 and foreach is no longer a keyword. The for loop has the following rules:

• parentheses are optional;
• the iteration variable, if any, has been moved from appearing before the list, to appearing after the list and an added arrow operator;
• the iteration variable is now always lexical: my is neither needed nor allowed;
• the iteration variable is a read-only alias to the current list element (in Perl 5 it is a read-write alias!). If a read-write alias is required, change the -> before the iteration variable to a <->. When translating from Perl 5, inspect the use of the loop variable to decide if read-write is needed.
for my $car (@cars) {...} # Perl 5; read-write
for @cars -> $car   {...} # Perl 6; read-only
for @cars <-> $car  {...} # Perl 6; read-write

If the default topic $_ is being used, it is also read-write.

for (@cars)      {...} # Perl 5; $_ is read-write
for @cars        {...} # Perl 6; $_ is read-write
for @cars <-> $_ {...} # Perl 6; $_ is also read-write

It is possible to consume more than one element of the list in each iteration simply by specifying more than one variable after the arrow operator:

my @array = 1..10;
for @array -> $first, $second {
    say "First is $first, second is $second";
}

each

Here is the equivalent to Perl 5's while…each(%hash) or while…each(@array) (i.e., iterating over both the keys/indices and values of a data structure) in Perl 6:

while (my ($i, $v) = each(@array)) { ... } # Perl 5
for @array.kv -> $i, $v { ... }            # Perl 6

while (my ($k, $v) = each(%hash)) { ... } # Perl 5
for %hash.kv -> $k, $v { ... }            # Perl 6

Flow control statements

Unchanged:

• next
• last
• redo

continue

There is no longer a continue block. Instead, use a NEXT block (phaser) within the body of the loop.

# Perl 5
my $str = '';
for (1..5) {
    next if $_ % 2 == 1;
    $str .= $_;
}
continue {
    $str .= ':'
}

# Perl 6
my $str = '';
for 1..5 {
    next if $_ % 2 == 1;
    $str ~= $_;
    NEXT {
        $str ~= ':'
    }
}

Please note that phasers don't really need a block. This can be very handy when you don't want another scope:

# Perl 6
my $str = '';
for 1..5 {
    next if $_ % 2 == 1;
    $str ~= $_;
    NEXT $str ~= ':';
}

Functions

NOTE FOR EDITORS: When adding functions, please place them in alphabetical order.

Built-ins with bare blocks

Builtins that previously accepted a bare block followed, without a comma, by the remainder of the arguments will now require a comma between the block and the arguments, e.g. map, grep, etc.

my @results = grep { $_ eq "bars" } @foo;  # Perl 5
my @results = grep { $_ eq "bars" }, @foo; # Perl 6

delete

Turned into an adverb of the {} hash subscripting and [] array subscripting operators.

my $deleted_value = delete $hash{$key}; # Perl 5
my $deleted_value = %hash{$key}:delete; # Perl 6 - use :delete adverb

my $deleted_value = delete $array[$i]; # Perl 5
my $deleted_value = @array[$i]:delete; # Perl 6 - use :delete adverb

exists

Turned into an adverb of the {} hash subscripting and [] array subscripting operators.

say "element exists" if exists $hash{$key}; # Perl 5
say "element exists" if %hash{$key}:exists; # Perl 6 - use :exists adverb

say "element exists" if exists $array[$i]; # Perl 5
say "element exists" if @array[$i]:exists; # Perl 6 - use :exists adverb

Regular expressions ( regex / regexp )

Change =~ and !~ to ~~ and !~~ .

In Perl 5, matches and substitutions are done against a variable using the =~ regexp-binding op. In Perl 6, the ~~ smartmatch op is used instead.

next if $line =~ /static/ ; # Perl 5
next if $line ~~ /static/ ; # Perl 6

next if $line !~ /dynamic/ ;  # Perl 5
next if $line !~~ /dynamic/ ; # Perl 6

$line =~ s/abc/123/; # Perl 5
$line ~~ s/abc/123/; # Perl 6

Alternately, the new .match and .subst methods can be used. Note that .subst is non-mutating. Also, capture variables now start at $0 rather than $1:

/(.+)/ and print $1; # Perl 5
/(.+)/ and print $0; # Perl 6

Move modifiers

Move any modifiers from the end of the regex to the beginning. This may require you to add the optional m on a plain match like /abc/.

next if $line =~ /static/i ;   # Perl 5
next if $line ~~ m:i/static/ ; # Perl 6

If the actual regex is complex, you may want to use it as-is, by adding the P5 modifier.
next if $line =~ m/[aeiou]/ ;     # Perl 5
next if $line ~~ m:P5/[aeiou]/ ;  # Perl 6, using P5 modifier
next if $line ~~ m/ <[aeiou]> / ; # Perl 6, native new syntax

Please note that the Perl 5 regular expression syntax dates from many years ago and may lack features that have been added since the beginning of the Perl 6 project.

Special matchers generally fall under the <> syntax

There are many cases of special matching syntax that Perl 5 regexes support. They won't all be listed here, but often instead of being surrounded by (), the assertions will be surrounded by <>.

For character classes, this means that:

• [abc] becomes <[abc]>
• [^abc] becomes <-[abc]>
• [a-zA-Z] becomes <[a..zA..Z]>
• [[:upper:]] becomes <:Upper>
• [abc[:upper:]] becomes <[abc]+:Upper>

For lookaround assertions:

• (?=[abc]) becomes <?[abc]>
• (?=ar?bitrary* pattern) becomes <before ar?bitrary* pattern>
• (?![abc]) becomes <![abc]>
• (?!ar?bitrary* pattern) becomes <!before ar?bitrary* pattern>
• (?<=ar?bitrary* pattern) becomes <after ar?bitrary* pattern>
• (?<!ar?bitrary* pattern) becomes <!after ar?bitrary* pattern>

For more info see lookahead assertions. (Unrelated to <> syntax, the "lookaround" /foo\Kbar/ becomes /foo <( bar )> /.)

• (?(condition)yes-pattern|no-pattern) becomes [ <?{condition}> yes-pattern | no-pattern ]

Longest token matching (LTM) displaces alternation

In Perl 6 regexes, | does LTM, which decides which alternation wins an ambiguous match based off of a set of rules, rather than on which was written first. The simplest way to deal with this is just to change any | in your Perl 5 regex to a ||. However, if a regex written with || is inherited or composed into a grammar that uses | either by design or typo, the result may not work as expected. So when the matching process becomes complex, you finally need to have some understanding of both, especially how the LTM strategy works. Besides, | may be a better choice for grammar reuse.

TODO more rules. Use translate_regex.pl from Blue Tiger in the meantime.

Comments

As with Perl 5, comments work as usual in regexes.

/ word #`(match lexical "word") /

BEGIN, UNITCHECK, CHECK, INIT and END

Except for UNITCHECK, all of these special blocks exist in Perl 6 as well. In Perl 6, these are called Phasers. But there are some differences!

UNITCHECK becomes CHECK

There is currently no direct equivalent of CHECK blocks in Perl 6. The CHECK phaser in Perl 6 has the same semantics as the UNITCHECK block in Perl 5: it gets run whenever the compilation unit in which it occurs has finished parsing. This is considered a much saner semantic than the current semantics of CHECK blocks in Perl 5. But for compatibility reasons, it was impossible to change the semantics of CHECK blocks in Perl 5, so a UNITCHECK block was introduced in 5.10. So it was decided that the Perl 6 CHECK phaser would follow the saner Perl 5 UNITCHECK semantics.

No block necessary

In Perl 5, these special blocks must have curly braces, which implies a separate scope. In Perl 6 this is not necessary, allowing these special blocks to share their scope with the surrounding lexical scope.

my $foo;            # Perl 5
BEGIN { $foo = 42 }

BEGIN my $foo = 42; # Perl 6

Changed semantics with regards to precompilation

If you put BEGIN and CHECK phasers in a module that is being precompiled, then these phasers will only be executed during precompilation and not when a precompiled module is being loaded.
So when porting module code from Perl 5, you may need to change BEGIN and CHECK blocks to INIT blocks to ensure that they're run when loading that module.

Pragmas

strict

Strict mode is now on by default.

warnings

Warnings are now on by default. no warnings is currently NYI, but putting things in a quietly {} block will silence warnings.

autodie

The functions which were altered by autodie to throw exceptions on error now generally return Failures by default. You can test a Failure for definedness / truthiness without any problem. If you use the Failure in any other way, then the Exception that was encapsulated by the Failure will be thrown.

# Perl 5
open my $i_fh, '<', $input_path;  # Fails silently on error
use autodie;
open my $o_fh, '>', $output_path; # Throws exception on error

# Perl 6
my $i_fh = open $input_path,  :r; # Returns Failure on error
my $o_fh = open $output_path, :w; # Returns Failure on error

Because you can check for truthiness without any problem, you can use the result of an open in an if statement:

# Perl 6
if open($input_path, :r) -> $handle {
    # ... use $handle here ...
}

integer

Perl 5 pragma to use integer arithmetic instead of floating point. There is no direct equivalent; however, declaring variables with native integer types makes arithmetic on them use integer operations:

my int $foo = 42;
my int $bar = 666;
say $foo * $bar; # uses native integer multiplication

lib

Manipulate where modules are looked up at compile time. The underlying logic is very different from Perl 5, but in the case you are using an equivalent syntax, use lib in Perl 6 works the same as in Perl 5.

mro

No longer relevant. In Perl 6, method calls now always use the C3 method resolution order. If you need to find out parent classes of a given class, you can invoke the mro meta-method thusly:

say Animal.^mro; # .^ indicates calling a meta-method on the object

utf8

No longer relevant: in Perl 6, source code is expected to be in utf8 encoding.

vars

Discouraged in Perl 5. See https://perldoc.perl.org/vars.html. You should refactor your Perl 5 code to remove the need for use vars, before translating into Perl 6.

Command-line flags

Unchanged: -c -e -h -I -n -p -v -V

-a

Change your code to use .split manually.

-F

Change your code to use .split manually.

-l

This is now the default behavior.

-M, -m

Only -M remains. And, as you can no longer use the "no Module" syntax, the use of - with -M to "no" a module is no longer available.

-E

Since all features are already enabled, just use lowercase -e .

-d, -dt, -d:foo, -D, etc.

Replaced with the ++BUG metasyntactic option.

-s

Switch parsing is now done by the parameter list of the MAIN subroutine.

# Perl 5
#!/usr/bin/perl -s
if ($xyz) { print "$xyz\n" }

./example.pl -xyz=5
5

# Perl 6
sub MAIN( Int :$xyz ) {
    say $xyz if $xyz.defined;
}

perl6 example.p6 --xyz=5
5
perl6 example.p6 -xyz=5
5

• -t Removed.
• -P -u -U -W -X Removed. See S19#Removed Syntactic Features.
• -w This is now the default behavior.
• -S, -T. This has been eliminated. Several ways to replicate "taint" mode are discussed in Reddit.
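To illustrate the -a and -F advice above, here is a hedged sketch (input.txt, the field index, and the ':' separator are made-up examples, not from the original document):

# Perl 5: perl -ane 'print "$F[1]\n"' input.txt
# Perl 6 - do the splitting yourself:
for 'input.txt'.IO.lines -> $line {
    my @F = $line.words;        # rough stand-in for the -a autosplit
    # my @F = $line.split(':'); # with -F':' you would pass the separator to .split
    say @F[1];
}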
File-related operations

Reading the lines of a text file into an array

In Perl 5, a common idiom for reading the lines of a text file goes something like this:

open my $fh, "<", "file" or die "$!";
my @lines = <$fh>; # lines are NOT chomped
close $fh;

In Perl 6, this has been simplified to

my @lines = "file".IO.lines; # auto-chomped

Do not be tempted to try slurping in a file and splitting the resulting string on newlines as this will give an array with a trailing empty element, which is one more than you probably expect (it's also more complicated), e.g.:

# initialize the file to read
spurt "test-file", q:to/END/;
first line
second line
third line
END

my @lines = "test-file".IO.slurp.split(/\n/);
say @lines.elems; #-> 4

If for some reason you do want to slurp the file first, then you can call the lines method on the result of slurp instead:

my @lines = "test-file".IO.slurp.lines; # also auto-chomps

Also, be aware that $! is not really relevant for file IO operation failures in Perl 6. An IO operation that fails will return a Failure instead of throwing an exception. If you want to return the failure message, it is in the failure itself, not in $!. To do similar IO error checking and reporting as in Perl 5:

my $fh = open('./bad/path/to/file', :w) or die $fh;

Note: $fh instead of $!. Or, you can set $_ to the failure and die with $_:

my $fh = open('./bad/path/to/file', :w) orelse .die;

Any operation that tries to use the failure will cause the program to fault and terminate. Even just a call to the .self method is sufficient.

my $fh = open('./bad/path/to/file', :w).self;

Capturing the standard output of executables.

Whereas in Perl 5 you would do:

my $arg = 'Hello';
my $captured = `echo \Q$arg\E`;
my $captured = qx(echo \Q$arg\E);

Or using String::ShellQuote (because \Q…\E is not completely right):

my $arg = shell_quote 'Hello';
my $captured = `echo $arg`;
my $captured = qx(echo $arg);

In Perl 6, you will probably want to run commands without using the shell:

my $arg = 'Hello';
my $captured = run('echo', $arg, :out).out.slurp;
my $captured = run(«echo "$arg"», :out).out.slurp;

You can also use the shell if you really want to:

my $arg = 'Hello';
my $captured = shell("echo $arg", :out).out.slurp;
my $captured = qqx{echo $arg};

But beware that in this case there is no protection at all! run does not use the shell, so there is no need to escape the arguments (arguments are passed directly). If you are using shell or qqx, then everything ends up being one long string which is then passed to the shell. Unless you validate your arguments very carefully, there is a high chance to introduce shell injection vulnerabilities with such code.

Environment variables

Perl module library path

In Perl 5 one of the environment variables to specify extra search paths for Perl modules is PERL5LIB.

$ PERL5LIB="/some/module/lib" perl program.pl

In Perl 6 this is similar, one merely needs to change a number! As you probably guessed, you just need to use PERL6LIB:

$ PERL6LIB="/some/module/lib" perl6 program.p6

In Perl 5 one uses the ':' (colon) as a directory separator for PERL5LIB, but in Perl 6 one uses the ',' (comma). For example:

$ export PERL5LIB=/module/dir1:/module/dir2;

but

$ export PERL6LIB=/module/dir1,/module/dir2;

(Perl 6 does not recognize either the PERL5LIB or the older Perl environment variable PERLLIB.)
As with Perl 5, if you don't specify PERL6LIB, you need to specify the library path within the program via the use lib pragma:

use lib '/some/module/lib'

Note that PERL6LIB is more of a developer convenience in Perl 6 (as opposed to the equivalent usage of PERL5LIB in Perl 5) and shouldn't be used by module consumers as it could be removed in the future. This is because Perl 6's module loading isn't directly compatible with operating system paths.

Misc.

'0' is True

Unlike Perl 5, a string containing nothing but zero ('0') is True. As Perl 6 has types in core, that makes more sense. This also means the common pattern:

... if defined $x and length $x; # or just length() in modern perls

In Perl 6 becomes a simple

... if $x;

dump

Gone. The Perl 6 design allows for automatic transparent saving-and-loading of compiled bytecode. Rakudo supports this only for modules so far.

AUTOLOAD

The FALLBACK method provides similar functionality.

Importing specific functions from a module

In Perl 5 it is possible to selectively import functions from a given module like so:

use ModuleName qw{foo bar baz};

In Perl 6 one specifies the functions which are to be exported by using the is export role on the relevant subs; all subs with this role are then exported. Hence, the following module Bar exports the subs foo and bar but not baz:

unit module Bar;

sub foo($a) is export { say "foo $a" }
sub bar($b) is export { say "bar $b" }
sub baz($z) { say "baz $z" }

To use this module, simply use Bar and the functions foo and bar will be available

use Bar;
foo(1); #=> "foo 1"
bar(2); #=> "bar 2"

If one tries to use baz an "Undeclared routine" error is raised at compile time.

So, how does one recreate the Perl 5 behavior of being able to selectively import functions? By defining an EXPORT sub inside the module which specifies the functions to be exported and removing the module Bar statement.

The former module Bar now is merely a file called Bar.pm6 with the following contents:

sub EXPORT(*@import-list) {
    my %exportable-subs =
        '&foo' => &foo,
        '&bar' => &bar,
    ;
    my %subs-to-export;
    for @import-list -> $sub-name {
        if grep "&$sub-name", %exportable-subs.keys {
            %subs-to-export{"&$sub-name"} = %exportable-subs{"&$sub-name"};
        }
    }
    return %subs-to-export;
}

sub foo($a, $b, $c) { say "foo, $a, $b, $c" }
sub bar($a) { say "bar, $a" }
sub baz($z) { say "baz, $z" }

Note that the subs are no longer explicitly exported via the is export role, but by an EXPORT sub which specifies the subs in the module we want to make available for export; we then populate a hash containing the subs which will actually be exported. The @import-list is set by the use statement in the calling code, thus allowing us to selectively import the subs made available by the module.

So, to import only the foo routine, we do the following in the calling code:

use Bar <foo>;
foo(1); #=> "foo 1"

Here we see that even though bar is exportable, if we don't explicitly import it, it's not available for use. Hence this causes an "Undeclared routine" error at compile time:

use Bar <foo>;
foo(1);
bar(5); #!> "Undeclared routine: bar used at line 3"

However, this will work

use Bar <foo bar>;
foo(1); #=> "foo 1"
bar(5); #=> "bar 5"

Note also that baz remains unimportable even if specified in the use statement:

use Bar <foo bar baz>;
baz(3); #!> "Undeclared routine: baz used at line 2"

In order to get this to work, one obviously has to jump through many hoops.
In the standard use-case where one specifies the functions to be exported via the is export role, Perl 6 automatically creates the EXPORT sub in the correct manner for you, so one should consider very carefully whether or not writing one's own EXPORT routine is worthwhile.

Importing groups of specific functions from a module

If you would like to export groups of functions from a module, you just need to assign names to the groups, and the rest will work automagically. When you specify is export in a sub declaration, you are in fact adding this subroutine to the :DEFAULT export group. But you can add a subroutine to another group, or to multiple groups:

unit module Bar;
sub foo() is export { }                   # added by default to :DEFAULT
sub bar() is export(:FNORBL) { }          # added to the FNORBL export group
sub baz() is export(:DEFAULT :FNORBL) { } # added to both

So now you can use the Bar module like this:

use Bar;         # imports foo / baz
use Bar :FNORBL; # imports bar / baz
use Bar :ALL;    # imports foo / bar / baz

Note that :ALL is an auto-generated group that encompasses all subroutines that have an is export trait.

Core modules

Data::Dumper

In Perl 5, the Data::Dumper module was used for serialization, and for debugging views of program data structures by the programmer.

In Perl 6, these tasks are accomplished with the .perl method, which every object has.

# Given:
my @array_of_hashes = (
    { NAME => 'apple', type => 'fruit' },
    { NAME => 'cabbage', type => 'no, please no' },
);

# Perl 5
use Data::Dumper;
$Data::Dumper::Useqq = 1;
print Dumper \@array_of_hashes; # Note the backslash.

# Perl 6
say @array_of_hashes.perl; # .perl on the array, not on its reference.

In Perl 5, Data::Dumper has a more complex optional calling convention, which allows for naming the VARs. In Perl 6, placing a colon in front of the variable's sigil turns it into a Pair, with a key of the var name, and a value of the var value.

# Given:
my ( $foo, $bar ) = ( 42, 44 );
my @baz = ( 16, 32, 64, 'Hike!' );

# Perl 5
use Data::Dumper;
print Data::Dumper->Dump(
    [ $foo, $bar, \@baz ],
    [ qw( foo bar *baz ) ],
);

# Output
# $foo = 42;
# $bar = 44;
# @baz = (
#     16,
#     32,
#     64,
#     'Hike!'
# );

# Perl 6
say [ :$foo, :$bar, :@baz ].perl;
# OUTPUT: «["foo" => 42, "bar" => 44, "baz" => [16, 32, 64, "Hike!"]]␤»

There is also a Rakudo-specific debugging aid for developers called dd (Tiny Data Dumper, so tiny it lost the "t"). This will print the .perl representation, plus some extra information that could be introspected, of the given variables on STDERR:

# Perl 6
dd $foo, $bar, @baz;
# OUTPUT: «Int $foo = 42␤Int $bar = 44␤Array @baz = [16, 32, 64, "Hike!"]␤»

Getopt::Long

Switch parsing is now done by the parameter list of the MAIN subroutine.

# Perl 5
use 5.010;
use Getopt::Long;
GetOptions(

Spesh

A functionality of the #MoarVM platform that uses runtime gathered data to improve commonly used pieces of #bytecode. It is much like a JIT compiler, except that those usually output machine code rather than bytecode.

STD

STD.pm is the "standard" Perl 6 grammar definition (see https://github.com/perl6/std/) that was used to implement Perl 6. STD.pm is no longer really a "specification" in a proscriptive sense: it's more of a guideline or model for Perl 6 implementations to follow.

Stub

Stubs define the name and signature of methods whose implementation is deferred to other classes.

role Canine {
    method bark { ... } # the ... indicates a stub
}

Classes with stubs are Abstract classes.

Symbol

Fancy alternative way to denote a name.
Generally used in the context of modules linking, be it at the OS level, or at the Perl 6 #Virtual_machine level for modules generated from languages targeting these VMs. The set of imported or exported symbols is called the symbol table.

Synopsis

The current human-readable description of the Perl 6 language. Still in development. Much more a community effort than the Apocalypses and Exegeses were. The current state of the language is reflected by #roast, its #test suite, not the synopses, where speculative material is not always so flagged or more recent additions have not been documented. This is even more true of material that has not yet been implemented.

Syntax analysis

A syntax or syntactic analysis is equivalent to parsing a string to generate its parse tree.

Test suite

The Perl 6 test suite is #roast.

TheDamian

#IRC screen name for #Damian Conway, writer of the original Exegeses.

TimToady

#IRC screen name for #Larry Wall, creator of Perl. The name comes from the pronunciation of #TIMTOWTDI as a word.

token

In this context, a token is a regex that does not backtrack. In general, tokens are extracted from the source program while #Lexing.

Thunk

A piece of code that isn't immediately executed, but doesn't have an independent scope.

Tight and loose precedence

In this context, tight or tighter refers to precedence rules and is the opposite of looser. Precedence rules for new terms are always expressed in relationship with other terms, so is tighter implies that operands with that operator will be grouped before operands with the looser operator. Operators with tight precedence are grouped with priority over others and are generally tighter than most others; loose is exactly the opposite, so it is always convenient to be aware of the exact precedence of all operators used in an expression.

twine

A data structure used to hold a POD string with embedded formatting codes. For example:

=begin pod
C<foo>
=end pod
say $=pod[0].contents[0].contents.perl;

The output will be:

["", Pod::FormattingCode.new(type => "C", meta => [], config => {}, contents => ["foo"]), ""]

The twine is an array with an odd number of elements beginning with a simple string, alternating with formatting code objects and simple strings, and ending with a simple string; the formatting code objects are intertwined with the strings. The strings may be empty (as shown in the example). A twine with no formatting code will contain one simple string.

Type objects

A type object is an object that is used to represent a type or a class. Since in object oriented programming everything is an object, classes are objects too, which inherit from the ur-class which, in our case, is Mu.

value

A value is what is actually contained in a container such as a variable. Used in expressions such as lvalue, to indicate that that particular container can be assigned to.

UB

Stands for "Undefined Behavior". In other words, it is something that is not explicitly specified by the language specification.

Value type

A type is known as a value type if it is immutable and any instance of that type is interchangeable with any other instance "of the same value"—that is, any instance constructed in the same way. An instance of a value type is often called a value (but should not be confused with #lvalues or #rvalues).
For example, numbers are value types, so a number constructed one place in your program with, for instance, the literal 3 can't be changed in any way—it simply is 3—and any later use of the literal 3 can safely be pointed at the same place in memory as the first with no ill consequences.

Classes doing the roles Numeric and Stringy are among a few examples of built-in value types.

A value type is created by ensuring that an instance of the value type is immutable (i.e., its attributes cannot be modified after construction) and that its WHICH method returns the same thing every time an instance with the same value is constructed (and conversely returns a different thing every time an instance with a different value is constructed).

The language is free to optimize based on the assumption that equivalent instances of value types are interchangeable, but you should not depend on any such optimization. For instance, if you want clone to return an instance of self, or you want instance construction to be memoized so that re-construction of a previously-constructed value always returns the same instance, you currently must override this behavior yourself. (The same would hold true of object finalization, but if your instances need special destruction behavior, you almost certainly do not actually have a value type. Values should be thought of as "timeless" and existing in some ideal form outside of your program's memory, like natural values are.)

Variable

A variable is a name for a container.

Variable interpolation

The value of variables is interpolated into strings by simply inserting that variable into the string:

my $polation = "polation";
say "inter$polation"; # OUTPUT: «interpolation␤»

This might need curly braces in case it precedes some alphanumeric characters:

my $inter = "inter";
say "{$inter}polation"; # OUTPUT: «interpolation␤»

Interpolation occurs in string context, so a valid stringification method must exist for the class. More general interpolation can be achieved using the double q quoting constructs.

Virtual machine

A virtual machine is the Perl compiler entity that executes the bytecode. It can optimize the bytecode or generate machine code Just in Time. Examples are #MoarVM and #Parrot (which are intended to run Perl 6) and more generic virtual machines such as the JVM and JavaScript.

WAT

The opposite of a #DWIM; counter-intuitive behavior. It is said that to every DWIM there is a corresponding WAT. See also https://www.destroyallsoftware.com/talks/wat.

whitespace

A character or group of blank characters, used to separate words. An example is the space character « ».

6model

6model is used in the MoarVM, and provides primitives used to create an object system. It is described in this presentation by Jonathan Worthington and implemented here in MoarVM.

36 Perl 6 pod

An easy-to-use markup language for documenting Perl modules and programs

Perl 6 pod is an easy-to-use markup language. Pod can be used for writing language documentation, for documenting programs and modules, as well as for other types of document composition.

Every Pod document has to begin with =begin pod and end with =end pod. Everything between these two delimiters will be processed and used to generate documentation.

=begin pod

A very simple Perl 6 Pod document

=end pod

Block structure

A Pod document may consist of multiple Pod blocks. There are four ways to define a block: delimited, paragraph, abbreviated, and declarator; the first three yield the same result but the fourth differs.
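For example, the same head1 block can be written in the three equivalent forms (a short illustration assembled from the block forms described below):

=begin head1
Top Level Heading
=end head1

=for head1
Top Level Heading

=head1 Top Level Heading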
You can use whichever form is most convenient for your particular documentation task.

Delimited blocks

Delimited blocks are bounded by =begin and =end markers, both of which are followed by a valid Perl 6 identifier, which is the typename of the block. Typenames that are entirely lowercase (for example: =begin head1) or entirely uppercase (for example: =begin SYNOPSIS) are reserved.

=begin head1
Top Level Heading
=end head1

Configuration information

After the typename, the rest of the =begin marker line is treated as configuration information for the block. This information is used in different ways by different types of blocks, but is always specified using Perl6-ish option pairs. That is, any of:

Value is...        Specify with...              Or with...            Or with...
List               :key[$e1, $e2, ...]          :key($e1, $e2, ...)   :key<$e1 $e2 ...>
Hash               :key{$k1=>$v1, $k2=>$v2}
Boolean (true)     :key                         :key(True)            :key[True]
Boolean (false)    :!key                        :key(False)           :key[False]
String             :key<str>                    :key('str')           :key("str")
Int                :key(42)                     :key[42]
Number             :key(2.3)                    :key[2.3]

Where '$e1, $e2, ...' are list elements of type String, Int, Number, or Boolean. Lists may have mixed element types. Note that one-element lists are converted to the type of their element (String, Int, Number, or Boolean). Also note that "bigints" can be used if required.

For hashes, '$k1, $k2, ...' are keys of type Str and '$v1, $v2, ...' are values of type String, Int, Number, or Boolean.

Strings are delimited by single or double quotes. Whitespace is not significant outside of strings. Hash keys need not be quote-delimited unless they contain significant whitespace. Strings entered inside angle brackets become lists if any whitespace is used inside the angle brackets.

All option keys and values must, of course, be constants since Pod is a specification language, not a programming language. Specifically, option values cannot be closures. See Synopsis 2 for details of the various Perl 6 pair notations.

The configuration section may be extended over subsequent lines by starting those lines with an = in the first (virtual) column followed by a whitespace character. This feature is not yet completely implemented. All configuration information currently must be provided on the same line as the =begin marker line or =for name for paragraph blocks.

Paragraph blocks

Paragraph blocks begin with a =for marker and end at the next Pod directive or the first blank line. The =for marker is followed by the typename of the block plus, optionally, any configuration data as in the delimited blocks described above.

=for head1
Top Level Heading

Abbreviated blocks

Abbreviated blocks begin with an '=' sign, which is followed immediately by the typename of the block. All following data are part of the contents of the block; thus configuration data cannot be specified for an abbreviated block. The block ends at the next Pod directive or the first blank line.

=head1 Top level heading

Declarator blocks

Declarator blocks differ from the others by not having a specific type; instead they are attached to some source code.

Declarator blocks are introduced by a special comment: either #| or #=, which must be immediately followed by either a space or an opening curly brace. If followed by a space, the block is terminated by the end of line; if followed by one or more opening curly braces, the block is terminated by the matching sequence of closing curly braces.

Blocks starting with #| are attached to the code after them, and blocks starting with #= are attached to the code before them.
Since declarator blocks are attached to source code, they can be used to document classes, roles, subroutines and in general any statement or block. The WHY method can be used on these classes, roles, subroutines etc. to return the attached Pod value.

#| Base class for magicians
class Magician {
    has Int $.level;
    has Str @.spells;
}

#| Fight mechanics
sub duel(Magician $a, Magician $b) {
}
#= Magicians only, no mortals.

say Magician.WHY;       # OUTPUT: «Base class for magicians␤»
say &duel.WHY.leading;  # OUTPUT: «Fight mechanics␤»
say &duel.WHY.trailing; # OUTPUT: «Magicians only, no mortals.␤»

These declarations can extend multiple blocks:

#|( This is an example of stringification:
    * Numbers turn into strings
    * Regexes operate on said strings
    * C<with> topicalizes and places result into $_
)
sub search-in-seq( Int $end, Int $number ) {
    with (^$end).grep( /^$number/ ) {
        .say for $_<>;
    }
}
#=« Uses
    * topic
    * decont operator
»

By using a matched pair of parenthesis constructs such as () or «» the comments can extend over multiple lines. This format, however, will not translate to a multi-line display by perl6 -doc.

Block types

Pod offers a wide range of standard block types.

Ordinary paragraphs

An ordinary paragraph consists of text that is to be formatted into a document at the current level of nesting, with whitespace squeezed, lines filled, and any special inline mark-up applied.

Ordinary paragraphs consist of one or more consecutive lines of text, each of which starts with a non-whitespace character. The paragraph is terminated by the first blank line or block directive. For example:

This is an ordinary paragraph.
Its text will be squeezed and
short lines filled. It is terminated by
the first blank line.

This is another ordinary paragraph.
Its text will also be squeezed and
short lines filled. It is terminated by
the trailing directive on the next line.

This is yet another ordinary paragraph,
at the first virtual column set by the
previous directive

Ordinary paragraphs do not require an explicit marker or delimiters.

Alternatively, there is also an explicit =para marker that can be used to explicitly mark a paragraph.

=para
This is an ordinary paragraph.
Its text will be squeezed and
short lines filled.

In addition, the longer =begin para and =end para form can be used. For example:

=begin para
This is an ordinary paragraph.
Its text will be squeezed and
short lines filled.

This is still part of the same paragraph,
which continues until an...
=end para

As demonstrated by the previous example, within a delimited =begin para and =end para block, any blank lines are preserved.

Code blocks

Code blocks are used to specify source code, which should be rendered without re-justification, without whitespace-squeezing, and without recognizing any inline formatting codes. Typically these blocks are used to show examples of code, mark-up, or other textual specifications, and are rendered using a fixed-width font.

A code block may be implicitly specified as one or more lines of text, each of which starts with a whitespace character. The implicit code block is then terminated by a blank line. For example:

This ordinary paragraph introduces a code block:

    my $name = 'John Doe';
    say $name;

Code blocks can also be explicitly defined by enclosing them in =begin code and =end code

=begin code
my $name = 'John Doe';
say $name;
=end code

I/O blocks

Pod provides blocks for specifying the input and output of programs.
The =input block is used to specify pre-formatted keyboard input, which should be rendered without re-justification or squeezing of whitespace.

The =output block is used to specify pre-formatted terminal or file output, which should also be rendered without re-justification or whitespace-squeezing.

Lists

Unordered lists

Lists in Pod are specified as a series of =item blocks. For example:

The three suspects are:

=item Happy
=item Sleepy
=item Grumpy

The three suspects are:

• Happy
• Sleepy
• Grumpy

Definition lists

Lists that define terms or commands use =defn, equivalent to the DL lists in HTML

=defn Happy
When you're not blue.

=defn Blue
When you're not happy.

will be rendered as

Happy
When you're not blue.

Blue
When you're not happy.

Multi-level lists

Lists may be multi-level, with items at each level specified using the =item1, =item2, =item3, etc. blocks. Note that =item is just an abbreviation for =item1. For example:

=item1 Animal
=item2 Vertebrate
=item2 Invertebrate

=item1 Phase
=item2 Solid
=item2 Liquid
=item2 Gas

• Animal
  • Vertebrate
  • Invertebrate
• Phase
  • Solid
  • Liquid
  • Gas

Multi-paragraph lists

Using the delimited form of the =item block (=begin item and =end item), we can specify items that contain multiple paragraphs. For example:

Let's consider two common proverbs:

=begin item
I<The rain in Spain falls mainly on the plain.>

This is a common myth and an unconscionable slur on the Spanish people, the majority of whom are extremely attractive.
=end item

=begin item
I<The early bird gets the worm.>

In deciding whether to become an early riser, it is worth considering whether you would actually enjoy annelids for breakfast.
=end item

As you can see, folk wisdom is often of dubious value.

Let's consider two common proverbs:

• The rain in Spain falls mainly on the plain.

This is a common myth and an unconscionable slur on the Spanish people, the majority of whom are extremely attractive.

• The early bird gets the worm.

In deciding whether to become an early riser, it is worth considering whether you would actually enjoy annelids for breakfast.

As you can see, folk wisdom is often of dubious value.

Tables

See the Pod 6 tables section below for details on specifying tables in Pod.

Comments

Comments are useful for meta-documentation (documenting the documentation). Single-line comments use the comment keyword:

=comment Add more here about the algorithm

For multi-line comments use a delimited comment block:

=begin comment
This comment is
multi-line.
=end comment

Semantic blocks

All uppercase block typenames are reserved for specifying standard documentation, publishing, source components, or meta-information.

=NAME
=AUTHOR
=VERSION
=TITLE
=SUBTITLE

Formatting codes

Formatting codes provide a way to add inline mark-up to a piece of text. All Pod formatting codes consist of a single capital letter followed immediately by a set of single or double angle brackets; Unicode double angle brackets may be used.

Formatting codes may nest other formatting codes.

The following codes are available: B, C, E, I, K, L, N, P, R, T, U, V, X, and Z.

Bold

To format a text in bold enclose it in B< >

Perl 6 is B<awesome>

Perl 6 is awesome

Italic

To format a text in italic enclose it in I< >

Perl 6 is I<awesome>

Perl 6 is awesome

Underlined

To underline a text enclose it in U< >

Perl 6 is U<awesome>

Code

To flag text as Code and treat it verbatim enclose it in C< >

C<my $var = 1; say $var;>

my $var = 1; say $var;

Links

To create a link enclose it in L< >

A vertical bar (optional) separates label and target.

The target location can be a URL (first example) or a local POD document (second example).
Local file names are relative to the base of the project, not the current document.

Perl 6 homepage L<https://perl6.org>
L<Perl 6 homepage|https://perl6.org>

Perl 6 homepage https://perl6.org
Perl 6 homepage

Structure

To create a link to a section in the same document:

This code is not implemented in Pod::To::HTML, but is partially implemented in Pod::To::BigPage.

A second kind of link — the P<> or placement link — works in the opposite direction. Instead of directing focus out to another document, it allows you to assimilate the contents of another document into your own.

In other words, the P<> formatting code takes a URI and (where possible) inserts the contents of the corresponding document inline in place of the code itself.

P<> codes are handy for breaking out standard elements of your documentation set into reusable components that can then be incorporated directly into multiple documents. For example:

=DISCLAIMER
P<http://www.MegaGigaTeraPetaCorp.com/std/disclaimer.txt>

might produce:

Disclaimer

ABSOLUTELY NO WARRANTY IS IMPLIED. NOT EVEN OF ANY KIND. WE HAVE SOLD YOU THIS SOFTWARE WITH NO HINT OF A SUGGESTION THAT IT IS EITHER USEFUL OR USABLE. AS FOR GUARANTEES OF CORRECTNESS...DON'T MAKE US LAUGH! AT SOME TIME IN THE FUTURE WE MIGHT DEIGN TO SELL YOU UPGRADES THAT PURPORT TO ADDRESS SOME OF THE APPLICATION'S MANY DEFICIENCIES, BUT NO PROMISES THERE EITHER. WE HAVE MORE LAWYERS ON STAFF THAN YOU HAVE TOTAL EMPLOYEES, SO DON'T EVEN *THINK* ABOUT SUING US. HAVE A NICE DAY.

If a renderer cannot find or access the external data source for a placement link, it must issue a warning and render the URI directly in some form, possibly as an outwards link. For example:

Disclaimer

See: http://www.MegaGigaTeraPetaCorp.com/std/disclaimer.txt

You can use any of the following URI forms (see #Links) in a placement link.

Comments

A comment is text that is never rendered.

To create a comment enclose it in Z< >

Perl 6 is awesome Z<Of course it is!>

Perl 6 is awesome

Notes

Notes are rendered as footnotes.

To create a note enclose it in N< >

Perl 6 is multi-paradigmatic N<Supporting Procedural, Object Oriented, and Functional programming>

Keyboard input

To flag text as keyboard input enclose it in K< >

Replaceable

The R<> formatting code specifies that the contained text is a replaceable item, a placeholder, or a metasyntactic variable. It is used to indicate a component of a syntax or specification that should eventually be replaced by an actual value. For example:

The basic ln command is: ln R<source_file> R<target_file>

or:

Then enter your details at the prompt:

=for input

Terminal output

To flag text as terminal output enclose it in T< >

Hello T<John Doe>

Unicode

To include Unicode code points or HTML5 character references in a Pod document, enclose them in E< >

E< > can enclose a number; that number is treated as the decimal Unicode value for the desired code point. It can also enclose explicit binary, octal, decimal, or hexadecimal numbers using the Perl 6 notations for explicitly based numbers.

Perl 6 makes considerable use of the E<171> and E<187> characters.
Perl 6 makes considerable use of the E<laquo> and E<raquo> characters.
Perl 6 makes considerable use of the E<0b10101011> and E<0b10111011> characters.
Perl 6 makes considerable use of the E<0o253> and E<0o273> characters.
Perl 6 makes considerable use of the E<0d171> and E<0d187> characters.
Perl 6 makes considerable use of the E<0xAB> and E<0xBB> characters.

Perl 6 makes considerable use of the « and » characters.
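Putting a few of the codes from this section together, here is a small sketch (the command and the output text are invented for illustration):

=begin para
Type K<perl6 R<your-script.p6>> at the prompt and the program prints
T<Hello, world!>, followed by the E<0x2764> character.
=end para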
Verbatim text

This code is not implemented by Pod::To::HTML, but is implemented in Pod::To::BigPage.

The V<> formatting code treats its entire contents as being verbatim, disregarding every apparent formatting code within it. For example:

The B<V< V<> >> formatting code disarms other codes such as V< I<>, C<>, B<>, and M<> >.

Note, however, that the V<> code only changes the way its contents are parsed, not the way they are rendered. That is, the contents are still wrapped and formatted like plain text, and the effects of any formatting codes surrounding the V<> code are still applied to its contents. For example the previous example is rendered:

The V<> formatting code disarms other codes such as I<>, C<>, B<>, and M<> .

Indexing terms

Anything enclosed in an X<> code is an index entry. The contents of the code are both formatted into the document and used as the (case-insensitive) index entry:

An X<array> is an ordered list of scalars indexed by number, starting with 0. A X<hash> is an unordered collection of scalar values indexed by their associated string key.

You can specify an index entry in which the indexed text and the index entry are different, by separating the two with a vertical bar:

An X<array|arrays> is an ordered list of scalars indexed by number, starting with 0. A X<hash|hashes> is an unordered collection of scalar values indexed by their associated string key.

In the two-part form, the index entry comes after the bar and is case-sensitive.

You can specify hierarchical index entries by separating indexing levels with commas:

An X<array| arrays, definition of > is an ordered list of scalars indexed by number, starting with 0. A X<hash| hashes, definition of > is an unordered collection of scalar values indexed by their associated string key.

You can specify two or more entries for a single indexed text, by separating the entries with semicolons:

A X<hash| hashes, definition of; associative arrays > is an unordered collection of scalar values indexed by their associated string key.

The indexed text can be empty, creating a "zero-width" index entry:

X<|puns, deliberate> This is called the "Orcish Maneuver" because you "OR" the "cache".

Rendering Pod

HTML

In order to generate HTML from Pod, you need the Pod::To::HTML module. If it is not already installed, install it by running the following command:

zef install Pod::To::HTML

Using the terminal run the following command:

perl6 --doc=HTML input.pod6 > output.html

Markdown

In order to generate Markdown from Pod, you need the Pod::To::Markdown module. If it is not already installed, install it by running the following command:

zef install Pod::To::Markdown

Using the terminal run the following command:

perl6 --doc=Markdown input.pod6 > output.md

Text

In order to generate Text from Pod, you can use the default Pod::To::Text module.

Using the terminal, run the following command:

perl6 --doc=Text input.pod6 > output.txt

You can omit the =Text portion:

perl6 --doc input.pod6 > output.txt

You can even embed Pod directly in your program and add the traditional Unix command line "--man" option to your program with a multi MAIN subroutine like this:

multi MAIN(Bool :$man) {
    run $*EXECUTABLE, '--doc', $*PROGRAM;
}

Now myprogram --man will output your Pod rendered as a man page.

Accessing Pod

In order to access Pod documentation from within a Perl 6 program it is required to use the special = twigil, as documented in the variables section.
The = twigil provides the introspection over the Pod structure, providing a Pod::Block tree root from which it is possible to access the whole structure of the Pod document.

As an example, the following piece of code introspects its own Pod documentation:

=begin pod

=head1 This is a head1 title

This is a paragraph.

=head2 Subsection

Here some text for the subsection.

=end pod

for $=pod -> $pod-item {
    for $pod-item.contents -> $pod-block {
        $pod-block.perl.say;
    }
}

producing the following output:

Pod::Heading.new(level => 1, config => {}, contents => [Pod::Block::Para.new(config => {}, contents => ["This is a head1 title"])]);
Pod::Block::Para.new(config => {}, contents => ["This is a paragraph."]);
Pod::Heading.new(level => 2, config => {}, contents => [Pod::Block::Para.new(config => {}, contents => ["Subsection"])]);
Pod::Block::Para.new(config => {}, contents => ["Here some text for the subsection."]);

37 Pod 6 tables

The good, the bad and the ugly

The official specification for Perl 6 POD tables is located in the Documentation specification here: Tables. Although Pod 6 specifications are not completely handled properly yet, several projects are ongoing to correct the situation; one such project is ensuring the proper handling of Pod 6 tables. As part of that effort, this document explains the current state of Pod 6 tables by example: valid tables, invalid tables, and ugly tables (i.e., valid tables that, because of sloppy construction, may result in something different than the user expects).

Restrictions

1. The only valid column separators are either visible (' | ' or ' + ') (note at least one space is required before and after the visible column separators) or invisible [two or more contiguous whitespace (WS) characters (e.g., '  ')]. Column separators are not normally recognized as such at the left or right side of a table, but one on the right side may result in one or more empty cells depending upon the number of the cells in other rows. (Note that a pipe or plus character meant as part of cell data will result in an unintended extra column unless the character is escaped with a backslash, e.g., '\|' or '\+'.)

2. Mixing visible and invisible column separators in the same table is illegal.

3. The only valid row separator characters are '_', '-', '+', ' ', '|', and '='.

4. Consecutive interior row-separator lines are illegal.

5. Leading and trailing row-separator lines generate a warning.

6. Formatting blocks in table cells currently are ignored and treated as plain text.

HINT: During development, use of the environment variable RAKUDO_POD6_TABLE_DEBUG will show you how Rakudo interprets your pod tables before they are passed to renderers such as Pod::To::HTML, Pod::To::Text, and Pod::To::Markdown.

Best practices

HINT: Not adhering to the following best practices may require more table processing due to additional looping over table rows.

1. Use of WS for column separators is fragile, and they should only be used for simple tables. The Ugly Tables section below illustrates the problem.

2. Align table columns and rows carefully. See the examples in later best practices.

3. Don't use visual borders on the table.

4. For tables with a heading and single- or multi-line content, use one or more contiguous equal signs ('=') as the row separator after the heading, and use one or more contiguous hyphens ('-') as the row separator in the content portion of the table.
For example: • Heading and single- or multi-line content: =begin table hdr col 0 | hdr col 1 ====================== row 0 | row 0 col 0 | col 1 ---------------------- row 1 | row 1 col 0 | col 1 ---------------------- =end table =begin table hdr col 0 | hdr col 1 ====================== row 0 col 0 | row 0 col 1 row 1 col 0 | row 1 col 1 =end table 5. For tables with no header and multi-line content, use one or more contiguous hyphens ('-') as the row separator in the content portion of the table. For example: =begin table row 0 | row 0 col 0 | col 1 ---------------------- row 1 col 0 | row 1 col 1 =end table 6. For tables with many rows and no multi-line content, using no row separators is fine. However, with one or more rows with multi-line content, it is easier to ensure proper results by using a row separator line (visible or invisible) between every content row. 7. Ensure intentionally empty cells have column separators, otherwise expect a warning about short rows being filled with empty cells. (Table rows will always have the same number of cells as the row with the most cells. Short rows are padded on the right with empty cells and generate a warning.) 8. Adding a caption to a table is possible using the :caption attribute on the =begin table line, as in this example: =begin table :caption('My Tasks') mow lawn take out trash =end table Although not a good practice, there is currently in use an alternate method of defining a caption, as shown in this example: =begin table :config{caption => "My Tasks"} mow lawn take out trash =end table Note that the alternative method of putting the caption in the config hash was necessary before the :caption method was implemented, but that method is now considered deprecated. The practice will generate a warning in the upcoming version 6.d, and it will raise an exception in version 6.e. Good tables Following are examples of valid (Good) tables (taken from the current Specification Tests). =begin table The Shoveller Eddie Stevens King Arthur's singing shovel Blue Raja Geoffrey Smith Master of cutlery Mr Furious Roy Orson Ticking time bomb of fury The Bowler Carol Pinnsler Haunted bowling ball =end table =table Constants 1 Variables 10 Subroutines 33 Everything else 57 =for table mouse | mice horse | horses elephant | elephants =table Animal | Legs | Eats ======================= Human + 2 + Pizza Shark + 0 + Fish =table Superhero | Secret | | Identity | Superpower ==============|=================|================================ The Shoveller | Eddie Stevens | King Arthur's singing shovel =begin table Secret Superhero Identity Superpower ============= =============== =================== The Shoveller Eddie Stevens King Arthur's singing shovel Blue Raja Geoffrey Smith Master of cutlery Mr Furious Roy Orson Ticking time bomb of fury The Bowler Carol Pinnsler Haunted bowling ball =end table =table X | O | ---+---+--- | X | O ---+---+--- | | X =table X O =========== X O =========== X =begin table foo bar =end table Following are examples of invalid (Bad) tables, and they should trigger an unhandled exception during parsing.
• Mixed column separator types in the same row are not allowed: =begin table r0c0 + r0c1 | r0c3 =end table • Mixed visual and whitespace column separator types in the same table are not allowed: =begin table r0c0 + r0c1 | r0c3 r1c0 r0c1 r0c3 =end table • Two consecutive interior row separators are not allowed: =begin table r0c0 | r0c1 ============ ============ r1c0 | r1c1 =end table Ugly tables Following are examples of valid tables that are probably intended to be two columns, but the columns are not aligned well so each will parse as a single-column table. • Unaligned columns with WS column separators: Notice the second row has the two words separated by only one WS character, while it takes at least two adjacent WS characters to define a column separation. This is a valid table but will be parsed as a single-column table. =begin table r0c0 r0c1 r1c0 r0c1 =end table • Unaligned columns with visual column separators: Notice the second row has the two words separated by a visible character ('|') but the character is not recognized as a column separator because it doesn't have an adjacent WS character on both sides of it. Although this is a legal table, the result will not be what the user intended because the first row has two columns while the second row has only one column, and it will thus have an empty second column. =begin table r0c0 | r0c1 r1c0 |r0c1 =end table 38 Terms Perl 6 terms Most syntactic constructs in Perl 6 can be categorized into terms and operators. Here you can find an overview of different kinds of terms. Literals Int 42 12_300_00 Int literals consist of digits and can contain underscores between any two digits. To specify a base other than ten, use the colon-pair form :radix<number>. Rat 12.34 1_200.345_678 Rat literals (rational numbers) contain two integer parts joined by a dot. Note that trailing dots are not allowed, so you have to write 1.0 instead of 1. (this rule is important because there are infix operators starting with a dot, for example the .. Range operator). Num 12.3e-32 3e8 Num literals (floating point numbers) consist of Rat or Int literals followed by an e and a (possibly negative) exponent. 3e8 constructs a Num with value 3 * 10**8. Str 'a string' 'I\'m escaped!' "I don't need to be" "\"But I still can be,\" he said." q|Other delimiters can be used too!| String literals are most often created with ' or ", however strings are actually a powerful sub-language of Perl 6. See Quoting Constructs. Regex / match some text / rx/slurp \s rest (.*) $/ These forms produce regex literals. See Quoting Constructs. Pair a => 1 'a' => 'b' :identifier :!identifier :identifier<value> :identifier<value1 value2> :identifier($value) :identifier['val1', 'val2'] :identifier{key1 => 'val1', key2 => 'value2'} :valueidentifier :$item :@array :%hash :&callable Pair objects can be created either with infix:«=>» (which auto-quotes the left-hand side if it is an identifier), or with the various colon-pair forms. Those almost always start with a colon and then are followed either by an identifier or the name of an already existing variable (whose name without the sigil is used as the key and whose value is used as the value of the pair). There is a special form where an integer value is immediately after the colon and the key is immediately after the value. In the identifier form of a colon-pair, the optional value can be any circumfix. If it is left blank, the value is Bool::True. The value of the :!identifier form is Bool::False.
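As a quick illustration, here is a small sketch of a few of these forms in action (the names and values are arbitrary):

my $value = 42;
say (a => 1).key;          # OUTPUT: «a␤» (the left-hand side is auto-quoted)
say (:identifier).value;   # OUTPUT: «True␤» (a blank value defaults to Bool::True)
say (:!identifier).value;  # OUTPUT: «False␤»
say (:$value).perl;        # OUTPUT: «:value(42)␤» (the variable name becomes the key)
say (:3days).perl;         # OUTPUT: «:days(3)␤» (the integer-first special form)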
If used in an argument list, all of these forms count as named arguments, with the exception of 'quoted string' => $value. List () 1, 2, 3 <a b c> «a b c» qw/a b c/ List literals are: the empty pair of parentheses (), a comma-separated list, or several quoting constructs. term * Creates an object of type Whatever. See the Whatever documentation for more details. Identifier terms There are built-in identifier terms in Perl 6, which are listed below. In addition, one can add new identifier terms with the syntax: sub term:<forty-two> { 42 }; say forty-two or as constants: constant forty-two = 42; say forty-two; term self Inside a method, self refers to the invocant (i.e. the object the method was called on). If used in a context where it doesn't make sense, a compile-time exception of type X::Syntax::NoSelf is thrown. term now Returns an Instant object representing the current time. It includes leap seconds and is, as such, a few dozen seconds larger than time: say (now - time).Int; # OUTPUT: «37␤» term time Returns the current POSIX time as an Int. See now for a high-resolution timestamp that includes leap seconds. term rand Returns a pseudo-random Num in the range 0..^1. term π Returns the number π, i.e., the ratio between circumference and diameter of a circle. The ASCII equivalent of π is pi. term pi Returns the number π, i.e., the ratio between circumference and diameter of a circle. pi is the ASCII equivalent of π. term τ Returns the number τ, i.e., the ratio between circumference and radius of a circle. The ASCII equivalent of τ is tau. term tau Returns the number τ, i.e., the ratio between circumference and radius of a circle. tau is the ASCII equivalent of τ. term 𝑒 Returns Euler's number. The ASCII equivalent of 𝑒 is e. term e Returns Euler's number. e is the ASCII equivalent of 𝑒. term i Returns the imaginary unit (for Complex numbers). Variables Variables are discussed in the variable language docs. Constants Constants are similar to variables without a container, and thus cannot be rebound. However, their initializers are evaluated at BEGIN time: constant speed-of-light = 299792458; # m/s constant @foo = 1, 2, 3; constant &talk = &say; talk speed-of-light²; # OUTPUT: «89875517873681764␤» talk @foo; # OUTPUT: «(1 2 3)␤» Compile-time evaluation means you should be careful when using constants inside modules, which get automatically pre-compiled, and so the value of the constant would not change even between multiple executions of the program: # Foo.pm6 unit module Foo; constant comp-time = DateTime.now; # The value of the constant remains the same even though our script # is executed multiple times: $ perl6 -I. -MFoo -e 'say Foo::comp-time' 2018-06-17T18:18:50.021484-04:00 $ perl6 -I. -MFoo -e 'say Foo::comp-time' 2018-06-17T18:18:50.021484-04:00 Constants are declared with the keyword constant followed by an identifier with an optional sigil. Constants are our-scoped by default. constant foo = 42; my constant $baz = rand; our constant @foo = 1, 2, 3; constant %bar = %(:42foo, :100bar); NOTE: if you're using the Rakudo compiler, you need version 2018.08 or newer for type constraints and auto-coercion on constants to be available. Auto-coercion on %-sigilled constants requires the 6.d language, a preview version of which can be enabled with the use v6.d.PREVIEW pragma. An optional type constraint can be used, in which case the use of a scope declarator is required: # !!WRONG!!
missing scope declarator before type: Int constant bar = 42; # RIGHT: our Int constant bar = 42. Unlike variables, you cannot parameterize @-, %-, and &-sigilled constants by specifying the parameterization type in the declarator itself: # !!WRONG!! cannot parameterize @-sigilled constant with Int our Int constant @foo = 42; # OK: parameterized types as values are fine constant @foo = Array[Int].new: 42; The reason for the restriction is that constants with @ and % sigils default to List and Map types, which cannot be parameterized. To keep things simple and consistent, parameterization was simply disallowed in these constructs. The @-, %-, and &-sigilled constants specify an implied typecheck of the given value for the Positional, Associative, and Callable roles respectively. The @-sigilled constants (and, as of the 6.d language version, the %-sigilled constants as well) perform auto-coercion of the value if it does not pass the implied typecheck. The @-sigilled constants will coerce using method cache and %-sigilled constants coerce using method Map. constant @foo = 42; @foo.perl.say; # OUTPUT: «(42,)» constant @bar = [<a b c>]; @bar.perl.say; # OUTPUT: «["a", "b", "c"]» use v6.d.PREVIEW; constant %foo = <foo bar>; %foo.perl.say; # OUTPUT: «Map.new((:foo("bar")))» constant %bar = {:10foo, :72bar}; %bar.perl.say; # OUTPUT: «{:bar(72), :foo(10)}» # Pair is already Associative, so it remains a Pair constant %baz = :72baz; %baz.perl.say; # OUTPUT: «:baz(72)» For convenience and consistency reasons, you can use the binding operator ( := ) instead of the assignment operator, use a backslash before the sigilless name of the constant variable (same as with sigilless variables), and even omit the name of the constant entirely to have an anonymous constant. Since you can't refer to anonymous entities, you may be better off using a BEGIN phaser instead, for clarity. constant %foo := :{:42foo}; constant \foo = 42; constant = 'anon'; 39 Testing Writing and running tests in Perl 6 Testing code is an integral part of software development. Tests provide automated, repeatable verifications of code behavior, and ensure your code works as expected. In Perl 6, the Test module provides a testing framework, which is also used by Perl 6's official spectest suite. The testing functions emit output conforming to the Test Anything Protocol. In general, they are used in sink context: ok check-name($meta, :$relaxed-name), "name has a hyphen rather than '::'" but all functions also return a Boolean indicating whether the test was successful or not, which can be used to print a message if the test fails: ok check-name($meta, :$relaxed-name), "name has a hyphen rather than '::'" \ or diag "\nTo use hyphen in name, pass :relaxed-name to meta-ok\n"; Writing tests As with any Perl project, the tests live under the t directory in the project's base directory. A typical test file looks something like this: use v6.c; use Test; # a Standard module included with Rakudo use lib 'lib'; plan $num-tests; # .... tests done-testing; # optional with 'plan' We ensure that we're using Perl 6, via the use v6.c pragma, then we load the Test module and specify where our libraries are. We then specify how many tests we plan to run (such that the testing framework can tell us if more or fewer tests were run than we expected), and, when finished with the tests, we use done-testing to tell the framework we are done.
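For instance, a complete, minimal test file along those lines might look like this (a sketch; the checked expressions are invented for illustration):

use v6.c;
use Test;       # the standard testing module
use lib 'lib';

plan 2;         # two tests are declared up front

ok 1 + 1 == 2, 'addition works';
is 'Hello'.lc, 'hello', 'lc lowercases a string';

# no done-testing needed here: plan already told the harness what to expect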
Note that routines in the Test module are not thread-safe. This means you should not attempt to use the testing routines in multiple threads simultaneously, as the TAP output might come out of order and confuse the program interpreting it. There are no current plans to make it thread-safe. If threaded testing is crucial to you, you may find some suitable ecosystem modules to use instead of Test for your testing needs. Running tests Tests can be run individually by specifying the test filename on the command line: $ perl6 t/test-filename.t Or via the prove command from Perl 5, using --exec to specify the executable that runs the tests: $ prove --exec perl6 -r t To abort the test suite upon first failure, set the PERL6_TEST_DIE_ON_FAIL environment variable: $ PERL6_TEST_DIE_ON_FAIL=1 perl6 t/test-filename.t The same variable can be used within the test file. Set it before loading the Test module: BEGIN %*ENV<PERL6_TEST_DIE_ON_FAIL> = 1; use Test; ... Test plans Test plans use plan to declare how many tests are going to be run or, as the case may be, skipped. If no plan is declared, done-testing is used to declare the end of the tests. Testing return values The Test module exports various functions that check the return value of a given expression and produce standardized test output. In practice, the expression will often be a call to a function or method that you want to unit-test. ok and nok will match True and False. However, where possible it's better to use one of the specialized comparison test functions below, because they can print more helpful diagnostic output in case the comparison fails. By string comparison is and isnt test for equality or inequality using the proper operator, depending on the object (or class) being handled. By approximate numeric comparison is-approx compares numbers with a certain precision, which can be absolute or relative. It can be useful for numeric values whose precision will depend on the internal representation. By structural comparison Structures can also be compared using is-deeply, which will check that the internal structures of the compared objects are the same. By arbitrary comparison You can use any kind of comparison with cmp-ok, which takes as an argument the function or operator that you want to use for the comparison. By object type isa-ok tests whether an object is of a certain type. By method name can-ok is used on objects to check whether they have that particular method. By role • does-ok($variable, $role, $description?) does-ok checks whether the given variable can do a certain Role. By regex like and unlike check using regular expressions; in the first case the test passes if a match exists, in the second case when it does not. Testing modules Modules are tentatively loaded with use-ok, which fails if they fail to load. Testing exceptions dies-ok and lives-ok are opposite ways of testing code; the first checks that it throws an exception, the second that it does not; throws-like checks that the code throws the specific exception it gets handed as an argument; fails-like, similarly, checks if the code returns a specific type of Failure. eval-dies-ok and eval-lives-ok work similarly on strings that are evaluated prior to testing. Grouping tests The result of a group of subtests is only ok if all subtests are ok; they are grouped using subtest.
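As a sketch of how several of these functions combine (the tested expressions are arbitrary):

use Test;
plan 2;

is 2 + 2, 4, 'is compares the got and expected values';

# subtest groups related checks; it only passes if all of its own tests pass
subtest 'exception behavior', {
    plan 2;
    throws-like { die 'boom' }, X::AdHoc, 'die with a string throws X::AdHoc';
    lives-ok   { my $x = 42 },           'ordinary code lives';
};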
Skipping tests Sometimes tests just aren't ready to be run, for instance a feature might not yet be implemented, in which case tests can be marked as todo. Or it could be the case that a given feature only works on a particular platform, in which case one would skip the test on other platforms; skip-rest will skip the remaining tests (instead of a particular number given as argument); bail-out will simply exit the tests with a message. Manual control If the convenience functionality documented above does not suit your needs, you can use the following functions to manually direct the test harness output; pass will say a test has passed, and diag will print a (possibly) informative message. 40 Traps to avoid Traps to avoid when getting started with Perl 6 When learning a programming language, possibly with the background of being familiar with another programming language, there are always some things that can surprise you and might cost valuable time in debugging and discovery. This document aims to show common misconceptions in order to avoid them. During the making of Perl 6 great pains were taken to get rid of warts in the syntax. When you whack one wart, though, sometimes another pops up. So a lot of time was spent finding the minimum number of warts or trying to put them where they would rarely be seen. Because of this, Perl 6's warts are in different places than you may expect them to be when coming from another language. Variables and constants Constants are computed at compile time Constants are computed at compile time, so if you use them in modules keep in mind that their values will be frozen due to pre-compilation of the module itself: # WRONG (most likely): unit module Something::Or::Other; constant $config-file = "config.txt".IO.slurp; The $config-file will be slurped during pre-compilation and changes to the config.txt file won't be reloaded when you start the script again; only when the module is re-compiled. Avoid using a container; instead, bind the value to a variable, which offers behavior similar to a constant but allows the value to be updated: # Good; file gets updated from 'config.txt' file on each script run: unit module Something::Or::Other; my $config-file := "config.txt".IO.slurp; Assigning to Nil produces a different value, usually Any Actually, assigning to Nil reverts the variable to its default value. So: my @a = 4, 8, 15, 16; @a[2] = Nil; say @a; # OUTPUT: «[4 8 (Any) 16]␤» In this case, Any is the default value of an Array element. You can purposefully assign Nil as a default value: my %h is default(Nil) = a => Nil; say %h; # OUTPUT: «Hash %h = {:a(Nil)}␤» Or bind a value to Nil if that is the result you want: @a[3] := Nil; say @a; # OUTPUT: «[4 8 (Any) Nil]␤» This trap might be hidden in the result of functions, such as matches: my $result2 = 'abcdef' ~~ / dex /; say "Result2 is { $result2.^name }"; # OUTPUT: «Result2 is Any␤» A Match will be Nil if it finds nothing; however, assigning that Nil to $result2 above reverts the variable to its default value, which is Any, as shown. Using a block to interpolate anon state vars The programmer intended for the code to count the number of times the routine is called, but the counter is not increasing: sub count-it { say "Count is {$++}" } count-it; count-it; # OUTPUT: # Count is 0 # Count is 0 When it comes to state variables, the block in which the vars are declared gets cloned (and the vars get initialized anew) whenever the block enclosing it is re-entered.
This lets constructs like the one below behave appropriately; the state variable inside the loop gets initialized anew each time the sub is called: sub count-it { for ^3 { state $count = 0; say "Count is $count"; $count++; } } count-it; say "…and again…"; count-it; # OUTPUT: # Count is 0 # Count is 1 # Count is 2 # …and again… # Count is 0 # Count is 1 # Count is 2 The same layout exists in our buggy program. The { } inside a double-quoted string isn't merely an interpolation to execute a piece of code. It's actually its own block which, just as in the example above, gets cloned each time the sub is entered, re-initializing our state variable. To get the right count, we need to get rid of that inner block, using a scalar contextualizer to interpolate our piece of code instead: sub count-it { say "Count is $($++)" } count-it; count-it; # OUTPUT: # Count is 0 # Count is 1 Alternatively, you can also use the concatenation operator instead: sub count-it { say "Count is " ~ $++ } Using set subroutines on Associative when the value is falsy Using (cont), ∋, ∍, (elem), ∈, or ∊ on classes implementing Associative will return False if the value of the key is falsy: enum Foo «a b»; say Foo.enums ∋ 'a'; # OUTPUT: # False Instead, use :exists: enum Foo «a b»; say Foo.enums<a>:exists; # OUTPUT: # True Blocks Beware of empty "blocks" Curly braces are used to declare blocks. However, empty curly braces will declare a hash. $ = {say 42;} # Block $ = {;} # Block $ = {…} # Block $ = { } # Hash You can use the second form if you effectively want to declare an empty block: my &does-nothing = {;}; say does-nothing(33); # OUTPUT: «Nil␤» Objects Assigning to attributes Newcomers often think that, because attributes with accessors are declared as has $.x, they can assign to $.x inside the class. That's not the case. For example class Point { has $.x; has $.y; method double { $.x *= 2; # WRONG $.y *= 2; # WRONG self; } } say Point.new(x => 1, y => -2).double.x # OUTPUT: «Cannot assign to an immutable value␤» the first line inside the method double is marked with # WRONG because $.x, short for $( self.x ), is a call to a read-only accessor. The syntax has $.x is short for something like has $!x; method x() { $!x }, so the actual attribute is called $!x, and a read-only accessor method is automatically generated. Thus the correct way to write the method double is method double { $!x *= 2; $!y *= 2; self; } which operates on the attributes directly. BUILD prevents automatic attribute initialization from constructor arguments When you define your own BUILD submethod, you must take care of initializing all attributes by yourself. For example class A { has $.x; has $.y; submethod BUILD { $!y = 18; } } say A.new(x => 42).x; # OUTPUT: «Any␤» leaves $!x uninitialized, because the custom BUILD doesn't initialize it. Note: Consider using TWEAK instead; Rakudo has supported the TWEAK method since release 2016.11. One possible remedy is to explicitly initialize the attribute in BUILD: submethod BUILD(:$x) { $!y = 18; $!x := $x; } which can be shortened to: submethod BUILD(:$!x) { $!y = 18; }
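Since the note above recommends TWEAK, here is a minimal sketch of the same class using it. TWEAK runs after the default attribute initialization, so the constructor argument for $!x is honored without any extra bookkeeping:

class A {
    has $.x;
    has $.y;
    submethod TWEAK {
        # $!x has already been initialized from the constructor argument
        $!y = 18;
    }
}
say A.new(x => 42).x; # OUTPUT: «42␤»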
Whitespace Whitespace in regexes does not match literally say 'a b' ~~ /a b/; # OUTPUT: «False␤» Whitespace in regexes is, by default, considered an optional filler without semantics, just like in the rest of the Perl 6 language. Ways to match whitespace: • \s to match any one whitespace, \s+ to match at least one • ' ' (a blank in quotes) to match a single blank • \t, \n for specific whitespace (tab, newline) • \h, \v for horizontal, vertical whitespace • .ws, a built-in rule for whitespace that oftentimes does what you actually want it to do • with m:s/a b/ or m:sigspace/a b/, the blank in the regexes matches arbitrary whitespace Ambiguities in parsing While some languages will let you get away with removing as much whitespace between tokens as possible, Perl 6 is less forgiving. The overarching mantra is we discourage code golf, so don't scrimp on whitespace (the more serious underlying reason behind these restrictions is single-pass parsing and the ability to parse Perl 6 programs with virtually no backtracking). The common areas you should watch out for are: Block vs. Hash slice ambiguity # WRONG; trying to hash-slice a Bool: while ($++ > 5){ .say } # RIGHT: while ($++ > 5) { .say } # EVEN BETTER; Perl 6 does not require parentheses there: while $++ > 5 { .say } Reduction vs. Array constructor ambiguity # WRONG; ambiguity with [<] meta op: my @a = [[<foo>],]; # RIGHT; reductions cannot have spaces in them, so put one in: my @a = [[ <foo>],]; # No ambiguity here, natural spaces between items suffice to resolve it: my @a = [[<foo bar ber>],]; Less than vs. Word quoting/Associative indexing # WRONG; trying to index 3 associatively: say 3<5>4 # RIGHT; prefer some extra whitespace around infix operators: say 3 < 5 > 4 Captures Containers versus values in a capture Beginners might expect a variable in a Capture to supply its current value when that Capture is later used. For example: my $a = 2; say join ",", ($a, ++$a); # OUTPUT: «3,3␤» Here the Capture contained the container pointed to by $a and the value of the result of the expression ++$a. Since the Capture must be reified before &say can use it, the ++$a may happen before &say looks inside the container in $a (and before the List is created with the two terms), and so it may already be incremented. Instead, use an expression that produces a value when you want a value. my $a = 2; say join ",", (+$a, ++$a); # OUTPUT: «2,3␤» Or even simpler my $a = 2; say "$a, {++$a}"; # OUTPUT: «2, 3␤» The same happens in this case: my @arr; my ($a, $b) = (1,1); for ^5 { ($a, $b) = ($b, $a + $b); @arr.push: ($a, $b); say @arr }; Outputs «[(1 2)]␤[(2 3) (2 3)]␤[(3 5) (3 5) (3 5)]␤.... $a and $b are not reified until say is called; the value they have at that precise moment is the one printed. To avoid that, decontainerize the values or take them out of the variable in some way before using them. my @arr; my ($a, $b) = (1,1); for ^5 { ($a, $b) = ($b, $a + $b); @arr.push: ($a.item, $b.item); say @arr }; With item, the container will be evaluated in item context, its value extracted, and the desired outcome achieved. Cool tricks Perl 6 includes a Cool class, which provides some of the DWIM behaviors we got used to by coercing arguments when necessary. However, DWIM is never perfect. Especially with Lists, which are Cool, there are many methods that will not do what you probably think they do, including contains, starts-with or index. Please see some examples in the section below. Strings are not Lists, so beware indexing In Perl 6, strings are not lists of characters. One cannot iterate over them or index into them as you can with lists, despite the name of the .index routine. Lists become strings, so beware .index()ing List inherits from Cool, which provides access to .index.
Because of the way .index coerces a List into a Str, this can sometimes appear to be returning the index of an element in the list, but that is not how the behavior is defined. my @a = <a b c d>; say @a.index(‘a’); # 0 say @a.index('c'); # 4 -- not 2! say @a.index('b c'); # 2 -- not undefined! say @a.index(<a b>); # 0 -- not undefined! These same caveats apply to .rindex. Lists become strings, so beware .contains() Similarly, .contains does not look for elements in the list. my @menu = <hamburger fries milkshake>; say @menu.contains('hamburger'); # True say @menu.contains('hot dog'); # False say @menu.contains('milk'); # True! say @menu.contains('er fr'); # True! say @menu.contains(<es mi>); # True! If you actually want to check for the presence of an element, use the (cont) operator for single elements, and the superset and strict superset operators for multiple elements. my @menu = <hamburger fries milkshake>; say @menu (cont) 'fries'; # True say @menu (cont) 'milk'; # False say @menu (>) <hamburger fries>; # True say @menu (>) <milkshake fries>; # True (! NB: order doesn't matter) If you are doing a lot of element testing, you may be better off using a Set. Numeric literals are parsed before coercion Experienced programmers will probably not be surprised by this, but numeric literals will be parsed into their numeric value before being coerced into a string, which may create nonintuitive results. say 0xff.contains(55); # True say 0xff.contains(0xf); # False say 12_345.contains("23"); # True say 12_345.contains("2_"); # False Getting a random item from a List A common task is to retrieve one or more random elements from a collection, but List.rand isn't the way to do that. Cool provides rand, but that first coerces the List into the number of items in the list, and returns a random real number between 0 and that value. To get random elements, see pick and roll. my @colors = <red orange yellow green blue indigo violet>; say @colors.rand; # 2.21921955680514 say @colors.pick; # orange say @colors.roll; # blue say @colors.pick(2); # yellow violet (cannot repeat) say @colors.roll(3); # red green red (can repeat) Lists numify to their number of elements in numeric context You want to check whether a number is divisible by any of a set of numbers: say 42 %% <11 33 88 55 111 20325>; # OUTPUT: «True␤» What? There's no single number 42 should be divisible by. However, that list has 6 elements, and 42 is divisible by 6. That's why the output is true. In this case, you should turn the List into a Junction: say 42 %% <11 33 88 55 111 20325>.any; # OUTPUT: «any(False, False, False, False, False, False)␤» which clearly reveals that the divisibility test fails for every number in the list, each of which is numified separately. Arrays Referencing the last element of an array In some languages one could reference the last element of an array by asking for the "-1th" element of the array, e.g.: my @array = qw{victor alice bob charlie eve}; say @array[-1]; # OUTPUT: «eve␤» In Perl 6 it is not possible to use negative subscripts; however, the same thing can be achieved by using a function, namely *-1.
Thus accessing the last element of an array becomes: my @array = qw{victor alice bob charlie eve}; say @array[*-1]; # OUTPUT: «eve␤» Yet another way is to utilize the array's tail method: my @array = qw{victor alice bob charlie eve}; say @array.tail; # OUTPUT: «eve␤» say @array.tail(2); # OUTPUT: «(charlie eve)␤» Typed array parameters Quite often new users will happen to write something like: sub foo(Array @a) { ... } ...before they have gotten far enough in the documentation to realize that this is asking for an Array of Arrays. To say that @a should only accept Arrays, use instead: sub foo(@a where Array) { ... } It is also common to expect this to work, when it does not: sub bar(Int @a) { 42.say }; bar([1, 2, 3]); # expected Positional[Int] but got Array The problem here is that [1, 2, 3] is not an Array[Int], it is a plain old Array that just happens to have Ints in it. To get it to work, the argument must also be an Array[Int]. my Int @b = 1, 2, 3; bar(@b); # OUTPUT: «42␤» bar(Array[Int].new(1, 2, 3)); This may seem inconvenient, but on the upside it moves the type-check on what is assigned to @b to where the assignment happens, rather than requiring every element to be checked on every call. Using «» quoting when you don't need it This trap can be seen in different varieties. Here are some of them: my $x = ‘hello’; my $y = ‘foo bar’; my %h = $x => 42, $y => 99; say %h«$x»; # ← WRONG; assumption that $x has no whitespace say %h«$y»; # ← WRONG; splits ‘foo bar’ by whitespace say %h«"$y"»; # ← KINDA OK; it works but there is no good reason to do that say %h{$y}; # ← RIGHT; this is what should be used run «touch $x»; # ← WRONG; assumption that only one file will be created run «touch $y»; # ← WRONG; will touch file ‘foo’ and ‘bar’ run «touch "$y"»; # ← WRONG; better, but has a different issue if $y starts with - run «touch -- "$y"»; # ← KINDA OK; it works but there is no good enough reason to do that run ‘touch’, ‘--’, $y; # ← RIGHT; explicit and *always* correct run <touch -->, $y; # ← RIGHT; < > are OK, this is short and correct Basically, «» quoting is only safe to use if you remember to always quote your variables. The problem is that it inverts the default behavior to an unsafe variant, so just by forgetting some quotes you risk introducing either a bug or maybe even a security hole. To stay on the safe side, refrain from using «». Strings Quotes and interpolation Interpolation in string literals can be too clever for your own good. "$foo<html></html>" # Perl 6 understands that as: "$foo{'html'}{'/html'}" "$foo(" ~ @args ~ ")" # Perl 6 understands that as: "$foo(' ~ @args ~ ')" You can avoid those problems using non-interpolating single quotes and switching to more liberal interpolation with the \qq[] escape sequence: my $a = 1; say '\qq[$a]()$b()'; # OUTPUT: «1()$b()␤» Another alternative is to use the Q:c quoter, and use code blocks {} for all interpolation: my $a = 1; say Q:c«{$a}()$b()»; # OUTPUT: «1()$b()␤» Strings are not iterable There are methods that Str inherits from Any that work on iterables like lists. Iterators on strings contain one element that is the whole string. To use list-based methods like sort, reverse, you need to convert the string into a list first. say "cba".sort; # OUTPUT: «(cba)␤» say "cba".comb.sort.join; # OUTPUT: «abc␤» .chars gets the number of graphemes, not Codepoints In Perl 6, .chars returns the number of graphemes, or user-visible characters. These graphemes could be made up of a letter plus an accent, for example.
If you need the number of codepoints, you should use .codes. If you need the number of bytes when encoded as UTF8, you should use .encode.bytes, which encodes the string as UTF8 and then counts the bytes. say "\c[LATIN SMALL LETTER J WITH CARON, COMBINING DOT BELOW]"; # OUTPUT: «ǰ̣» say 'ǰ̣'.codes; # OUTPUT: «2» say 'ǰ̣'.chars; # OUTPUT: «1» say 'ǰ̣'.encode.bytes; # OUTPUT: «4» For more information on how strings work in Perl 6, see the Unicode page. All text is normalized by default Perl 6 normalizes all text into Unicode NFC form (Normalization Form Canonical). Filenames are the only text not normalized by default. If you are expecting your strings to maintain a byte-for-byte representation of the original, you need to use UTF8-C8 when reading or writing to any filehandles. Allomorphs generally follow numeric semantics The Str "0" is True, while the Numeric 0 is False. So what's the Bool value of the allomorph <0>? In general, allomorphs follow Numeric semantics, so the ones that numerically evaluate to zero are False: say so <0>; # OUTPUT: «False␤» say so <0e0>; # OUTPUT: «False␤» say so <0.0>; # OUTPUT: «False␤» To force the comparison to be done for the Stringy part of the allomorph, use the prefix ~ operator or the Str method to coerce the allomorph to Str, or use the chars routine to test whether the allomorph has any length: say so ~<0>; # OUTPUT: «True␤» say so <0>.Str; # OUTPUT: «True␤» say so chars <0>; # OUTPUT: «True␤» Case-insensitive comparison of strings In order to do case-insensitive comparison, you can use .fc (fold-case). The problem is that people tend to use .lc or .uc, and it does seem to work within the ASCII range, but fails on other characters. This is not just a Perl 6 trap; the same applies to other languages. say ‘groß’.lc eq ‘GROSS’.lc; # ← WRONG; False say ‘groß’.uc eq ‘GROSS’.uc; # ← WRONG; True, but that's just luck say ‘groß’.fc eq ‘GROSS’.fc; # ← RIGHT; True If you are working with regexes, then there is no need to use .fc and you can use the :i (:ignorecase) adverb instead. Pairs Constants on the left-hand side of pair notation Consider this code: enum Animals <Dog Cat>; my %h := :{ Dog => 42 }; say %h{Dog}; # OUTPUT: «(Any)␤» The :{ … } syntax is used to create object hashes. The intention of whoever wrote that code was to create a hash with Enum objects as keys (and say %h{Dog} attempts to get a value using the Enum object to perform the lookup). However, that's not how pair notation works. For example, in Dog => 42 the key will be a Str. That is, it doesn't matter if there is a constant, or an enumeration with the same name. The pair notation will always use the left-hand side as a string literal, as long as it looks like an identifier. To avoid this, use (Dog) => 42 or ::Dog => 42. Scalar values within Pair When dealing with Scalar values, the Pair holds the container to the value. This means that it is possible to reflect changes to the Scalar value from outside the Pair: my $v = 'value A'; my $pair = Pair.new( 'a', $v ); $pair.say; # OUTPUT: a => value A $v = 'value B'; $pair.say; # OUTPUT: a => value B Use the method freeze to force the removal of the Scalar container from the Pair. For more details see the documentation about Pair.
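For example, a minimal sketch continuing the snippet above:

my $v    = 'value A';
my $pair = Pair.new('a', $v);
$pair.freeze;   # detaches the value from the $v container
$v = 'value B';
$pair.say;      # OUTPUT: «a => value A␤»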
Sets, bags and mixes Sets, bags and mixes do not have a fixed order When iterating over this kind of object, an order is not defined. my $set = <a b c>.Set; .say for $set.list; # OUTPUT: «a => True␤c => True␤b => True␤» # OUTPUT: «a => True␤c => True␤b => True␤» # OUTPUT: «c => True␤b => True␤a => True␤» Every iteration might (and will) yield a different order, so you cannot trust a particular sequence of the elements of a set. If order does not matter, just use them that way. If it does, use sort: my $set = <a b c>.Set; .say for $set.list.sort; # OUTPUT: «a => True␤b => True␤c => True␤» In general, sets, bags and mixes are unordered, so you should not depend on them having a particular order. Operators Some operators commonly shared among other languages were repurposed in Perl 6 for other, more common, things: Junctions The ^, |, and & are not bitwise operators, they create Junctions. The corresponding bitwise operators in Perl 6 are: +^, +|, +& for integers and ?^, ?|, ?& for booleans. Exclusive sequence operator Lavish use of whitespace helps readability, but keep in mind infix operators cannot have any whitespace in them. One such operator is the sequence operator that excludes the right point: ...^ (or its Unicode equivalent …^). say 1... ^5; # OUTPUT: «(1 0 1 2 3 4)␤» say 1...^5; # OUTPUT: «(1 2 3 4)␤» If you place whitespace between the ellipsis (…) and the caret (^), it's no longer a single infix operator, but an infix inclusive sequence operator (…) and a prefix Range operator (^). Iterables are valid endpoints for the sequence operator, so the result you'll get might not be what you expected. String ranges/Sequences In some languages, using strings as range end points considers the entire string when figuring out what the next string should be, loosely treating the strings as numbers in a large base. Here's the Perl 5 version: say join ", ", "az".."bc"; # OUTPUT: «az, ba, bb, bc␤» Such a range in Perl 6 will produce a different result, where each letter will be ranged to a corresponding letter in the end point, producing more complex sequences: say join ", ", "az".."bc"; #{ OUTPUT: « az, ay, ax, aw, av, au, at, as, ar, aq, ap, ao, an, am, al, ak, aj, ai, ah, ag, af, ae, ad, ac, bz, by, bx, bw, bv, bu, bt, bs, br, bq, bp, bo, bn, bm, bl, bk, bj, bi, bh, bg, bf, be, bd, bc ␤»} say join ", ", "r2".."t3"; # OUTPUT: «r2, r3, s2, s3, t2, t3␤» To achieve simpler behavior, similar to the Perl 5 example above, use a sequence operator that calls the .succ method on the starting string: say join ", ", ("az", *.succ ... "bc"); # OUTPUT: «az, ba, bb, bc␤» Topicalizing operators The smartmatch operator ~~ and andthen set the topic $_ to their left-hand side. In conjunction with implicit method calls on the topic this can lead to surprising results. my &method = { note $_; $_ }; $_ = 'object'; say .&method; # OUTPUT: «object␤object␤» say 'topic' ~~ .&method; # OUTPUT: «topic␤True␤» In many cases flipping the method call to the LHS will work. my &method = { note $_; $_ }; $_ = 'object'; say .&method; # OUTPUT: «object␤object␤» say .&method ~~ 'topic'; # OUTPUT: «object␤False␤» Fat arrow and constants The fat arrow operator => will turn words on its left-hand side to Str without checking the scope for constants or \-sigiled variables. Use explicit scoping to get what you mean.
constant V = 'x'; my %h = V => 'oi‽', ::V => 42; say %h.perl; # OUTPUT: «{:V("oi‽"), :x(42)}␤» Infix operator assignment Infix operators, both built-in and user-defined, can be combined with the assignment operator as this addition example demonstrates: my $x = 10; $x += 20; say $x; # OUTPUT: «30␤» For any given infix operator op, L op= R is equivalent to L = L op R (where L and R are the left and right arguments, respectively). This means that the following code may not behave as expected: my @a = 1, 2, 3; @a += 10; say @a; # OUTPUT: «[13]␤» Coming from a language like C++, this might seem odd. It is important to bear in mind that += isn't defined as a method on the left-hand argument (here the @a array) but is simply shorthand for: my @a = 1, 2, 3; @a = @a + 10; say @a; # OUTPUT: «[13]␤» Here @a is assigned the result of adding @a (which has three elements) and 10; 13 is therefore placed in @a. Use the hyper form of the assignment operators instead: my @a = 1, 2, 3; @a »+=» 10; say @a; # OUTPUT: «[11 12 13]␤» Regexes <{$x}> vs $($x): Implicit EVAL Sometimes you may need to match a generated string in a regex. This can be done using the $(…) or <{…}> syntax: my $x = ‘ailemac’; say ‘I ♥ camelia’ ~~ / $($x.flip) /; # OUTPUT: «「camelia」␤» say ‘I ♥ camelia’ ~~ / <{$x.flip}> /; # OUTPUT: «「camelia」␤» However, the latter only works sometimes. Internally <{…}> EVAL-s the given string inside an anonymous regex, while $(…) lexically interpolates the given string. So <{…}> immediately breaks with more complicated inputs. For example: my $x = ‘ailemac#’; say ‘I ♥ #camelia’ ~~ / $($x.flip) /; # OUTPUT: «「#camelia」␤» # ⚠ ↓↓ WRONG ↓↓ ⚠ say ‘I ♥ #camelia’ ~~ / <{$x.flip}> /; # OUTPUT: # ===SORRY!=== # Regex not terminated. # at EVAL_0:1 # ------> anon regex { #camelia}⏏<EOL> # Malformed regex # at EVAL_0:1 # ------> anon regex { #camelia}⏏<EOL> # expecting any of: # infix stopper Therefore, try not to use <{}> unless you really need EVAL. Note that even though EVAL is normally considered unsafe, in this case it is restricted to a set of safe operations (which is why it works without the MONKEY-SEE-NO-EVAL pragma). In theory, careless use of <{}> will only result in an exception being thrown, and should not introduce security issues. | vs ||: which branch will win To match one of several possible alternatives, || or | will be used, but they are quite different. When multiple alternations match, then for those separated by ||, the first matching alternation wins; for those separated by |, the winner is decided by the longest-token-matching (LTM) strategy. See also: documentation on || and documentation on |. For simple regexes, just using || instead of | will get you familiar semantics, but if writing grammars, then it's useful to learn about LTM and declarative prefixes and prefer |. Avoid mixing the two in one regex; when you have to, add parentheses and make sure you know how the LTM strategy works, so the code does what you want. The trap typically arises when you try to mix both | and || in the same regex: say 42 ~~ / [ 0 || 42 ] | 4/; # OUTPUT: «「4」␤» say 42 ~~ / [ 42 || 0 ] | 4/; # OUTPUT: «「42」␤» The code above may seem like it is producing a wrong result, but the implementation is actually right: for LTM purposes only the first branch of an interior || contributes to the declarative prefix, so in the first regex the | alternation effectively weighs 0 against 4 and the 4 branch matches, while in the second it weighs 42 against 4 and 42 wins as the longer token.
A related trap involves whitespace in subroutine calls. Given a function of one parameter, say sub bar($a) { say "one arg: $a" }, with $a set to 1, calling it with a space after the name still works, because the parenthesized expression is simply passed as the single argument: bar ($a); # okay: one arg: 1 Now declare a function of two parameters: sub foo($a, $b) { say "two args: $a, $b" } Execute it with and without the space after the name: foo($a, $b); # okay: two args: 1, 2 foo ($a, $b); # FAIL: Too few positionals passed; expected 2 arguments but got 1 The lesson is: "be careful with spaces following sub and method names when using the function call format." As a general rule, good practice might be to avoid the space after a function name when using the function call format. Note that there are clever ways to eliminate the error with the function call format and the space, but that is bordering on hackery and will not be mentioned here. For more information, consult Functions. Finally, note that, currently, when declaring a function, whitespace may be used between the function or method name and the parentheses surrounding the parameter list without problems. Named parameters Many built-in subroutines and method calls accept named parameters, and your own code may accept them as well, but be sure the arguments you pass when calling your routines are actually named parameters: sub foo($a, :$b) { ... } foo(1, 'b' => 2); # FAIL: Too many positionals passed; expected 1 argument but got 2 What happened? That second argument is not a named parameter argument, but a Pair passed as a positional argument. If you want a named parameter it has to look like a name to Perl: foo(1, b => 2); # okay foo(1, :b(2)); # okay foo(1, :b<it>); # okay my $b = 2; foo(1, :b($b)); # okay, but redundant foo(1, :$b); # okay # Or even... my %arg = 'b' => 2; foo(1, |%arg); # okay too That last one may be confusing: the | prefix on a Hash is a special compiler construct indicating you want to use the contents of the variable as arguments, which for hashes means treating them as named arguments. If you really do want to pass them as pairs, you should use a List or Capture instead: my $list = ('b' => 2),; # this is a List containing a single Pair foo(|$list, :$b); # okay: we passed the pair 'b' => 2 to the first argument foo(1, |$list); # FAIL: Too many positionals passed; expected 1 argument but got 2 foo(1, |$list.Capture); # OK: .Capture call converts all Pair objects to named args in a Capture my $cap = \('b' => 2); # a Capture with a single positional value foo(|$cap, :$b); # okay: we passed the pair 'b' => 2 to the first argument foo(1, |$cap); # FAIL: Too many positionals passed; expected 1 argument but got 2 A Capture is usually the best option for this, as it works exactly like the usual capturing of routine arguments during a regular call. The nice thing about the distinction here is that it gives the developer the option of passing pairs as either named or positional arguments, which can be handy in various instances. Argument count limit While it is typically unnoticeable, there is a backend-dependent argument count limit. Any code that does flattening of arbitrarily sized arrays into arguments won't work if there are too many elements. my @a = 1 xx 9999; my @b; @b.push: |@a; say @b.elems # OUTPUT: «9999␤» my @a = 1 xx 999999; my @b; @b.push: |@a; # OUTPUT: «Too many arguments in flattening array.␤ in block <unit> at <tmp> line 1␤␤» Avoid this trap by rewriting the code so that there is no flattening. In the example above, you can replace push with append. This way, no flattening is required because the array can be passed as is.
my @a = 1 xx 999999; my @b; @b.append: @a; say @b.elems # OUTPUT: «999999␤» Phasers and implicit return sub returns-ret () { CATCH { default {} } "ret"; } sub doesn't-return-ret () { "ret"; CATCH { default {} } } say returns-ret; # OUTPUT: «ret» say doesn't-return-ret; # BAD: outputs «Nil» and a warning «Useless use of constant string "ret" in sink context (line 13)» The code for returns-ret and doesn't-return-ret might look exactly the same, since in principle it does not matter where the CATCH block goes. However, a block is an object and the last object in a sub will be returned, so doesn't-return-ret will return Nil; besides, since "ret" is now in sink context, it will issue a warning. In case you want to place phasers last for conventional reasons, use the explicit form of return. sub explicitly-return-ret () { return "ret"; CATCH { default {} } } Input and output Closing open filehandles and pipes Unlike some other languages, Perl 6 does not use reference counting, and so the filehandles are NOT closed when they go out of scope. You have to explicitly close them, either by using the close routine or by using the :close argument that several of IO::Handle's methods accept. See IO::Handle.close for details. The same rules apply to IO::Handle's subclass IO::Pipe, which is what you operate on when reading from a Proc you get with the routines run and shell. The caveat applies to the IO::CatHandle type as well, though not as severely. See IO::CatHandle.close for details. IO::Path stringification Partly for historical reasons and partly by design, an IO::Path object stringifies without considering its CWD attribute, which means that if you chdir and then stringify an IO::Path, or stringify an IO::Path with a custom $!CWD attribute, the resultant string won't reference the original filesystem object: with 'foo'.IO { .Str.say; # OUTPUT: «foo␤» .relative.say; # OUTPUT: «foo␤» chdir "/tmp"; .Str.say; # OUTPUT: «foo␤» .relative.say # OUTPUT: «../home/camelia/foo␤» } # Deletes ./foo, not /bar/foo The easy way to avoid this issue is to not stringify an IO::Path object at all. Core routines that work with paths can take an IO::Path object, so you don't need to stringify the paths. If you do have a case where you need a stringified version of an IO::Path, use the absolute or relative methods to stringify it into an absolute or relative path, respectively. If you are facing this issue because you use chdir in your code, consider rewriting it in a way that does not involve changing the current directory. For example, you can pass the cwd named argument to run without having to use chdir around it. Splitting the input data into lines There is a difference between using .lines on IO::Handle and on a Str. The trap arises if you start assuming that both split data the same way. say $_.perl for $*IN.lines # .lines called on IO::Handle # OUTPUT: # "foox" # "fooy\rbar" # "fooz" As you can see in the example above, there was a line which contained \r (the “carriage return” control character). However, the input is split strictly by \n, so \r was kept as part of the string. On the other hand, Str.lines attempts to be “smart” about processing data from different operating systems. Therefore, it will split by all possible variations of a newline. say $_.perl for $*IN.slurp(:bin).decode.lines # .lines called on a Str # OUTPUT: # "foox" # "fooy" # "bar" # "fooz" The rule is quite simple: use IO::Handle.lines when working with programmatically generated output, and Str.lines when working with user-written texts.
Use $data.split(“\n”) in cases where you need the behavior of IO::Handle.lines but the original IO::Handle is not available. RT#132154 Note that if you really want to slurp the data first, then you will have to use .IO.slurp(:bin).decode.split(“\n”). Notice how we use :bin to prevent it from doing the decoding, only to call .decode later anyway. All that is needed because .slurp assumes that you are working with text and therefore it attempts to be smart about newlines. RT#131923 If you are using Proc::Async, then there is currently no easy way to make it split data the right way. You can try reading the whole output and then using Str.split (not viable if you are dealing with large data) or writing your own logic to split the incoming data the way you need. The same applies if your data is null-separated. Proc::Async and print When using Proc::Async you should not assume that .print (or any other similar method) is synchronous. The biggest issue of this trap is that you will likely not notice the problem by running the code once, so it may cause a hard-to-detect intermittent fail. Here is an example that demonstrates the issue: loop { my $proc = Proc::Async.new: :w, ‘head’, ‘-n’, ‘1’; my $got-something; react { whenever $proc.stdout.lines { $got-something = True } whenever $proc.start { die ‘FAIL!’ unless $got-something } $proc.print: “one\ntwo\nthree\nfour”; $proc.close-stdin; } say $++; } And the output it may produce: 0 1 2 3 An operation first awaited: in block <unit> at print.p6 line 4 Died with the exception: FAIL! in block at print.p6 line 6 Resolving this is easy because .print returns a Promise that you can await. The solution is even more beautiful if you are working in a react block: whenever $proc.print: “one\ntwo\nthree\nfour” { $proc.close-stdin; } Using .stdout without .lines The .stdout method of Proc::Async returns a supply that emits chunks of data, not lines. The trap is that sometimes people assume it gives lines right away. my $proc = Proc::Async.new(‘cat’, ‘/usr/share/dict/words’); react { whenever $proc.stdout.head(1) { .say } # ← WRONG (most likely) whenever $proc.start { } } The output is clearly not just 1 line: A A's AMD AMD's AOL AOL's Aachen Aachen's Aaliyah Aaliyah's Aaron Aaron's Abbas Abbas's Abbasid Abbasid's Abbott Abbott's Abby If you want to work with lines, then use $proc.stdout.lines. If you're after the whole output, then something like this should do the trick: whenever $proc.stdout { $out ~= $_ }. Exception handling Sunk Proc Some methods return a Proc object. If it represents a failed process, Proc itself won't be exception-like, but sinking it will cause an X::Proc::Unsuccessful exception to be thrown. That means this construct will throw, despite the try in place: try run("perl6", "-e", "exit 42"); say "still alive"; # OUTPUT: «The spawned process exited unsuccessfully (exit code: 42)␤» This is because try receives a Proc and returns it, at which point it sinks and throws. Explicitly sinking it inside the try avoids the issue and ensures the exception is thrown inside the try: try sink run("perl6", "-e", "exit 42"); say "still alive"; # OUTPUT: «still alive␤» If you're not interested in catching any exceptions, then use an anonymous variable to keep the returned Proc in; this way it'll never sink: $ = run("perl6", "-e", "exit 42"); say "still alive"; # OUTPUT: «still alive␤»
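If you want to inspect the result rather than merely avoid the exception, keep the returned Proc and query it; Proc exposes the exit status via its exitcode method (a small sketch):

my $proc = run "perl6", "-e", "exit 42";   # assigned, so it is never sunk
say $proc.exitcode;                        # OUTPUT: «42␤»
say "still alive";                         # OUTPUT: «still alive␤»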
Using shortcuts The ^ twigil Using the ^ twigil can save a fair amount of time and space when writing out small blocks of code. As an example: for 1..8 -> $a, $b { say $a + $b; } can be shortened to just for 1..8 { say $^a + $^b; } The trouble arises when a person wants to use more complex names for the variables, instead of just one letter. The ^ twigil lets the positional variables appear out of order and be named whatever you want, but it assigns values based on the variables' Unicode ordering. In the above example, we can have $^a and $^b switch places, and those variables will keep their positional values. This is because the Unicode character 'a' comes before the character 'b'. For example: # In order sub f1 { say "$^first $^second"; } f1 "Hello", "there"; # OUTPUT: «Hello there␤» # Out of order sub f2 { say "$^second $^first"; } f2 "Hello", "there"; # OUTPUT: «there Hello␤» Because the variables can be named anything, this can cause some problems if you are not accustomed to how Perl 6 handles these variables. # BAD NAMING: alphabetically four comes first and gets value 1 in it: for 1..4 { say "$^one $^two $^three $^four"; } # OUTPUT: «2 4 3 1␤» # GOOD NAMING: variables' naming makes it clear how they sort alphabetically: for 1..4 { say "$^a $^b $^c $^d"; } # OUTPUT: «1 2 3 4␤» Using » and map interchangeably While » may look like a shorter way to write map, they differ in some key aspects. First, » includes a hint to the compiler that it may autothread the execution; thus, if you're using it to call a routine that produces side effects, those side effects may be produced out of order (the result of the operator is kept in order, however). Also, if the routine being invoked accesses a resource, there's the possibility of a race condition, as multiple invocations may happen simultaneously, from different threads. This is an actual output from Rakudo 2015.09: <a b c d>».say # OUTPUT: «d␤b␤c␤a␤» Second, » checks the nodality of the routine being invoked and, based on that, will use either deepmap or nodemap to map over the list, which can be different from how a map call would map over it: say ((1, 2, 3), [^4], '5')».Numeric; # OUTPUT: «((1 2 3) [0 1 2 3] 5)␤» say ((1, 2, 3), [^4], '5').map: *.Numeric; # OUTPUT: «(3 4 5)␤» The bottom line is that map and » are not interchangeable, but using one instead of the other is OK as long as you understand the differences. Word splitting in « » Keep in mind that « » performs word splitting similarly to how shells do it, so many shell pitfalls apply here as well (especially when used in combination with run): my $file = ‘--my arbitrary filename’; run ‘touch’, ‘--’, $file; # RIGHT run <touch -->, $file; # RIGHT run «touch -- "$file"»; # RIGHT but WRONG if you forget quotes run «touch -- $file»; # WRONG; touches ‘--my’, ‘arbitrary’ and ‘filename’ run ‘touch’, $file; # WRONG; error from touch run «touch "$file"»; # WRONG; error from touch Note that -- is required for many programs to disambiguate between command-line arguments and filenames that begin with hyphens. Scope Using a once block The once block is a block of code that will only run once when its parent block is run. As an example: my $var = 0; for 1..10 { once { $var++; } } say "Variable = $var"; # OUTPUT: «Variable = 1␤» This functionality also applies to other code blocks like sub and while, not just for loops. Problems arise, though, when trying to nest once blocks inside of other code blocks: my $var = 0; for 1..10 { do { once { $var++; } } } say "Variable = $var"; # OUTPUT: «Variable = 10␤» In the above example, the once block was nested inside of a code block which was inside of a for loop code block.
This causes the once block to run multiple times, because the once block uses state variables to determine whether it has run previously. This means that if the parent code block goes out of scope, then the state variable the once block uses to keep track of whether it has run previously goes out of scope as well. This is why once blocks and state variables can cause some unwanted behavior when buried within more than one code block. If you want something that will emulate the functionality of a once block but still work when buried a few code blocks deep, we can manually build the functionality of a once block. Using the above example, we can change it so that it will only run once, even when inside the do block, by changing the scope of the state variable:

my $var = 0;
for 1..10 {
    state $run-code = True;
    do {
        if ($run-code) {
            $run-code = False;
            $var++;
        }
    }
}
say "Variable = $var"; # OUTPUT: «Variable = 1␤»

In this example, we essentially build a once block by hand: we make a state variable called $run-code at the highest level that will be run more than once, then check whether $run-code is True using a regular if. If the variable $run-code is True, then we make the variable False and continue with the code that should only be completed once. The main difference between using a state variable like the above example and using a regular once block is the scope the state variable is in. The scope for the state variable created by the once block is the same as where you put the block (imagine that the word 'once' is replaced with a state variable and an if that looks at the variable). The example above using state variables works because the variable is at the highest scope that will be repeated; whereas the example that has a once block inside of a do made the variable within the do block, which is not the highest scope that is repeated. Using a once block inside a class method will cause the once state to carry across all instances of that class. For example:

class A { method sayit() { once say 'hi' } }
my $a = A.new;
$a.sayit; # OUTPUT: «hi␤»
my $b = A.new;
$b.sayit; # nothing

LEAVE phaser and exit

Using a LEAVE phaser to perform graceful resource termination is a common pattern, but it does not cover the case when the program is stopped with exit. The following nondeterministic example should demonstrate the complications of this trap:

my $x = say ‘Opened some resource’;
LEAVE say ‘Closing the resource gracefully’ with $x;
exit 42 if rand < ⅓;                                    # ① 「exit」 is bad
die ‘Dying because of unhandled exception’ if rand < ½; # ② 「die」 is ok
# fallthru ③

There are three possible results:

① Opened some resource

② Opened some resource
   Closing the resource gracefully
   Dying because of unhandled exception
     in block <unit> at print.p6 line 5

③ Opened some resource
   Closing the resource gracefully

A call to exit is part of normal operation for many programs, so beware of unintentional combinations of LEAVE phasers and exit calls.
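One way to also cover the exit case — a minimal sketch, not the only option — is to register the cleanup in an END phaser as well, since END phasers do run when the program terminates via exit:

my $x = say ‘Opened some resource’;
END say ‘Closing the resource at program end’ with $x; # runs even after exit
exit 42;
# OUTPUT: «Opened some resource␤Closing the resource at program end␤»

The trade-off is that an END phaser runs at program end rather than when the enclosing block is left, so it only suits resources that should live for the whole run.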
LEAVE phaser may run sooner than you think

Parameter binding is executed when we're "inside" the routine's block, which means a LEAVE phaser will run when we leave that block, even if we leave it because parameter binding failed after being given the wrong arguments:

sub foo(Int) {
    my $x = 42;
    LEAVE say $x.Int; # ← WRONG; assumes that $x is set
}
say foo rand; # OUTPUT: «No such method 'Int' for invocant of type 'Any'␤»

A simple way to avoid this issue is to declare your sub or method a multi, so the candidate is eliminated during dispatch and the code never gets to bind anything inside the sub, thus never entering the routine's body:

multi foo(Int) {
    my $x = 42;
    LEAVE say $x.Int;
}
say foo rand; # OUTPUT: «Cannot resolve caller foo(Num); none of these signatures match: (Int)␤»

Another alternative is placing the LEAVE into another block (assuming it's appropriate for it to be executed when that block is left, not the routine's body):

sub foo(Int) {
    my $x = 42;
    { LEAVE say $x.Int; }
}
say foo rand; # OUTPUT: «Type check failed in binding to parameter '<anon>'; expected Int but got Num (0.7289418947969465e0)␤»

You can also ensure the LEAVE can be executed even if the routine is left due to failed argument binding. In our example, we check that $x is defined before doing anything with it:

sub foo(Int) {
    my $x = 42;
    LEAVE $x andthen .Int.say;
}
say foo rand; # OUTPUT: «Type check failed in binding to parameter '<anon>'; expected Int but got Num (0.8517160389079508e0)␤»

Grammars

Using regexes within grammar's actions

grammar will-fail { token TOP {^ <word> $} token word { \w+ } }
class will-fail-actions {
    method TOP ($/) { my $foo = ~$/; say $foo ~~ /foo/; }
}

will fail with Cannot assign to a readonly variable ($/) or a value on method TOP. The problem here is that regular expressions also affect $/. Since it is in TOP's signature, it is a read-only variable, which is what produces the error. You can safely either use another variable in the signature or add is copy, this way:

method TOP ($/ is copy) { my $foo = ~$/; my $v = $foo ~~ /foo/; }

Using certain names for rules/tokens/regexes

Grammars are actually a type of class.

grammar G {};
say G.^mro; # OUTPUT: «((G) (Grammar) (Match) (Capture) (Cool) (Any) (Mu))␤»

^mro prints the class hierarchy of this empty grammar, showing all the superclasses. And these superclasses have their very own methods. Defining a method in that grammar might clash with the ones inhabiting the class hierarchy:

grammar g { token TOP { <item> }; token item { 'defined' } };
say g.parse('defined');
# OUTPUT: «Too many positionals passed; expected 1 argument but got 2␤ in regex item at /tmp/grammar-clash.p6 line 3␤ in regex TOP at /tmp/grammar-clash.p6 line 2␤ in block <unit> at /tmp/grammar-clash.p6 line 5»

item seems innocuous enough, but it is a sub defined in class Mu. The message is a bit cryptic and totally unrelated to that fact, but that is why this is listed as a trap. In general, all subs defined in any part of the hierarchy are going to cause problems; some methods will too. For instance, CREATE, take and defined (which are defined in Mu). In general, multi methods and simple methods will not cause any problem, but it might not be a good practice to use their names for rules. Also avoid phaser names for rule/token/regex names: TWEAK, BUILD and BUILD-ALL will throw another kind of exception if you do that: Cannot find method 'match': no method cache and no .^find_method, once again only slightly related to what is actually going on.
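The straightforward workaround — a sketch; the grammar and token names g2 and entry here are made up — is simply to pick names that do not collide with anything in that hierarchy:

grammar g2 { token TOP { <entry> }; token entry { 'defined' } };
say g2.parse('defined');
# OUTPUT: «「defined」␤ entry => 「defined」␤»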
Unfortunate generalization

:exists with more than one key

Let's say you have a hash and you want to use :exists on more than one element:

my %h = a => 1, b => 2;
say ‘a exists’ if %h<a>:exists;  # ← OK; True
say ‘y exists’ if %h<y>:exists;  # ← OK; False
say ‘Huh‽’ if %h<x y>:exists;    # ← WRONG; returns a 2-item list

Did you mean “if any of them exists”, or did you mean that all of them should exist? Use an any or all Junction to clarify:

my %h = a => 1, b => 2;
say ‘x or y’ if any %h<x y>:exists;       # ← RIGHT (any); False
say ‘a, x or y’ if any %h<a x y>:exists;  # ← RIGHT (any); True
say ‘a, x and y’ if all %h<a x y>:exists; # ← RIGHT (all); False
say ‘a and b’ if all %h<a b>:exists;      # ← RIGHT (all); True

The reason why it is always True (without using a junction) is that it returns a list with Bool values for each requested lookup. Non-empty lists always give True when you Boolify them, so the check always succeeds no matter what keys you give it.

Using the […] metaoperator with a list of lists

Every now and then, someone gets the idea that they can use [Z] to create the transpose of a list of lists:

my @matrix = <X Y>, <a b>, <1 2>;
my @transpose = [Z] @matrix; # ← WRONG; but so far so good ↙
say @transpose;              # [(X a 1) (Y b 2)]

And everything works fine, until you get an input @matrix with exactly one row (child list):

my @matrix = <X Y>,;
my @transpose = [Z] @matrix; # ← WRONG; ↙
say @transpose;              # [(X Y)] – not the expected transpose [(X) (Y)]

This happens partly because of the single argument rule, and there are other cases when this kind of generalization may not work.

Using [~] for concatenating a list of blobs

The ~ infix operator can be used to concatenate Strs or Blobs. However, an empty list will always be reduced to an empty Str. This is due to the fact that, in the presence of a list with no elements, the reduction metaoperator returns the identity element for the given operator. The identity element for ~ is an empty string, regardless of the kind of elements the list could be populated with.

my Blob @chunks;
say ([~] @chunks).perl; # OUTPUT: «""␤»

This might cause a problem if you attempt to use the result while assuming that it is a Blob:

my Blob @chunks;
say ([~] @chunks).decode; # OUTPUT: «No such method 'decode' for invocant of type 'Str'. Did you mean 'encode'?␤…»

There are many ways to cover that case. You can avoid the [ ] metaoperator altogether:

my @chunks;
# …
say Blob.new: |«@chunks; # OUTPUT: «Blob:0x<>␤»

Alternatively, you can initialize the array with an empty Blob:

my @chunks = Blob.new;
# …
say [~] @chunks; # OUTPUT: «Blob:0x<>␤»

Or you can use the || operator to fall back to an empty Blob in case the list is empty:

my @chunks;
# …
say [~] @chunks || Blob.new; # OUTPUT: «Blob:0x<>␤»

Please note that a similar issue may arise when reducing lists with other operators.

Maps

Beware of nesting Maps in sink context

Maps apply an expression to every element of a List and return a Seq:

say <þor oðin loki>.map: *.codes; # OUTPUT: «(3 4 4)␤»

Maps are often used as a compact substitute for a loop, performing some kind of action in the map code block:

<þor oðin loki>.map: *.codes.say; # OUTPUT: «3␤4␤4␤»

The problem might arise when maps are nested and in sink context:

<foo bar ber>.map: { $^a.comb.map: { $^b.say } }; # OUTPUT: «»

You might expect the innermost map to bubble the result up to the outermost map, but it simply does nothing. Maps return Seqs, and in sink context the innermost map will iterate and discard the produced values, which is why it yields nothing.
Simply using say at the beginning of the sentence will save the result from sink context:

say <foo bar ber>.map: *.comb.map: *.say;
# OUTPUT: «f␤o␤o␤b␤a␤r␤b␤e␤r␤((True True True) (True True True) (True True True))␤»

However, it will not work as intended; the first f␤o␤o␤b␤a␤r␤b␤e␤r␤ is the result of the innermost say, but then say returns a Bool, True in this case. Those Trues are what get printed by the outermost say, one for every letter. A much better option would be to flatten the outermost sequence:

<foo bar ber>.map({ $^a.comb.map: { $^b.say } }).flat; # OUTPUT: «f␤o␤o␤b␤a␤r␤b␤e␤r␤»

Of course, saving say for the result will also produce the intended output, as it will save the two nested sequences from sink context:

say <foo bar ber>.map: { $^þ.comb }; # OUTPUT: «((f o o) (b a r) (b e r))»

Smartmatching

The smartmatch operator shortcuts to the right hand side accepting the left hand side. This may cause some confusion.

Smartmatch and WhateverCode

Using WhateverCode in the left hand side of a smartmatch does not work as expected, or at all:

my @a = <1 2 3>;
say @a.grep( *.Int ~~ 2 );
# OUTPUT: «Cannot use Bool as Matcher with '.grep'. Did you mean to use $_ inside a block?␤␤␤»

The error message does not make a lot of sense. It does, however, if you put it in terms of the ACCEPTS method: that code is equivalent to 2.ACCEPTS( *.Int ), but *.Int cannot be coerced to Numeric, being as it is a Block. Solution: don't use WhateverCode in the left hand side of a smartmatch:

my @a = <1 2 3>;
say @a.grep( 2 ~~ *.Int ); # OUTPUT: «(2)␤»

Containers

A low-level explanation of Perl 6 containers

This section explains the levels of indirection involved in dealing with variables and container elements. The different types of containers used in Perl 6 are explained, along with the actions applicable to them, like assigning, binding and flattening. More advanced topics like self-referential data, type constraints and custom containers are discussed at the end.

What is a variable?

Some people like to say "everything is an object", but in fact a variable is not a user-exposed object in Perl 6. When the compiler encounters a variable declaration like my $x, it registers it in some internal symbol table. This internal symbol table is used to detect undeclared variables and to tie the code generation for the variable to the correct scope. At runtime, a variable appears as an entry in a lexical pad, or lexpad for short. This is a per-scope data structure that stores a pointer for each variable. In the case of my $x, the lexpad entry for the variable $x is a pointer to an object of type Scalar, usually just called the container.

Scalar containers

Although objects of type Scalar are everywhere in Perl 6, you rarely see them directly as objects, because most operations decontainerize, which means they act on the Scalar container's contents instead of the container itself. In code like

my $x = 42;
say $x;

the assignment $x = 42 stores a pointer to the Int object 42 in the scalar container to which the lexpad entry for $x points. The assignment operator asks the container on the left to store the value on its right. What exactly that means is up to the container type. For Scalar it means "replace the previously stored value with the new one". Note that subroutine signatures allow passing around of containers:

sub f($a is rw) {
    $a = 23;
}
my $x = 42;
f($x);
say $x; # OUTPUT: «23␤»

Inside the subroutine, the lexpad entry for $a points to the same container that $x points to outside the subroutine.
Which is why assignment to $a also modifies the contents of $x. Likewise, a routine can return a container if it is marked as is rw:

my $x = 23;
sub f() is rw { $x };
f() = 42;
say $x; # OUTPUT: «42␤»

For explicit returns, return-rw instead of return must be used. Returning a container is how is rw attribute accessors work. So

class A {
    has $.attr is rw;
}

is equivalent to

class A {
    has $!attr;
    method attr() is rw { $!attr }
}

Scalar containers are transparent to type checks and most kinds of read-only accesses. A .VAR makes them visible:

my $x = 42;
say $x.^name;     # OUTPUT: «Int␤»
say $x.VAR.^name; # OUTPUT: «Scalar␤»

And is rw on a parameter requires the presence of a writable Scalar container:

sub f($x is rw) { say $x };
f 42;
CATCH { default { say .^name, ': ', .Str } };
# OUTPUT: «X::Parameter::RW: Parameter '$x' expected a writable container, but got Int value␤»

Callable containers

Callable containers provide a bridge between the syntax of a Routine call and the actual call of the method CALL-ME of the object that is stored in the container. The sigil & is required when declaring the container and has to be omitted when executing the Callable. The default type constraint is Callable.

my &callable = -> $ν { say "$ν is", $ν ~~ Int ?? " whole" !! " not whole" }
callable( ⅓ );
callable( 3 );

The sigil has to be provided when referring to the value stored in the container. This in turn allows Routines to be used as arguments to calls.

sub f() {}
my &g = sub {}
sub caller(&c1, &c2) { c1, c2 }
caller(&f, &g);

Binding

Next to assignment, Perl 6 also supports binding with the := operator. When binding a value or a container to a variable, the lexpad entry of the variable is modified (and not just the container it points to). If you write

my $x := 42;

then the lexpad entry for $x directly points to the Int 42. Which means that you cannot assign to it anymore:

my $x := 42;
$x = 23;
CATCH { default { say .^name, ': ', .Str } };
# OUTPUT: «X::AdHoc: Cannot assign to an immutable value␤»

You can also bind variables to other variables:

my $a = 0;
my $b = 0;
$a := $b;
$b = 42;
say $a; # OUTPUT: «42␤»

Here, after the initial binding, the lexpad entries for $a and $b both point to the same scalar container, so assigning to one variable also changes the contents of the other. You've seen this situation before: it is exactly what happened with the signature parameter marked as is rw. Sigilless variables and parameters with the trait is raw always bind (whether = or := is used):

my $a = 42;
my \b = $a;
b++;
say $a; # OUTPUT: «43␤»

sub f($c is raw) { $c++ }
f($a);
say $a; # OUTPUT: «44␤»

Scalar containers and listy things

There are a number of positional container types with slightly different semantics in Perl 6. The most basic one is List; it is created by the comma operator.

say (1, 2, 3).^name; # OUTPUT: «List␤»

A list is immutable, which means you cannot change the number of elements in a list. But if one of the elements happens to be a scalar container, you can still assign to it:

my $x = 42;
($x, 1, 2)[0] = 23;
say $x; # OUTPUT: «23␤»
($x, 1, 2)[1] = 23; # Cannot modify an immutable value
CATCH { default { say .^name, ': ', .Str } };
# OUTPUT: «X::Assignment::RO: Cannot modify an immutable Int␤»

So the list doesn't care about whether its elements are values or containers; they just store and retrieve whatever was given to them. Lists can also be lazy; in that case, elements at the end are generated on demand from an iterator.
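As a quick illustration of that laziness — a minimal sketch — an infinite sequence can be bound to a variable, and elements only come into existence when they are asked for:

my $fib := (1, 1, * + * ... Inf); # an infinite, lazy Seq
say $fib[10];                     # OUTPUT: «89␤» – computed on demand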
An Array is just like a list, except that it forces all its elements to be containers, which means that you can always assign to elements:

my @a = 1, 2, 3;
@a[0] = 42;
say @a; # OUTPUT: «[42 2 3]␤»

@a actually stores three scalar containers. @a[0] returns one of them, and the assignment operator replaces the integer value stored in that container with the new one, 42.

Assigning and binding to array variables

Assignment to a scalar variable and to an array variable both do the same thing: discard the old value(s), and enter some new value(s). Nevertheless, it's easy to observe how different they are:

my $x = 42; say $x.^name; # OUTPUT: «Int␤»
my @a = 42; say @a.^name; # OUTPUT: «Array␤»

This is because the Scalar container type hides itself well, but Array makes no such effort. Also, assignment to an array variable is coercive, so you can assign a non-array value to an array variable. To place a non-Array into an array variable, binding works:

my @a := (1, 2, 3);
say @a.^name; # OUTPUT: «List␤»

Binding to array elements

As a curious side note, Perl 6 supports binding to array elements:

my @a = (1, 2, 3);
@a[0] := my $x;
$x = 42;
say @a; # OUTPUT: «[42 2 3]␤»

If you've read and understood the previous explanations, it is now time to wonder how this can possibly work. After all, binding to a variable requires a lexpad entry for that variable, and while there is one for an array, there aren't lexpad entries for each array element, because you cannot expand the lexpad at runtime. The answer is that binding to array elements is recognized at the syntax level and, instead of emitting code for a normal binding operation, a special method (called BIND-KEY) is called on the array. This method handles binding to array elements. Note that, while supported, one should generally avoid directly binding uncontainerized things into array elements. Doing so may produce counter-intuitive results when the array is used later.

my @a = (1, 2, 3);
@a[0] := 42;      # This is not recommended, use assignment instead.
my $b := 42;
@a[1] := $b;      # Nor is this.
@a[2] = $b;       # ...but this is fine.
@a[1, 2] := 1, 2; # runtime error: X::Bind::Slice
CATCH { default { say .^name, ': ', .Str } };
# OUTPUT: «X::Bind::Slice: Cannot bind to Array slice␤»

Operations that mix Lists and Arrays generally protect against such a thing happening accidentally.

Flattening, items and containers

The % and @ sigils in Perl 6 generally indicate multiple values to an iteration construct, whereas the $ sigil indicates only one value.

my @a = 1, 2, 3; for @a { };   # 3 iterations
my $a = (1, 2, 3); for $a { }; # 1 iteration

@-sigiled variables do not flatten in list context:

my @a = 1, 2, 3;
my @b = @a, 4, 5;
say @b.elems; # OUTPUT: «3␤»

There are operations that flatten out sublists that are not inside a scalar container: slurpy parameters (*@a) and explicit calls to flat:

my @a = 1, 2, 3;
say (flat @a, 4, 5).elems; # OUTPUT: «5␤»
sub f(*@x) { @x.elems };
say f @a, 4, 5; # OUTPUT: «5␤»

You can also use | to create a Slip, introducing a list into the other:

my @l := 1, 2, (3, 4, (5, 6)), [7, 8, (9, 10)];
say (|@l, 11, 12);     # OUTPUT: «(1 2 (3 4 (5 6)) [7 8 (9 10)] 11 12)␤»
say (flat @l, 11, 12); # OUTPUT: «(1 2 3 4 5 6 7 8 (9 10) 11 12)␤»

In the first case, every element of @l is slipped as the corresponding elements of the resulting list. flat, on the other hand, flattens all elements, including the elements of the included array, except for (9 10).
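That surviving (9 10) hints at what the next paragraphs explain: it sits in an element of an Array, and Array elements are Scalar containers. A quick check — a sketch reusing the @l list from above — makes that visible:

my @l := 1, 2, (3, 4, (5, 6)), [7, 8, (9, 10)];
say @l[3][2].VAR.^name; # OUTPUT: «Scalar␤» – itemized, so flat leaves it alone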
As hinted above, scalar containers prevent that flattening:

sub f(*@x) { @x.elems };
my @a = 1, 2, 3;
say f $@a, 4, 5; # OUTPUT: «3␤»

The @ character can also be used as a prefix to coerce the argument to a list, thus removing a scalar container:

my $x = (1, 2, 3);
.say for @$x; # 3 iterations

However, the decont operator <> is more appropriate to decontainerize items that aren't lists:

my $x = ^Inf .grep: *.is-prime;
say "$_ is prime" for @$x;  # WRONG! List keeps values, thus leaking memory
say "$_ is prime" for $x<>; # RIGHT. Simply decontainerize the Seq

my $y := ^Inf .grep: *.is-prime; # Even better; no Scalars involved at all

Methods generally don't care whether their invocant is in a scalar, so

my $x = (1, 2, 3);
$x.map(*.say); # 3 iterations

maps over a list of three elements, not of one.

Self-referential data

Container types, including Array and Hash, allow you to create self-referential structures.

my @a;
@a[0] = @a;
put @a.perl; # OUTPUT: «((my @Array_75093712) = [@Array_75093712,])␤»

Although Perl 6 does not prevent you from creating and using self-referential data, by doing so you may end up in a loop trying to dump the data. As a last resort, you can use Promises to handle timeouts.

Type constraints

Any container can have a type constraint in the form of a type object or a subset. Both can be placed between a declarator and the variable name, or after the trait of. The constraint is a property of the variable, not the container.

subset Three-letter of Str where .chars == 3;
my Three-letter $acronym = "ÞFL";

In this case, the type constraint is the (compile-time defined) subset Three-letter. Variables may have no container in them, yet still offer the ability to re-bind and typecheck that rebind. The reason is that in such cases the binding operator := performs the typecheck:

my Int \z = 42;
z := 100; # OK
z := "x"; # Typecheck failure

The same isn't the case when, say, binding to a Hash key, as the binding is then handled by a method call (even though the syntax remains the same, using the := operator). The default type constraint of a Scalar container is Mu. Introspection of type constraints on containers is provided by the .VAR.of method, which for @- and %-sigiled variables gives the constraint for values:

my Str $x;
say $x.VAR.of; # OUTPUT: «(Str)␤»
my Num @a;
say @a.VAR.of; # OUTPUT: «(Num)␤»
my Int %h;
say %h.VAR.of; # OUTPUT: «(Int)␤»

Definedness constraints

A container can also enforce a variable to be defined. Put a smiley in the declaration:

my Int:D $def = 3;
say $def;   # OUTPUT: «3␤»
$def = Int; # Typecheck failure

You'll also need to initialize the variable in the declaration; it can't be left undefined, after all. It's also possible to have this constraint enforced in all variables declared in a scope with the default defined variables pragma. People coming from other languages where variables are always defined will want to have a look at it.

Custom containers

To provide custom containers, Perl 6 provides the class Proxy. It takes two methods that are called when values are stored or fetched from the container. Type checks are not done by the container itself, and other restrictions, like readonlyness, can be broken. The returned value must therefore be of the same type as the type of the variable it is bound to. We can use type captures to work with types in Perl 6.
sub lucky(::T $type) {
    my T $c-value; # closure variable
    return Proxy.new(
        FETCH => method () { $c-value },
        STORE => method (T $new-value) {
            X::OutOfRange.new(what => 'number', got => '13', range => '-∞..12, 14..∞').throw
                if $new-value == 13;
            $c-value = $new-value;
        }
    );
}

my Int $a := lucky(Int);
say $a = 12;    # OUTPUT: «12␤»
say $a = 'FOO'; # X::TypeCheck::Binding
say $a = 13;    # X::OutOfRange
CATCH { default { say .^name, ': ', .Str } };

Contexts and contextualizers

What are contexts and how to get into them

A context is needed, on many occasions, to interpret the value of a container. In Perl 6, we will use context to coerce the value of a container into some type or class, or to decide what to do with it, as in the case of the sink context.

Sink

Sink is equivalent to void context, that is, a context in which we throw (down the sink, as it were) the result of an operation or the return value from a block. In general, this context will be invoked in warnings and errors when a statement does not know what to do with that value.

my $sub = -> $a { return $a² };
$sub; # OUTPUT: «WARNINGS:␤Useless use of $sub in sink context (line 1)␤»

You can force sink context on Iterators by using the sink-all method. Procs can also be sunk via the sink method, forcing them to raise an exception and not return anything. In general, blocks will warn if evaluated in sink context; however, gather/take blocks are explicitly evaluated in sink context, with values returned explicitly using take. In sink context, an object will call its sink method if present:

sub foo {
    return [<a b c>] does role {
        method sink { say "sink called" }
    }
}
foo # OUTPUT: sink called

Number

This context, and probably all of them except sink above, are conversion or interpretation contexts in the sense that they take an untyped or typed variable and duck-type it to whatever is needed to perform the operation. In some cases that will imply a conversion (from Str to Numeric, for instance); in other cases simply an interpretation (IntStr will be interpreted as Int or as Str). Number context is called whenever we need to apply a numerical operation on a variable.

my $not-a-string = "1 ";
my $neither-a-string = "3 ";
say $not-a-string + $neither-a-string; # OUTPUT: «4␤»

In the code above, strings will be interpreted in numeric context as long as they contain only digits and no other characters. They can have any amount of leading or trailing whitespace, however. Numeric context can be forced by using arithmetic operators such as + or -. In that context, the Numeric method will be called if available, and the value returned will be used as the numeric value of the object.

my $t = True;
my $f = False;
say $t + $f;      # OUTPUT: «1␤»
say $t.Numeric;   # OUTPUT: «1␤»
say $f.Numeric;   # OUTPUT: «0␤»
my $list = <a b c>;
say True + $list; # OUTPUT: «4␤»

In the case of listy things, the numeric value will in general be equivalent to .elems; in some cases, like Thread, it will return a unique thread identifier.

String

In a string context, values can be manipulated as strings. This context is used, for instance, for coercing non-string values so that they can be printed to standard output.

put $very-complicated-and-hairy-object; # OUTPUT: something meaningful

Or when smartmatching to a regular expression:

put 333444777 ~~ /(3+)/; # OUTPUT: «「333」␤ 0 => 「333」␤»

In general, the Str routine will be called on a variable to contextualize it; since this method is inherited from Mu, it is always present, but it is not always guaranteed to work. In some core classes it will issue a warning.
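Your own classes can make string context useful by providing a Str method of their own; a minimal sketch (the Point class here is made up for illustration):

class Point {
    has $.x;
    has $.y;
    method Str { "($!x, $!y)" } # called in string context
}
put Point.new( x => 1, y => 2 ); # OUTPUT: «(1, 2)␤»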
~ is the (unary) string contextualizer. As an operator, it concatenates strings, but as a prefix operator it becomes the string context operator.

my @array = [ [1,2,3], [4,5,6] ];
say ~@array; # OUTPUT: «1 2 3 4 5 6␤»

This will happen also in a reduction context, when [~] is applied to a list:

say [~] [ 3, 5+6i, Set(<a b c>), [1,2,3] ]; # OUTPUT: «35+6ic a b1 2 3␤»

In that sense, empty lists or other containers will stringify to an empty string:

say [~] []; # OUTPUT: «␤»

Since ~ also acts as the buffer concatenation operator, it will have to check that every element is not empty, since a single empty buffer in string context will behave as a string, thus yielding an error.

say [~] Buf.new(0x3,0x33), Buf.new(0x2,0x22); # OUTPUT: «Buf:0x<03 33 02 22>␤»

However,

my $non-empty = Buf.new(0x3, 0x33);
my $empty = [];
my $non-empty-also = Buf.new(0x2,0x22);
say [~] $non-empty, $empty, $non-empty-also;
# OUTPUT: «Cannot use a Buf as a string, but you called the Stringy method on it␤»

Since ~ puts the second element of this list into string context, it will use the form of the operator that applies to strings, thus yielding the shown error. Simply making sure that everything you concatenate is a buffer will avoid this problem.

my $non-empty = Buf.new(0x3, 0x33);
my $empty = Buf.new();
my $non-empty-also = Buf.new(0x2,0x22);
say [~] $non-empty, $empty, $non-empty-also; # OUTPUT: «Buf:0x<03 33 02 22>␤»

In general, a context will coerce a variable to a particular type by calling the contextualizer; in the case of mixins, if the context class is mixed in, it will behave in that way.

my $described-number = 1i but 'Unity in complex plane';
put $described-number; # OUTPUT: «Unity in complex plane␤»

but creates a mixin, which endows the complex number with a Str method. put contextualizes it into a string, that is, it calls Str, the string contextualizer, with the result shown above.

Control flow

Statements used to control the flow of execution

Statements

Perl 6 programs consist of one or more statements. Simple statements are separated by semicolons. The following program will say "Hello" and then say "World" on the next line.

say "Hello";
say "World";

In most places where spaces appear in a statement, and before the semicolon, a statement may be split up over many lines. Also, multiple statements may appear on the same line. It would be awkward, but the above could also be written as:

say
"Hello"; say "World";

Blocks

Like many languages, Perl 6 uses blocks enclosed by { and } to turn multiple statements into a single statement. It is OK to skip the semicolon between the last statement in a block and the closing }.

{ say "Hello"; say "World" }

When a block stands alone as a statement, it will be entered immediately after the previous statement finishes, and the statements inside it will be executed.

say 1;            # OUTPUT: «1␤»
{ say 2; say 3 }; # OUTPUT: «2␤3␤»
say 4;            # OUTPUT: «4␤»

Unless it stands alone as a statement, a block simply creates a closure. The statements inside are not executed immediately. Closures are another topic, and how they are used is explained elsewhere. For now it is just important to understand when blocks run and when they do not:

say "We get here"; { say "then here." }; { say "not here"; 0; } or die;

In the above example, after running the first statement, the first block stands alone as a second statement, so we run the statement inside it. The second block does not stand alone as a statement, so instead, it makes an object of type Block but does not run it.
Object instances are usually considered to be true, so the code does not die, even though that block would evaluate to 0, were it to be executed. The example does not say what to do with the Block object, so it just gets thrown away. Most of the flow control constructs covered below are just ways to tell Perl 6 when, how, and how many times to enter blocks like that second block. Before we go into those, an important side note on syntax: if there is nothing (or nothing but comments) on a line after a closing curly brace where you would normally put a semicolon, then you do not need the semicolon:

# All three of these lines can appear as a group, as is, in a program
{ 42.say }             # OUTPUT: «42␤»
{ 43.say }             # OUTPUT: «43␤»
{ 42.say }; { 43.say } # OUTPUT: «42␤43␤»

...but:

{ 42.say }  { 43.say }  # Syntax error
{ 42.say; } { 43.say }  # Also a syntax error, of course

So, be careful when you backspace in a line-wrapping editor:

{ "Without semicolons line-wrapping can be a bit treacherous.".say } \
{ 43.say } # Syntax error

You have to watch out for this in most languages anyway to prevent things from getting accidentally commented out. Many of the examples below may have unnecessary semicolons for clarity. Class bodies behave like simple blocks for any top-level expression; the same goes for roles and other packages, like grammars (which are actually classes) or modules.

class C { say "I live"; die "I will never live!" };
my $c = C.new;
# OUTPUT: Fails and writes «I live␤I will never live!␤»

This block will first run the first statement, and then die printing the second statement. $c will never get a value.

Phasers

Blocks may have phasers: special labeled blocks that run at particular phases of the enclosing block's execution. See the phasers page for the details.

do

The simplest way to run a block where it cannot be a stand-alone statement is by writing do before it:

# This dies half of the time
do { say "Heads I win, tails I die."; Bool.pick } or die;
say "I win.";

Note that you need a space between the do and the block. The whole do {...} evaluates to the final value of the block. The block will be run when that value is needed in order to evaluate the rest of the expression. So:

False and do { 42.say };

...will not say 42. However, the block is only evaluated once each time the expression it is contained in is evaluated:

# This says "(..1 ..2 ..3)" not "(..1 ...2 ....3)"
my $f = ".";
say do { $f ~= "." } X~ 1, 2, 3;

In other words, it follows the same reification rules as everything else. Technically, do is a loop which runs exactly one iteration. A do may also be used on a bare statement (without curly braces), but this is mainly just useful for avoiding the syntactical need to parenthesize a statement if it is the last thing in an expression:

3, do if 1 { 2 } ; # OUTPUT: «(3, 2)␤»
3, (if 1 { 2 })  ; # OUTPUT: «(3, 2)␤»
3, if 1 { 2 }    ; # Syntax error

start

The simplest way to run a block asynchronously is by writing start before it:

start { sleep 1; say "done" }
say "working"; # working, done

Note that you need a space between the start and the block. The start {...} immediately returns a Promise that can be safely ignored if you are not interested in the result of the block. If you are interested in the final value of the block, you can call the .result method on the returned promise. So:

my $promise = start { sleep 10; 42 }
# ... do other stuff
say "The result is $promise.result()";

If the code inside the block has not finished, the call to .result will wait until it is done.
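You can also await the promise, which blocks until the promise is kept and returns its result — just another spelling of the same wait; a small sketch:

my $promise = start { sleep 2; 'done' };
say await $promise; # OUTPUT: «done␤» – printed after roughly two seconds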
A start may also be used on a bare statement (without curly braces). This is mainly just useful when calling a subroutine or method on an object is the only thing to do asynchronously.

if

To conditionally run a block of code, use an if followed by a condition. The condition, an expression, will be evaluated immediately after the statement before the if finishes. The block attached to the condition will only be evaluated if the condition means True when coerced to Bool. Unlike some languages, the condition does not have to be parenthesized; instead, the { and } around the block are mandatory:

if 1 { "1 is true".say }    ; # says "1 is true"
if 1   "1 is true".say      ; # syntax error, missing block
if 0 { "0 is true".say }    ; # does not say anything, because 0 is false
if 42.say and 0 { 43.say }  ; # says "42" but does not say "43"

There is also a form of if called a "statement modifier" form. In this case, the if and then the condition come after the code you want to run conditionally. Do note that the condition is still always evaluated first:

43.say if 42.say and 0; # says "42" but does not say "43"
43.say if 42.say and 1; # says "42" and then says "43"
say "It is easier to read code when 'if's are kept on left of screen"
    if True;            # says the above, because it is true
{ 43.say } if True;     # says "43" as well

The statement modifier form is probably best used sparingly. The if statement itself will either slip us an empty list, if it does not run the block, or it will return the value which the block produces:

my $d = 0; say (1, (if 0 { $d += 42; 2; }), 3, $d); # says "(1 3 0)"
my $c = 0; say (1, (if 1 { $c += 42; 2; }), 3, $c); # says "(1 2 3 42)"
say (1, (if 1 { 2, 2 }), 3); # does not slip, says "(1 (2 2) 3)"

For the statement modifier it is the same, except you have the value of the statement instead of a block:

say (1, (42 if True) , 2); # says "(1 42 2)"
say (1, (42 if False), 2); # says "(1 2)"
say (1, 42 if False , 2);  # says "(1 42)" because "if False, 2" is true

The if does not change the topic ($_) by default. In order to access the value which the conditional expression produced, you have to ask for it more strongly:

$_ = 1; if 42 { $_.say }               ; # says "1"
$_ = 1; if 42 -> $_ { $_.say }         ; # says "42"
$_ = 1; if 42 -> $a { $_.say; $a.say } ; # says "1" then says "42"
$_ = 1; if 42 { $_.say; $^a.say }      ; # says "1" then says "42"

else/elsif

A compound conditional may be produced by following an if conditional with else to provide an alternative block to run when the conditional expression is false:

if 0 { say "no" } else { say "yes" } ; # says "yes"
if 0 { say "no" } else{ say "yes" }  ; # says "yes", space is not required

The else cannot be separated from the conditional statement by a semicolon, but as a special case, it is OK to have a newline.

if 0 { say "no" }; else { say "yes" } ; # syntax error
if 0 { say "no" }
else { say "yes" }                    ; # says "yes"

Additional conditions may be sandwiched between the if and the else using elsif. An extra condition will only be evaluated if all the conditions before it were false, and only the block next to the first true condition will be run. You can end with an elsif instead of an else if you want.
if 0 { say "no" } elsif False { say "NO" } else { say "yes" } # says "yes" if 0 { say "no" } elsif True { say "YES" } else { say "yes" } # says "YES" if 0 { say "no" } elsif False { say "NO" } # does not say anything sub right { "Right!".say; True } sub wrong { "Wrong!".say; False } if wrong() { say "no" } elsif right() { say "yes" } else { say "maybe" } # The above says "Wrong!" then says "Right!" then says "yes" You cannot use the statement modifier form with else or elsif: 42.say if 0 else { 43.say } # syntax error All the same rules for semicolons and newlines apply, consistently if 0 { say 0 }; elsif 1 { say 1 } else { say "how?" } ; # syntax error if 0 { say 0 } elsif 1 { say 1 }; else { say "how?" } ; # syntax error if 0 { say 0 } elsif 1 { say 1 } else { say "how?" } ; # says "1" if 0 { say 0 } elsif 1 { say 1 } else { say "how?" } ; # says "1" if 0 { say 0 } elsif 1 { say 1 } else { say "how?" } ; # says "1" if 0 { say "no" } elsif False { say "NO" } else { say "yes" } ; # says "yes" The whole thing either slips us an empty list (if no blocks were run) or returns the value produced by the block that did run: my$d = 0; say (1, (if 0 { $d += 42; "two"; } elsif False {$d += 43; 2; }), 3, $d); # says "(1 3 0)" my$c = 0; say (1, (if 0 { $c += 42; "two"; } else {$c += 43; 2; }), 3, $c); # says "(1 2 3 43)" It's possible to obtain the value of the previous expression inside an else, which could be from if or the last elsif if any are present:$_ = 1; if 0 { } else -> $a { "$_ $a".say } ; # says "1 0"$_ = 1; if False { } else -> $a { "$_ $a".say } ; # says "1 False" if False { } elsif 0 { } else ->$a { $a.say } ; # says "0" unless When you get sick of typing "if not (X)" you may use unless to invert the sense of a conditional statement. You cannot use else or elsif with unless because that ends up getting confusing. Other than those two differences unless works the same as #if: unless 1 { "1 is false".say } ; # does not say anything, since 1 is true unless 1 "1 is false".say ; # syntax error, missing block unless 0 { "0 is false".say } ; # says "0 is false" unless 42.say and 1 { 43.say } ; # says "42" but does not say "43" 43.say unless 42.say and 0; # says "42" and then says "43" 43.say unless 42.say and 1; # says "42" but does not say "43"$_ = 1; unless 0 { $_.say } ; # says "1"$_ = 1; unless 0 -> $_ {$_.say } ; # says "0" $_ = 1; unless False ->$a { $a.say } ; # says "False" my$c = 0; say (1, (unless 0 { $c += 42; 2; }), 3,$c); # says "(1 2 3 42)" my $d = 0; say (1, (unless 1 {$d += 42; 2; }), 3, $d); # says "(1 3 0)" with, orwith, without The with statement is like if but tests for definedness rather than truth. In addition, it topicalizes on the condition, much like given: with "abc".index("a") { .say } # prints 0 Instead of elsif, orwith may be used to chain definedness tests: # The below code says "Found a at 0" my$s = "abc"; with $s.index("a") { say "Found a at$_" } orwith $s.index("b") { say "Found b at$_" } orwith $s.index("c") { say "Found c at$_" } else { say "Didn't find a, b or c" } You may intermix if-based and with-based clauses. 
# This says "Yes" if 0 { say "No" } orwith Nil { say "No" } orwith 0 { say "Yes" }; As with unless, you may use without to check for undefinedness, but you may not add an else clause: my $answer = Any; without$answer { warn "Got: {$_.perl}" } There are also with and without statement modifiers: my$answer = (Any, True).roll; say 42 with $answer; warn "undefined answer" without$answer; when The when block is similar to an if block and either or both can be used in an outer block; they also both have a "statement modifier" form. But there is a difference in how following code in the same, outer block is handled: When the when block is executed, control is passed to the enclosing block and following statements are ignored; but when the if block is executed, following statements are executed. There are other ways to modify the default behavior of each which are discussed in other sections. The following examples should illustrate the if or when block's default behavior assuming no special exit or other side effect statements are included in the if or when blocks: { if X {...} # if X is true in boolean context, block is executed # following statements are executed regardless } { when X {...} # if X is true in boolean context, block is executed # and control passes to the outer block # following statements are NOT executed } Should the if and when blocks above appear at file scope, following statements would be executed in each case. There is one other feature when has that if doesn't: the when's boolean context test defaults to $_ ~~ while the if's does not. That has an effect on how one uses the X in the when block without a value for$_ (it's Any in that case and Any smartmatches on True: Any ~~ True yields True). Consider the following: { my $a = 1; my$b = True; when $a { say 'a' }; # no output when so$a { say 'a' } # a (in "so $a" 'so' coerces$a to Boolean context True # which matches with Any) when $b { say 'b' }; # no output (this statement won't be run) } Finally, when's statement modifier form does not effect execution of following statements either inside or outside of another block: say "foo" when X; # if X is true statement is executed # following statements are not affected Since a successful match will exit the block, the behavior of this piece of code:$_ = True; my $a; {$a = do when .so { "foo" } }; say $a; # OUTPUT: «(Any)␤» is explained since the do block is abandoned before any value is stored or processed. However, in this case:$_ = False; my $a; {$a = do when .so { "foo" } }; say $a; # OUTPUT: «False␤» the block is not abandoned since the comparison is false, so$a will actually get a value. for The for loop iterates over a list, running the statements inside a block once on each iteration. If the block takes parameters, the elements of the list are provided as arguments. my @foo = 1..3; for @foo { $_.print } # prints each value contained in @foo for @foo { .print } # same thing, because .print implies a$_ argument for @foo { 42.print } # prints 42 as many times as @foo has elements Pointy block syntax or a placeholder may be used to name the parameter, of course. my @foo = 1..3; for @foo -> $item { print$item } for @foo { print $^item } # same thing Multiple parameters can be declared, in which case the iterator takes as many elements from the list as needed before running the block. 
my @foo = 1..3;
for @foo.kv -> $idx, $val { say "$idx: $val" }
my %hash = <a b c> Z=> 1,2,3;
for %hash.kv -> $key, $val { say "$key => $val" }
for 1, 1.1, 2, 2.1 { say "$^x < $^y" } # says "1 < 1.1" then says "2 < 2.1"

Parameters of a pointy block can have default values, allowing you to handle lists with missing elements.

my @list = 1,2,3,4;
for @list -> $a, $b = 'N/A', $c = 'N/A' {
    say "$a $b $c"
}
# OUTPUT: «1 2 3␤4 N/A N/A␤»

If the postfix form of for is used, a block is not required and the topic is set for the statement list.

say „I $_ butterflies!“ for <♥ ♥ ♥>;
# OUTPUT«I ♥ butterflies!␤I ♥ butterflies!␤I ♥ butterflies!␤»

A for may be used on lazy lists – it will only take elements from the list when they are needed, so to read a file line by line, you could use:

for $*IN.lines -> $line { $line.say }

Iteration variables are always lexical, so you don't need to use my to give them the appropriate scope. Also, they are read-only aliases. If you need them to be read-write, use <-> instead of ->. If you need to make $_ read-write in a for loop, do so explicitly.

my @foo = 1..3;
for @foo <-> $_ { $_++ }

A for loop can produce a List of the values produced by each run of the attached block. To capture these values, put the for loop in parentheses or assign them to an array:

(for 1, 2, 3 { $_ * 2 }).say;              # OUTPUT «(2 4 6)␤»
my @a = do for 1, 2, 3 { $_ * 2 }; @a.say; # OUTPUT «[2 4 6]␤»
my @b = (for 1, 2, 3 { $_ * 2 }); @b.say;  # OUTPUT: «[2 4 6]␤»

gather/take

gather is a statement or block prefix that returns a sequence of values. The values come from calls to take in the dynamic scope of the gather block.

my @a = gather {
    take 1;
    take 5;
    take 42;
}
say join ', ', @a; # OUTPUT: «1, 5, 42␤»

gather/take can generate values lazily, depending on context. If you want to force lazy evaluation, use the lazy subroutine or method. Binding to a scalar or sigilless container will also force laziness. For example:

my @vals = lazy gather {
    take 1;
    say "Produced a value";
    take 2;
}
say @vals[0];
say 'between consumption of two values';
say @vals[1];
# OUTPUT:
# 1
# between consumption of two values
# Produced a value
# 2

gather/take is scoped dynamically, so you can call take from subs or methods that are called from within gather:

sub weird(@elems, :$direction = 'forward') {
    my %direction = (
        forward  => sub { take $_ for @elems },
        backward => sub { take $_ for @elems.reverse },
        random   => sub { take $_ for @elems.pick(*) },
    );
    return gather %direction{$direction}();
}
say weird(<a b c>, :direction<backward> ); # OUTPUT: «(c b a)␤»

If values need to be mutable on the caller side, use take-rw. Note that gather/take also works for hashes. The return value is still a Seq, but the assignment to a hash in the following example makes it a hash.

my %h = gather { take "foo" => 1; take "bar" => 2};
say %h; # OUTPUT: «{bar => 2, foo => 1}␤»

supply/emit

emit emits the invocant into the enclosing supply:

my $supply = supply {
    emit $_ for "foo", 42, .5;
}
$supply.tap: {
    say "received {.^name} ($_)";
}
# OUTPUT:
# received Str (foo)
# received Int (42)
# received Rat (0.5)

given

The given statement is Perl 6's topicalizing keyword in a similar way that switch topicalizes in languages such as C. In other words, given sets $_ inside the following block. The keywords for individual cases are when and default.
The usual idiom looks like this:

my $var = (Any, 21, any <answer lie>).pick;
given $var {
    when 21 { say $_ * 2 }
    when 'lie' { .say }
    default { say 'default' }
}

The given statement is often used alone:

given 42 { .say; .Numeric; }

This is a lot more understandable than:

{ .say; .Numeric; }(42)

default and when

A block containing a default statement will be left immediately when the sub-block after the default statement is left. It is as though the rest of the statements in the block were skipped.

given 42 {
    "This says".say;
    $_ == 42 and ( default { "This says, too".say; 43; } );
    "This never says".say;
}
# The above block evaluates to 43

A when statement will also do this (but a when statement modifier will not). In addition, when statements smartmatch the topic ($_) against a supplied expression, such that it is possible to check against values, regular expressions, and types when specifying a match.

for 42, 43, "foo", 44, "bar" {
    when Int { .say }
    when /:i ^Bar/ { .say }
    default { say "Not an Int or a Bar" }
}
# OUTPUT: «42␤43␤Not an Int or a Bar␤44␤bar␤»

In this form, the given/when construct acts much like a set of if/elsif/else statements. Be careful with the order of the when statements. The following code says "Int", not 42.

given 42 {
    when Int { say "Int" }
    when 42 { say 42 }
    default { say "huh?" }
}
# OUTPUT: «Int␤»

When a when statement or default statement causes the outer block to return, nesting when or default blocks do not count as the outer block, so you can nest these statements and still be in the same "switch", just so long as you do not open a new block:

given 42 {
    when Int {
        when 42 { say 42 }
        say "Int"
    }
    default { say "huh?" }
}
# OUTPUT: «42»

when statements can smartmatch against Signatures.

proceed and succeed

Both proceed and succeed are meant to be used only from inside when or default blocks. The proceed statement will immediately leave the when or default block, skipping the rest of the statements and resuming after the block. This prevents the when or default from exiting the outer block.

given * {
    default {
        proceed;
        "This never says".say
    }
}
"This says".say;

This is most often used to enter multiple when blocks. proceed will resume matching after a successful match, like so:

given 42 {
    when Int { say "Int"; proceed }
    when 42 { say 42 }
    when 40..* { say "greater than 40" }
    default { say "huh?" }
}
# OUTPUT: «Int␤42␤»

Note that the when 40..* match didn't occur. For this to match such cases as well, one would need a proceed in the when 42 block. This is not like a C switch statement, because the proceed does not merely enter the directly following block; it attempts to match the given value once more. Consider this code:

given 42 {
    when Int { "Int".say; proceed }
    when 43 { 43.say }
    when 42 { 42.say }
    default { "got change for an existential answer?".say }
}
# OUTPUT: «Int␤42␤»

...which matches the Int, skips 43 since the value doesn't match, matches 42 since this is the next positive match, but doesn't enter the default block since the when 42 block doesn't contain a proceed. By contrast, the succeed keyword short-circuits execution and exits the entire given block at that point. It may also take an argument to specify a final value for the block.

given 42 {
    when Int {
        say "Int";
        succeed "Found";
        say "never this!";
    }
    when 42 { say 42 }
    default { say "dunno?" }
}
# OUTPUT: «Int␤»

If you are not inside a when or default block, it is an error to try to use proceed or succeed.
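Since succeed can set the block's final value, the whole given can be used as an expression; a small sketch:

my $type = do given 42 {
    when Int { succeed "integer" }
    default  { "something else" }
};
say $type; # OUTPUT: «integer␤»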
Also remember, the when statement modifier form does not cause any blocks to be left, and any succeed or proceed in such a statement applies to the surrounding clause, if there is one:

given 42 {
    { say "This says" } when Int;
    "This says too".say;
    when * > 41 {
        { "And this says".say; proceed } when * > 41;
        "This never says".say;
    }
    "This also says".say;
}

given as a statement

given can follow a statement to set the topic in the statement it follows.

.say given "foo"; # OUTPUT: «foo␤»

printf "%s %02i.%02i.%i",
        <Mo Tu We Th Fr Sa Su>[.day-of-week - 1],
        .day, .month, .year given DateTime.now;
# OUTPUT: «Sa 03.06.2016»

loop

The loop statement takes three statements in parentheses separated by ; that take the role of initializer, conditional and incrementer. The initializer is executed once and any variable declaration will spill into the surrounding block. The conditional is executed once per iteration and coerced to Bool; if False, the loop is stopped. The incrementer is executed once per iteration.

loop (my $i = 0; $i < 10; $i++) {
    say $i;
}

The infinite loop does not require parentheses.

loop { say 'forever' }

The loop statement may be used to produce values from the result of each run of the attached block if it appears in lists:

(loop ( my $i = 0; $i++ < 3;) { $i * 2 }).say;              # OUTPUT: «(2 4 6)␤»
my @a = (loop ( my $j = 0; $j++ < 3;) { $j * 2 }); @a.say;  # OUTPUT: «[2 4 6]␤»
my @b = do loop ( my $k = 0; $k++ < 3;) { $k * 2 }; @b.say; # same thing

Unlike a for loop, one should not rely on whether returned values are produced lazily. It would probably be best to use eager to guarantee that a loop whose return value may be used actually runs:

(eager loop (; 2.rand < 1;) { "heads".say })

while, until

The while statement executes the block as long as its condition is true. So

my $x = 1;
while $x < 4 {
    print $x++;
}
print "\n"; # OUTPUT: «123␤»

Similarly, the until statement executes the block as long as the expression is false.

my $x = 1;
until $x > 3 {
    print $x++;
}
print "\n"; # OUTPUT: «123␤»

The condition for while or until can be parenthesized, but there must be a space between the keyword and the opening parenthesis of the condition. Both while and until can be used as statement modifiers. E.g.

my $x = 42; $x-- while $x > 12;

Also see repeat/while and repeat/until below. All these forms may produce a return value the same way loop does.

repeat/while, repeat/until

Executes the block at least once and, if the condition allows, repeats that execution. This differs from while/until in that the condition is evaluated at the end of the loop, even if it appears at the front.

my $x = -42;
repeat {
    $x++;
} while $x < 5;
$x.say; # OUTPUT: «5␤»

repeat {
    $x++;
} while $x < 5;
$x.say; # OUTPUT: «6␤»

repeat while $x < 10 {
    $x++;
}
$x.say; # OUTPUT: «10␤»

repeat while $x < 10 {
    $x++;
}
$x.say; # OUTPUT: «11␤»

repeat {
    $x++;
} until $x >= 15;
$x.say; # OUTPUT: «15␤»

repeat {
    $x++;
} until $x >= 15;
$x.say; # OUTPUT: «16␤»

repeat until $x >= 20 {
    $x++;
}
$x.say; # OUTPUT: «20␤»

repeat until $x >= 20 {
    $x++;
}
$x.say; # OUTPUT: «21␤»

All these forms may produce a return value the same way loop does.

return

The sub return will stop execution of a subroutine or method, run all relevant phasers and provide the given return value to the caller. The default return value is Nil. If a return type constraint is provided, it will be checked unless the return value is Nil. If the type check fails, the exception X::TypeCheck::Return is thrown. If it passes, a control exception is raised that can be caught with CONTROL.
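For instance — a small sketch of the failing case, with a made-up sub name:

sub half(Int $n --> Int) { return $n / 2 } # 3/2 is a Rat, not an Int
say half(3);
CATCH { default { say .^name } } # OUTPUT: «X::TypeCheck::Return␤»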
Any return in a block is tied to the first Routine in the outer lexical scope of that block, no matter how deeply nested. Please note that a return in the root of a package will fail at runtime. A return in a block that is evaluated lazily (e.g. inside map) may find the outer lexical routine gone by the time the block is executed. In almost any case, last is the better alternative. Please check the functions documentation for more information on how return values are handled and produced.

return-rw

The sub return will return values, not containers. Those are immutable and will lead to runtime errors when attempts are made to mutate them.

sub s() { my $a = 41; return $a };
say ++s();
CATCH { default { say .^name, ': ', .Str } };
# OUTPUT: «X::Multi::NoMatch.new(dispatcher …

To return a mutable container, use return-rw.

sub s() { my $a = 41; return-rw $a };
say ++s(); # OUTPUT: «42␤»

The same rules as for return regarding phasers and control exceptions apply.

fail

Leaves the current routine and returns the provided Exception or Str wrapped inside a Failure, after all relevant phasers are executed. If the caller activated fatal exceptions via the pragma use fatal;, the exception is thrown instead of being returned as a Failure.

sub f { fail "WELP!" };
say f;
CATCH { default { say .^name, ': ', .Str } }

once

A block prefixed with once will be executed exactly once, even if placed inside a loop or a recursive routine.

my $guard = 3;
loop {
    last if $guard-- <= 0;
    once { put 'once' };
    print 'many'
} # OUTPUT: «once␤manymanymany»

This works per "clone" of the containing code object, so:

({ once 42.say } xx 3).map: { $_(), $_() }; # says 42 thrice

Note that this is not a thread-safe construct when the same clone of the same block is run by multiple threads. Also remember that methods only have one clone per class, not per object.

quietly

A quietly block will suppress all warnings generated in it.

quietly { warn 'kaput!' };
warn 'still kaput!'; # OUTPUT: «still kaput! [...]␤»

Any warning generated from any routine called from within the block will also be suppressed:

sub told-you { warn 'hey...' };
quietly { told-you; warn 'kaput!' };
warn 'Only telling you now!' # OUTPUT: «Only telling you now!␤ [...] ␤»

LABELs

while, until, loop and for loops can all take a label, which can be used to identify them for next, last, and redo. Nested loops are supported, for instance:

OUTAHERE: while True {
    for 1,2,3 -> $n {
        last OUTAHERE if $n == 2;
    }
}

Labels can also be used within nested loops to name each loop, for instance:

OUTAHERE:
loop ( my $i = 1; True; $i++ ) {
    OUTFOR: for 1,2,3 -> $n {
        # exits the for loop before its natural end
        last OUTFOR if $n == 2;
    }
    # exits the infinite loop
    last OUTAHERE if $i >= 2;
}

next

The next command starts the next iteration of the loop. So the code

my @x = 1, 2, 3, 4, 5;
for @x -> $x {
    next if $x == 3;
    print $x;
}

prints "1245". If the NEXT phaser is present, it runs before the next iteration:

my Int $i = 0;
while ($i < 10) {
    if ($i % 2 == 0) {
        next;
    }
    say "$i is odd.";
    NEXT {
        $i++;
    }
}
# OUTPUT: «1 is odd.␤3 is odd.␤5 is odd.␤7 is odd.␤9 is odd.␤»

last

The last command immediately exits the loop in question.

my @x = 1, 2, 3, 4, 5;
for @x -> $x {
    last if $x == 3;
    print $x;
}

prints "12".
If the LAST phaser is present, it runs before exiting the loop:

my Int $i = 1;
while ($i < 10) {
    if ($i % 5 == 0) {
        last;
    }
    LAST { say "The last number was $i."; }
    NEXT { $i++; }
}
# OUTPUT: «The last number was 5.␤»

redo

The redo command restarts the loop block without evaluating the conditional again:

loop {
    my $x = prompt("Enter a number");
    redo unless $x ~~ /\d+/;
    last;
}

45 Data structures

How Perl 6 deals with data structures and what we can expect from them

Scalar structures

Some classes do not have any internal structure, and to access parts of them, specific methods have to be used. Numbers, strings, and some other monolithic classes are included in that class. They use the $ sigil, although complex data structures can also use it.

my $just-a-number = 7;
my $just-a-string = "8";

There is a Scalar class, which is used internally to assign a default value to variables declared with the $ sigil.

my $just-a-number = 333;
say $just-a-number.VAR.^name; # OUTPUT: «Scalar␤»

Any complex data structure can be scalarized by using the item contextualizer $:

(1, 2, 3, $(4, 5))[3].VAR.^name.say; # OUTPUT: «Scalar␤»

However, this means that it will be treated as such in the context it appears in. You can still access its internal structure.

(1, 2, 3, $(4, 5))[3][0].say; # OUTPUT: «4␤»

An interesting side effect, or maybe intended feature, is that scalarization conserves the identity of complex structures.

for ^2 { my @list = (1, 1); say @list.WHICH; }
# OUTPUT: «Array|93947995146096␤Array|93947995700032␤»

Every time (1, 1) is assigned, the variable created is going to be different in the sense that === will say it is; as shown, different values of the internal pointer representation are printed. However

for ^2 { my $list = (1, 1); say $list.WHICH }
# OUTPUT: «List|94674814008432␤List|94674814008432␤»

In this case, $list is using the Scalar sigil and thus will be a Scalar. Any scalar with the same value will be exactly the same, as shown when printing the pointers.

Complex data structures

Complex data structures fall in two different broad categories: Positional, or list-like, and Associative, or key-value pair like, according to how you access their first-level elements. In general, complex data structures, including objects, will be a combination of both, with object properties assimilated to key-value pairs. While all objects subclass Mu, in general complex objects are instances of subclasses of Any. While it is theoretically possible to mix in Positional or Associative without doing so, most methods applicable to complex data structures are implemented in Any.

Navigating these complex data structures is a challenge, but Perl 6 provides a couple of functions that can be used on them: deepmap and duckmap. While the former will go to every single element, in order, and do whatever the block passed requires,

say [[1, 2, [3, 4]], [[5, 6, [7, 8]]]].deepmap( *.elems );
# OUTPUT: «[[1 1 [1 1]] [1 1 [1 1]]]␤»

which returns 1 for every element because it goes to the deepest level and applies elems to each leaf there, duckmap can perform more complicated operations:

say [[1, 2, [3, 4]], [[5, 6, [7, 8]]]].duckmap: -> $array where .elems == 2 { $array.elems };
# OUTPUT: «[[1 2 2] [5 6 2]]␤»

In this case, it dives into the structure, but returns the element itself if it does not meet the condition in the block (1, 2), returning the number of elements of the array if it does (the two 2s at the end of each subarray).

Since deepmap and duckmap are Any methods, they also apply to Associative arrays:

say %( first => [1, 2], second => [3,4] ).deepmap( *.elems );
# OUTPUT: «{first => [1 1], second => [1 1]}␤»

Only in this case, they will be applied to every list or array that is a value, leaving the keys alone.
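As one more minimal hedged sketch of that structure-preserving behavior (values invented), deepmap can transform every leaf while keeping the nesting intact:

say [[1, 2], 3].deepmap( * + 1 ); # OUTPUT: «[[2 3] 4]␤»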
Positional and Associative can be turned into each other. say %( first => [1, 2], second => [3,4] ).list[0]; # OUTPUT: «second => [3 4]␤» However, in this case, and for Rakudo >= 2018.05, it will return a different value every time it runs. A hash will be turned into a list of the key-value pairs, but it is guaranteed to be disordered. You can also do the operation in the opposite direction, as long as the list has an even number of elements (odd number will result in an error): say <a b c d>.Hash # OUTPUT: «{a => b, c => d}␤» But say <a b c d>.Hash.kv # OUTPUT: «(c d a b)␤» will obtain a different value every time you run it; kv turns every Pair into a list. Complex data structures are also generally Iterable. Generating an iterator out of them will allow the program to visit the first level of the structure, one by one: .say for 'א'..'ס'; # OUTPUT: «א␤ב␤ג␤ד␤ה␤ו␤ז␤ח␤ט␤י␤ך␤כ␤ל␤ם␤מ␤ן␤נ␤ס␤» 'א'..'ס' is a Range, a complex data structure, and with for in front it will iterate until the list is exhausted. You can use for on your complex data structures by overriding the iterator method (from role Iterable): class SortedArray is Array { method iterator() { self.sort.iterator } }; my @thing := SortedArray.new([3,2,1,4]); .say for @thing; # OUTPUT: «1␤2␤3␤4␤» for calls directly the iterator method on @thing making it return the elements of the array in order. Much more on iterating on the page devoted to it. Functional structures Perl 6 is a functional language and, as such, functions are first-class data structures. Functions follow the Callable role, which is the 4th element in the quartet of fundamental roles. Callable goes with the & sigil, although in most cases it is elided for the sake of simplicity; this sigil elimination is always allowed in the case of Callables. my &a-func= { (^($^þ)).Seq }; say a-func(3), a-func(7); # OUTPUT: «(0 1 2)(0 1 2 3 4 5 6)␤» Blocks are the simplest callable structures, since Callables cannot be instantiated. In this case we implement a block that logs events and can retrieve them: my $logger = ->$event, $key = Nil { state %store; if ($event ) { %store{ DateTime.new( now ) } = $event; } else { %store.keys.grep( /$key/ ) } } $logger( "Stuff" );$logger( "More stuff" ); say $logger( Nil, "2018-05-28" ); # OUTPUT: «(Stuff More stuff)␤» A Block has a Signature, in this case two arguments, the first of which is the event that is going to be logged, and the second is the key to retrieve the events. They will be used in an independent way, but its intention is to showcase the use of a state variable that is kept from every invocation to the next. This state variable is encapsulated within the block, and cannot be accessed from outside except by using the simple API the block provides: calling the block with a second argument. The two first invocations log two events, the third invocation at the bottom of the example use this second type of call to retrieve the stored values. Blocks can be cloned: my$clogger = $logger.clone;$clogger( "Clone stuff" ); $clogger( "More clone stuff" ); say$clogger( Nil, "2018-05-28" ); # OUTPUT: «(Clone stuff More clone stuff)␤» And cloning will reset the state variable; instead of cloning, we can create façades that change the API. 
For instance, eliminate the need to use Nil as first argument to retrieve the log for a certain date: my $gets-logs =$logger.assuming( Nil, * ); $logger( %(changing => "Logs") ); say$gets-logs( "2018-05-28" ); # OUTPUT: «({changing => Logs} Stuff More stuff)␤» assuming wraps around a block call, giving a value (in this case, Nil) to the arguments we need, and passing on the arguments to the other arguments we represent using *. In fact, this corresponds to the natural language statement "We are calling $logger assuming the first argument is Nil". We can slightly change the appearance of these two Blocks to clarify they are actually acting on the same block: my$Logger = $logger.clone; my$Logger::logs = $Logger.assuming( *, Nil ); my$Logger::get = $Logger.assuming( Nil, * );$Logger::logs( <an array> ); $Logger::logs( %(key => 42) ); say$Logger::get( "2018-05-28" ); Although :: is generally used for invocation of class methods, it is actually a valid part of the name of a variable. In this case we use them conventionally to simply indicate $Logger::logs and$Logger::get are actually calling $Logger, which we have capitalized to use a class-like appearance. The point of this tutorial is that using functions as first-class citizens, together with the use of state variables, allows the use of certain interesting design patterns such as this one. As such first class data structures, callables can be used anywhere another type of data can. my @regex-check = ( /<alnum>/, /<alpha>/, /<punct>/ ); say @regex-check.map: "33af" ~~ *; # OUTPUT: «(「3」␤ alnum => 「3」 「a」␤ alpha => 「a」 Nil)␤» Regexes are actually a type of callable: say /regex/.does( Callable ); # OUTPUT: «True␤» And in the example above we are calling regexes stored in an array, and applying them to a string literal. Callables are composed by using the function composition operator ∘: my$typer = -> $thing {$thing.^name ~ ' → ' ~ $thing }; my$Logger::withtype = $Logger::logs ∘$typer; $Logger::withtype( Pair.new( 'left', 'right' ) );$Logger::withtype( ¾ ); say $Logger::get( "2018-05-28" ); # OUTPUT: «(Pair → left right Rat → 0.75)␤» We are composing$typer with the $Logger::logs function defined above, obtaining a function that logs an object preceded by ts type, which can be useful for filtering, for instance.$Logger::withtype is, in fact, a complex data structure composed of two functions which are applied in a serial way, but every one of the callables composed can keep state, thus creating complex transformative callables, in a design pattern that is similar to object composition in the object oriented realm. You will have to choose, in every particular case, what is the programming style which is most suitable for your problem. Defining and constraining data structures Perl 6 has different ways of defining data structures, but also many ways to constrain them so that you can create the most adequate data structure for every problem domain. but, for example, mixes roles or values into a value or a variable: my %not-scalar := %(2 => 3) but Associative[Int, Int]; say %not-scalar.^name; # OUTPUT: «Hash+{Associative[Int, Int]}␤» say %not-scalar.of; # OUTPUT: «Associative[Int, Int]␤» %not-scalar{3} = 4; %not-scalar<thing> = 3; say %not-scalar; # OUTPUT: «{2 => 3, 3 => 4, thing => 3}␤» In this case, but is mixing in the Associative[Int, Int] role; please note that we are using binding so that the type of the variable is the one defined, and not the one imposed by the % sigil; this mixed-in role shows in the name surrounded by curly braces. 
What does that really mean? That role includes two methods, of and keyof; by mixing the role in, the new of will be called (the old of would return Mu, which is the default value type for Hashes). However, that is all it does. It is not really changing the type of the variable, as you can see since we are using any kind of key and values in the next few statements. However, we can provide new functionality to a variable using this type of mixin: role Lastable { method last() { self.sort.reverse[0] } } my %hash-plus := %( 3 => 33, 4 => 44) but Lastable; say %hash-plus.sort[0]; # OUTPUT: «3 => 33␤» say %hash-plus.last; # OUTPUT: «4 => 44␤» In Lastable we use the universal self variable to refer to whatever object this particular role is mixed in; in this case it will contain the hash it is mixed in with; it will contain something else (and possibly work some other way) in other case. This role will provide the last method to any variable it's mixed with, providing new, attachable, functionalities to regular variables. Roles can even be added to existing variables using the does keyword. Subsets can also be used to constrain the possible values a variable might hold; they are Perl 6 attempt at gradual typing; it is not a full attempt, because subsets are not really types in a strict sense, but they allow runtime type checking. It adds type-checking functionality to regular types, so it helps create a richer type system, allowing things like the one shown in this code: subset OneOver where (1/$_).Int == 1/$_; my OneOver $one-fraction = ⅓; say$one-fraction; # OUTPUT: «0.333333␤» On the other hand, my OneOver $= ⅔; will cause a type-check error. Subsets can use Whatever, that is, *, to refer to the argument; but this will be instantiated every time you use it to a different argument, so if we use it twice in the definition we would get an error. In this case we are using the topic single variable,$_, to check the instantiation. Subsetting can be done directly, without the need of declaring it, in signatures. Infinite structures and laziness It might be assumed that all the data contained in a data structure is actually there. That is not necessarily the case: in many cases, for efficiency reasons or simply because it is not possible, the elements contained in a data structure only jump into existence when they are actually needed. This computation of items as they are needed is called reification. # A list containing infinite number of un-reified Fibonacci numbers: my @fibonacci = 1, 1, * + * … ∞; # We reify 10 of them, looking up the first 10 of them with array index: say @fibonacci[^10]; # OUTPUT: «(1 1 2 3 5 8 13 21 34 55)␤» # We reify 5 more: 10 we already reified on previous line, and we need to # reify 5 more to get the 15th element at index 14. Even though we need only # the 15th element, the original Seq still has to reify all previous elements: say @fibonacci[14]; # OUTPUT: «987␤» Above we were reifying a Seq we created with the sequence operator, but other data structures use the concept as well. For example, an un-reified Range is just the two end points. In some languages, calculating the sum of a huge range is a lengthy and memory-consuming process, but Perl 6 calculates it instantly: say sum 1 .. 9_999_999_999_999; # OUTPUT: «49999999999995000000000000␤» Why? Because the sum can be calculated without reifying the Range; that is, without figuring out all the elements it contains. This is why this feature exists. 
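In the same spirit, a hedged sketch: an endpoint-only Range can report cheap properties without reifying any of its elements:

my $r = 1 .. 9_999_999_999_999;
say $r.elems;  # OUTPUT: «9999999999999␤» (computed from the endpoints alone)
say $r.minmax; # OUTPUT: «(1 9999999999999)␤»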
You can even make your own things reify-on-demand, using gather and take: my $seq = gather { say "About to make 1st element"; take 1; say "About to make 2nd element"; take 2; } say "Let's reify an element!"; say$seq[0]; say "Let's reify more!"; say $seq[1]; say "Both are reified now!"; say$seq[^2]; # OUTPUT: # Let's reify an element! # About to make 1st element # 1 # Let's reify more! # About to make 2nd element # 2 # Both are reified now! # (1 2) Following the output above, you can see the print statements inside the gather got executed only when we reified the individual elements while looking up an element. Also note that the elements got reified just once. When we printed the same elements again on the last line of the example, the messages inside gather was no longer printed. This is because the construct used already-reified elements from the Seq's cache. Note that above we assigned the gather to a Scalar container (the $sigil), not the Positional one (the @ sigil). The reason is that the @-sigiled variables are mostly eager. What this means is they reify the stuff assigned to them right away most of the time. The only time they don't do it is when the items are known to be is-lazy, like our sequence generated with infinity as the end point. Were we to assign the gather to a @-variable, the say statements inside of it would've been printed right away. Another way to fully-reify a list, is by calling .elems on it. This is the reason why checking whether a list contains any items is best done by using .Bool method (or just using if @array { … }), since you don't need to reify all the elements to find out if there are any of them. There are times where you do want to fully-reify a list before doing something. For example, the IO::Handle.lines returns a Seq. The following code contains a bug; keeping reification in mind, try to spot it: my$fh = "/tmp/bar".IO.open; my $lines =$fh.lines; close $fh; say$lines[0]; We open a filehandle, then assign return of .lines to a Scalar variable, so the returned Seq does not get reified right away. We then close the filehandle, and try to print an element from $lines. The bug in the code is by the time we reify the$lines Seq on the last line, we've already closed the filehandle. When the Seq's iterator tries to generate the item we've requested, it results in the error about attempting to read from a closed handle. So, to fix the bug we can either assign to a @-sigiled variable or call .elems on $lines before closing the handle: my$fh = "/tmp/bar".IO.open; my @lines = $fh.lines; close$fh; say @lines[0]; # no problem! We can also use any function whose side effect is reification, like .elems mentioned above: my $fh = "/tmp/bar".IO.open; my$lines = $fh.lines; say "Read$lines.elems() lines"; # reifying before closing handle close $fh; say$lines[0]; # no problem! Using eager will also reify the whole sequence: my $fh = "/tmp/bar".IO.open; my$lines = eager $fh.lines; # Uses eager for reification. close$fh; say $lines[0]; Introspection Languages that allow introspection like Perl 6 have functionalities attached to the type system that let the developer access container and value metadata. This metadata can be used in a program to carry out different actions depending on their value. As it is obvious from the name, metadata are extracted from a value or container via the metaclass. 
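As a first, hedged taste of that machinery: all instances of a type share a single metaclass object, so

say "any string".HOW === Str.HOW; # OUTPUT: «True␤»

The longer example that follows digs further into what the metaclass can tell us.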
my$any-object = "random object"; my $metadata =$any-object.HOW; say $metadata.^mro; # OUTPUT: «((ClassHOW) (Any) (Mu))␤» say$metadata.can( $metadata, "uc" ); # OUTPUT: «(uc uc)␤» With the first say we show the class hierarchy of the metamodel class, which in this case is Metamodel::ClassHOW. It inherits directly from Any, meaning any method there can be used; it also mixes in several roles which can give you information about the class structure and functions. But one of the methods of that particular class is can, which we can use to look up whether the object can use the uc (uppercase) method, which it obviously can. However, it might not be so obvious in some other cases, when roles are mixed in directly into a variable. For instance, in the case of %hash-plus defined above: say %hash-plus.^can("last"); # OUTPUT: «(last)␤» In this case we are using the syntactic sugar for HOW.method, ^method, to check if your data structure responds to that method; the output, which shows the name of the methods that match, certifies that we can use it. See also this article on class introspection on how to access class properties and methods, and use it to generate test data for a class; this Advent Calendar article describes the meta-object protocol extensively. 46 Date and time functions Processing date and time in Perl 6 Perl 6 includes several classes that deal with temporal information: Date, DateTime, Instant and Duration. The three first are dateish, so they mix in the Dateish role, which defines all methods and properties that classes that deal with date should assume. It also includes a class hierarchy of exceptions rooted in X::Temporal. We will try to illustrate these classes in the next (somewhat extended) example, which can be used to process all files in a directory (by default .) with a particular extension (by default .p6) in a directory, sort them according to their age, and compute how many files have been created per month and how many were modified in certain periods expressed in ranges of months: sub MAIN($path = ".", $extension = "p6" ) { my DateTime$right = DateTime.now; my %files-month; my %files-period; for dir($path).grep( / \.$extension $/ ) ->$file { CATCH { when X::Temporal { say "Date-related problem", .payload } when X::IO { say "File-related problem", .payload } } my Instant $modified =$file.modified; my Instant $accessed =$file.accessed; my Duration $duration =$accessed - $modified; my$age = $right - DateTime($accessed); my $time-of-day =$file.changed.DateTime.hh-mm-ss but Dateish; my $file-changed-date =$file.changed.Date; %metadata{$file} = %( modified =>$modified, accessed => $accessed, age =>$age, difference => $duration, changed-tod =>$time-of-day, changed-date => $file-changed-date); %files-month{$file-changed-date.month}++; given $file-changed-date { when Date.new("2018-01-01")..^Date.new("2018-04-01") { %files-period<pre-grant>++} when Date.new("2018-04-01")..Date.new("2018-05-31") { %files-period<grant>++} default { %files-period<post-grant>++}; } } %metadata.sort( {$^a.value<age> <=> $^b.value<age> } ).map: { say$^x.key, ", ", $^x.value<accessed modified age difference changed-tod changed-date>.join(", "); }; %files-month.keys.sort.map: { say "Month$^x → %files-month{$^x}" }; %files-period.keys.map: { say "Period$^x → %files-period{$^x}" }; } DateTime is used in line 6 to contain the current date and time returned by now. A CATCH phaser is declared in lines 11 to 15. Its main mission is to distinguish between DateTime-related exceptions and other types. 
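For instance, feeding DateTime a string it cannot parse raises one of the temporal exceptions; a minimal hedged sketch of what the phaser guards against:

{
    my $dt = DateTime.new("not-a-date");
    CATCH {
        when X::Temporal { say "caught ", .^name }
    }
}
# OUTPUT: «caught X::Temporal::InvalidFormat␤»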
This kind of exception can arise from invalid formats or timezone clashes. Barring some corruption of the file attributes, both are impossible, but in any case they should be caught and separated from other types of exceptions.

We use Instants in lines 16-17 to represent the moment in which the files were accessed and modified. An Instant is measured in atomic seconds, and is a very low-level description of a time event; however, the Duration declared in line 18 represents the time elapsed between two different Instants, and we will be using it to represent the age.

For some variables we might be interested in dealing with them through some dateish traits. $time-of-day contains the time of the day the file was changed; changed will return an Instant, but it is converted into a Date (which is Dateish, while Instant is not) and then the time of day is extracted from that. $time-of-day will have the «Str+{Dateish}␤» type.

We will use the date in this variable to find out the period when the files were changed. Date.new("2018-01-01")..^Date.new("2018-04-01") creates a date Range, and $file-changed-date is smartmatched against it. Dates can be used this way; in this case it creates a Range that excludes its last element. This very variable is also used to compute the month of the year when the file was modified. Date is obviously Dateish and thus has the month method to extract that property from it.

Duration objects can be compared. This is used in

%metadata.sort( { $^a.value<age> <=> $^b.value<age> } );

to sort the files by age.

47 Enumeration

An example using the enum type

In Perl 6 the enum type is much more complex than in some other languages, and the details are found in its type description here: enum. This short document will give a simple example of its use, as is the usual practice in C-like languages.

Say we have a program that needs to write to various directories; we want a function that, given a directory name, tests it for (1) its existence and (2) whether it can be written to by the user of the program; this implies that there are three possible states from the user perspective: either you can write (CanWrite), or there is no directory (NoDir), or the directory exists but you cannot write (NoWrite). The results of the test will determine what actions the program takes next.

enum DirStat <CanWrite NoDir NoWrite>;
sub check-dir-status($dir --> DirStat) {
    if $dir.IO.d {
        # dir exists, can the program user write to it?
        my $f = "$dir/.tmp";
        spurt $f, "some text";
        CATCH {
            # unable to write for some reason
            return NoWrite;
        }
        # if we get here we must have successfully written to the dir
        unlink $f;
        return CanWrite;
    }
    # if we get here the dir must not exist
    return NoDir;
}

# test each of three directories by a non-root user
my @dirs =
    '/tmp',  # normally writable by any user
    '/',     # writable only by root
    '~/tmp'; # a non-existent dir in the user's home dir

for @dirs -> $dir {
    my $stat = check-dir-status $dir;
    say "status of dir '$dir': $stat";
}

48 Exceptions

Typed exceptions

Typed exceptions provide more information about the error stored within an exception object. For example, if while executing .zombie copy on an object, a needed path foo/bar becomes unavailable, then an X::IO::DoesNotExist exception can be raised:

die X::IO::DoesNotExist.new(:path("foo/bar"), :trying("zombie copy"))
# RESULT: «Failed to find 'foo/bar' while trying to do '.zombie copy'
# in block <unit> at my-script.p6:1»

Note how the object has provided the backtrace with information about what went wrong. A user of the code can now more easily find and correct the problem.
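Defining your own typed exception follows the same pattern; a hedged sketch with invented names:

class X::MyApp::TooBig is Exception {
    has $.limit;
    method message() { "value exceeds limit $.limit" }
}

sub check-size($n) {
    die X::MyApp::TooBig.new(limit => 10) if $n > 10;
    $n;
}

say check-size(5); # OUTPUT: «5␤»
check-size(50);    # dies: value exceeds limit 10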
Catching exceptions It's possible to handle exceptional circumstances by supplying a CATCH block: die X::IO::DoesNotExist.new(:path("foo/bar"), :trying("zombie copy")); CATCH { when X::IO { $*ERR.say: "some kind of IO exception was caught!" } } # OUTPUT: «some kind of IO exception was caught!» Here, we are saying that if any exception of type X::IO occurs, then the message some kind of IO exception was caught! will be sent to stderr, which is what$*ERR.say does, getting displayed on whatever constitutes the standard error device in that moment, which will probably be the console by default. A CATCH block uses smartmatching similar to how given/when smartmatches on options, thus it's possible to catch and handle various categories of exceptions inside a when block. To handle all exceptions, use a default statement. This example prints out almost the same information as the normal backtrace printer. CATCH { default { $*ERR.say: .payload; for .backtrace.reverse { next if .file.starts-with('SETTING::'); next unless .subname;$*ERR.say: " in block {.subname} at {.file} line {.line}"; } } } Note that the match target is a role. To allow user defined exceptions to match in the same manner, they must implement the given role. Just existing in the same namespace will look alike but won't match in a CATCH block. Exception handlers and enclosing blocks After a CATCH has handled the exception, the block enclosing the CATCH block is exited. In other words, even when the exception is handled successfully, the rest of the code in the enclosing block will never be executed. die "something went wrong ..."; CATCH { # will definitely catch all the exception default { .Str.say; } } say "This won't be said."; # but this line will be never reached since # the enclosing block will be exited immediately # OUTPUT: «something went wrong ...␤» Compare with this: CATCH { CATCH { default { .Str.say; } } die "something went wrong ..."; } say "Hi! I am at the outer block!"; # OUTPUT: «Hi! I am at the outer block!␤» See Resuming of exceptions, for how to return control back to where the exception originated. try blocks A try block is a normal block which implicitly turns on the use fatal pragma and includes an implicit CATCH block that drops the exception, which means you can use it to contain them. Caught exceptions are stored inside the $! variable, which holds a value of type Exception. A normal block like this one will simply fail: { my$x = +"a"; say $x.^name; } # OUTPUT: «Failure␤» However, a try block will contain the exception and put it into the$! variable: try { my $x = +"a"; say$x.^name; } if $! { say "Something failed!" } # OUTPUT: «Something failed!␤» say$!.^name; # OUTPUT: «X::Str::Numeric␤» Any exception that is thrown in such a block will be caught by a CATCH block, either implicit or provided by the user. In the latter case, any unhandled exception will be rethrown. If you choose not to handle the exception, they will be contained by the block. try { die "Tough luck"; say "Not gonna happen"; } try { fail "FUBAR"; } In both try blocks above, exceptions will be contained within the block, but the say statement will not be run. We can handle them, though: class E is Exception { method message() { "Just stop already!" } } try { E.new.throw; # this will be local say "This won't be said."; } say "I'm alive!"; try { CATCH { when X::AdHoc { .Str.say; .resume } } die "No, I expect you to DIE Mr. Bond!"; say "I'm immortal."; E.new.throw; say "No, you don't!"; } Which would output: I'm alive! 
No, I expect you to DIE Mr. Bond!
I'm immortal.
  in block <unit> at exception.p6 line 21

This is because the CATCH block is handling just the X::AdHoc exception thrown by the die statement, but not the E exception. In the absence of a CATCH block, all exceptions will be contained and dropped, as indicated above. resume will resume execution right after the exception has been thrown; in this case, in the die statement. Please consult the section on resuming of exceptions for more information on this.

A try-block is a normal block and as such treats its last statement as the return value of itself. We can therefore use it as a right-hand side.

say try { +"99999" } // "oh no"; # OUTPUT: «99999␤»
say try { +"hello" } // "oh no"; # OUTPUT: «oh no␤»

Try blocks support else blocks indirectly by returning the return value of the expression, or Nil if an exception was thrown.

with try +"♥" { say "this is my number: $_" } else { say "not my number!" }
# OUTPUT: «not my number!␤»

try can also be used with a statement instead of a block:

say try "some-filename.txt".IO.slurp // "sane default";
# OUTPUT: «sane default␤»

What try actually causes is, via the use fatal pragma, an immediate throw of the exceptions that happen within its scope, but by doing so the CATCH block is invoked from the point where the exception is thrown, which defines its scope.

my $error-code = "333";
sub bad-sub {
    die "Something bad happened";
}
try {
    my $error-code = "111";
    bad-sub;
    CATCH {
        default { say "Error $error-code ", .^name, ': ', .Str }
    }
}

Throwing exceptions

Exceptions can be thrown explicitly with the .throw method of an Exception object. This example throws an AdHoc exception, catches it and allows the code to continue from the point of the exception by calling the .resume method.

{
    X::AdHoc.new(:payload<foo>).throw;
    "OHAI".say;
    CATCH {
        when X::AdHoc { .resume }
    }
}
"OBAI".say;
# OUTPUT: «OHAI␤OBAI␤»

If the CATCH block doesn't match the exception thrown, then the exception's payload is passed on to the backtrace printing mechanism.

{
    X::AdHoc.new(:payload<foo>).throw;
    "OHAI".say;
    CATCH {
        when X::NYI { .resume }
    }
}
"OBAI".say;
# RESULT: «foo
# in block <unit> at my-script.p6:1»

This next example doesn't resume from the point of the exception. Instead, it continues after the enclosing block, since the exception is caught, and then control continues after the CATCH block.

{
    X::AdHoc.new(:payload<foo>).throw;
    "OHAI".say;
    CATCH {
        when X::AdHoc { }
    }
}
"OBAI".say;
# OUTPUT: «OBAI␤»

throw can be viewed as the method form of die, just that in this particular case, the sub and method forms of the routine have different names.

Resuming of exceptions

Exceptions interrupt control flow and divert it away from the statement following the statement that threw it. Any exception handled by the user can be resumed and control flow will continue with the statement following the statement that threw the exception. To do so, call the method .resume on the exception object.

CATCH { when X::AdHoc { .resume } }         # this is step 2
die "We leave control after this.";         # this is step 1
say "We have continued with control flow."; # this is step 3

Resuming will occur right after the statement that has caused the exception, and in the innermost call frame:

sub bad-sub {
    die "Something bad happened";
    return "not returning";
}
{
    my $return = bad-sub;
    say "Returned $return";
    CATCH {
        default {
            say "Error ", .^name, ': ', .Str;
            $return = '0';
            .resume;
        }
    }
}
# OUTPUT:
# Error X::AdHoc: Something bad happened
# Returned not returning

In this case, .resume is getting to the return statement that happens right after the die statement.
Please note that the assignment to $return has no effect, since the CATCH statement is happening inside the call to bad-sub, which, via the return statement, assigns the not returning value to it.

Uncaught exceptions

If an exception is thrown and not caught, it causes the program to exit with a non-zero status code, and typically prints a message to the standard error stream of the program. This message is obtained by calling the gist method on the exception object. You can use this to suppress the default behavior of printing a backtrace along with the message:

class X::WithoutLineNumber is X::AdHoc {
    multi method gist(X::WithoutLineNumber:D:) {
        $.payload
    }
}
die X::WithoutLineNumber.new(payload => "message")
# prints "message\n" to $*ERR and exits, no backtrace

Control exceptions

Control exceptions are thrown by certain keywords and are handled either automatically or by the appropriate phaser. Any unhandled control exception is converted to a normal exception.

{ return; CATCH { default { $*ERR.say: .^name, ': ', .Str } } }
# OUTPUT: «X::ControlFlow::Return: Attempt to return outside of any Routine␤»
# was CX::Return

49 Functions

Functions and functional programming in Perl 6

Routines are one of the means Perl 6 has to reuse code. They come in several forms, most notably methods, which belong in classes and roles and are associated with an object; and functions (also called subroutines or subs, for short), which can be called independently of objects. Subroutines default to lexical (my) scoping, and calls to them are generally resolved at compile time.

Subroutines can have a signature, also called a parameter list, which specifies which, if any, arguments the subroutine expects. It can specify (or leave open) both the number and types of arguments, and the return value.

Introspection on subroutines is provided via Routine.

Defining/Creating/Using functions

Subroutines

The basic way to create a subroutine is to use the sub declarator followed by an optional identifier:

sub my-func { say "Look ma, no args!" }
my-func;

The sub declarator returns a value of type Sub that can be stored in any container:

my &c = sub { say "Look ma, no name!" };
c;     # OUTPUT: «Look ma, no name!␤»

my Any:D $f = sub { say 'Still nameless...' };
$f();  # OUTPUT: «Still nameless...␤»

my Code \a = sub { say ‚raw containers don't implement postcircumfix:<( )>‘ };
a.();  # OUTPUT: «raw containers don't implement postcircumfix:<( )>␤»

The declarator sub will declare a new name in the current scope at compile time. As such, any indirection has to be resolved at compile time:

constant aname = 'foo';
sub ::(aname) { say 'oi‽' };
foo;

This will become more useful once macros are added to Perl 6.

To have the subroutine take arguments, a signature goes between the subroutine's name and its body, in parentheses:

sub exclaim ($phrase) {
    say $phrase ~ "!!!!"
}
exclaim "Howdy, World";

By default, subroutines are lexically scoped. That is, sub foo {...} is the same as my sub foo {...} and is only defined within the current scope.

sub escape($str) {
    # Puts a slash before non-alphanumeric characters
    S:g[<-alpha -digit>] = "\\$/" given $str
}
say escape 'foo#bar?'; # OUTPUT: «foo\#bar\?␤»
{
    sub escape($str) {
        # Writes each non-alphanumeric character in its hexadecimal escape
        S:g[<-alpha -digit>] = "\\x[{ $/.ord.base(16) }]" given $str
    }
    say escape 'foo#bar?' # OUTPUT: «foo\x[23]bar\x[3F]␤»
}
# Back to original escape function
say escape 'foo#bar?'; # OUTPUT: «foo\#bar\?␤»

Subroutines don't have to be named. If unnamed, they're called anonymous subroutines.
say sub ($a, $b) {$a ** 2 + $b ** 2 }(3, 4) # OUTPUT: «25␤» But in this case, it's often desirable to use the more succinct block syntax. Subroutines and blocks can be called in place, as in the example above. say ->$a, $b {$a ** 2 + $b ** 2 }(3, 4) # OUTPUT: «25␤» Or even say {$^a ** 2 + $^b ** 2 }(3, 4) # OUTPUT: «25␤» Blocks and lambdas Whenever you see something like {$_ + 42 }, -> $a,$b { $a **$b }, or { $^text.indent($:spaces) }, that's Block syntax. It's used after every if, for, while, etc. for 1, 2, 3, 4 -> $a,$b { say $a ~$b; } # OUTPUT: «12␤34␤» They can also be used on their own as anonymous blocks of code. say { $^a ** 2 +$^b ** 2}(3, 4) # OUTPUT: «25␤» For block syntax details, see the documentation for the Block type. Signatures The parameters that a function accepts are described in its signature. sub format (Str $s) { ... } ->$a, $b { ... } Details about the syntax and use of signatures can be found in the documentation on the Signature class. Automatic signatures If no signature is provided but either of the two automatic variables @_ or %_ are used in the function body, a signature with *@_ or *%_ will be generated. Both automatic variables can be used at the same time. sub s { say @_, %_ }; say &s.signature # OUTPUT: «(*@_, *%_)␤» Arguments Arguments are supplied as a comma separated list. To disambiguate nested calls, use parentheses: sub f(&c){ c() * 2 }; # call the function reference c with empty parameter list sub g($p){ $p - 2 }; say(g(42), 45); # pass only 42 to g() When calling a function, positional arguments should be supplied in the same order as the function's signature. Named arguments may be supplied in any order, but it's considered good form to place named arguments after positional arguments. Inside the argument list of a function call, some special syntax is supported: sub f(|c){}; f :named(35); # A named argument (in "adverb" form) f named => 35; # Also a named argument f :35named; # A named argument using abbreviated adverb form f 'named' => 35; # Not a named argument, a Pair in a positional argument my \c = <a b c>.Capture; f |c; # Merge the contents of Capture$c as if they were supplied Arguments passed to a function are conceptually first collected in a Capture container. Details about the syntax and use of these containers can be found in the documentation on the Capture class. When using named arguments, note that normal List "pair-chaining" allows one to skip commas between named arguments. sub f(|c){}; f :dest</tmp/foo> :src</tmp/bar> :lines(512); f :32x :50y :110z; # This flavor of "adverb" works, too f :a:b:c; # The spaces are also optional. Return values Any Block or Routine will provide the value of its last expression as a return value to the caller. If either return or return-rw is called, then its parameter, if any, will become the return value. The default return value is Nil. sub a { 42 }; sub b { say a }; sub c { }; b; # OUTPUT: «42␤» say c; # OUTPUT: «Nil␤» Multiple return values are returned as a list or by creating a Capture. Destructuring can be used to untangle multiple return values. 
sub a { 42, 'answer' }; put a.perl; my ($n,$s) = a; put [$s,$n]; sub b { <a b c>.Capture }; put b.perl; # OUTPUT: «\("a", "b", "c")␤» Return type constraints Perl 6 has many ways to specify a function's return type: sub foo(--> Int) {}; say &foo.returns; # OUTPUT: «(Int)␤» sub foo() returns Int {}; say &foo.returns; # OUTPUT: «(Int)␤» sub foo() of Int {}; say &foo.returns; # OUTPUT: «(Int)␤» my Int sub foo() {}; say &foo.returns; # OUTPUT: «(Int)␤» Attempting to return values of another type will cause a compilation error. sub foo() returns Int { "a"; }; foo; # Type check fails returns and of are equivalent, and both take only a Type since they are declaring a trait of the Callable. The last declaration is, in fact, a type declaration, which obviously can take only a type. -->, however, can take either undefined or definite values. Note that Nil and Failure are exempt from return type constraints and can be returned from any routine, regardless of its constraint: sub foo() returns Int { fail }; foo; # Failure returned sub bar() returns Int { return }; bar; # Nil returned Multi-dispatch Perl 6 allows for writing several routines with the same name but different signatures. When the routine is called by name, the runtime environment determines the proper candidate and invokes it. Each candidate is declared with the multi keyword. Dispatch happens depending on the number (arity), type and name of arguments. Consider the following example: # version 1 multi happy-birthday( $name ) { say "Happy Birthday$name !"; } # version 2 multi happy-birthday( $name,$age ) { say "Happy {$age}th Birthday$name !"; } # version 3 multi happy-birthday( :$name, :$age, :$title = 'Mr' ) { say "Happy {$age}th Birthday $title$name !"; } # calls version 1 (arity) happy-birthday 'Larry'; # OUTPUT: «Happy Birthday Larry !␤» # calls version 2 (arity) happy-birthday 'Luca', 40; # OUTPUT: «Happy 40th Birthday Luca !␤» # calls version 3 # (named arguments win against arity) happy-birthday( age => '50', name => 'John' ); # OUTPUT: «Happy 50th Birthday Mr John !␤» # calls version 2 (arity) happy-birthday( 'Jack', 25 ); # OUTPUT: «Happy 25th Birthday Jack !␤» The first two versions of the happy-birthday sub differs only in the arity (number of arguments), while the third version uses named arguments and is chosen only when named arguments are used, even if the arity is the same of another multi candidate. When two sub have the same arity, the type of the arguments drive the dispatch; when there are named arguments they drive the dispatch even when their type is the same as another candidate: multi happy-birthday( Str $name, Int$age ) { say "Happy {$age}th Birthday$name !"; } multi happy-birthday( Str $name, Str$title ) { say "Happy Birthday $title$name !"; } multi happy-birthday( Str :$name, Int :$age ) { say "Happy Birthday $name, you turned$age !"; } happy-birthday 'Luca', 40; # OUTPUT: «Happy 40th Birthday Luca !␤» happy-birthday 'Luca', 'Mr'; # OUTPUT: «Happy Birthday Mr Luca !␤» happy-birthday age => 40, name => 'Luca'; # OUTPUT: «Happy Birthday Luca, you turned 40 !␤» Named parameters participate in the dispatch even if they are not provided in the call. Therefore a multi candidate with named parameters will be given precedence. multi as-json(Bool $d) {$d ?? 'true' !! 
'false'; } multi as-json(Real $d) { ~$d } multi as-json(@d) { sprintf '[%s]', @d.map(&as-json).join(', ') } say as-json( True ); # OUTPUT: «true␤» say as-json( 10.3 ); # OUTPUT: «10.3␤» say as-json( [ True, 10.3, False, 24 ] ); # OUTPUT: «[true, 10.3, false, 24]␤» multi without any specific routine type always defaults to a sub, but you can use it on methods as well. The candidates are all the multi methods of the object: class Congrats { multi method congratulate($reason,$name) { say "Hooray for your $reason,$name"; } } role BirthdayCongrats { multi method congratulate('birthday', $name) { say "Happy birthday,$name"; } multi method congratulate('birthday', $name,$age) { say "Happy {$age}th birthday,$name"; } } my $congrats = Congrats.new does BirthdayCongrats;$congrats.congratulate('promotion','Cindy'); # OUTPUT: «Hooray for your promotion, Cindy␤» \$congrats.congratulate('birthday','Bob'); # OUTPUT: «Happy birthday, Bob␤» Unlike sub, if you use named parameters with multi methods, the parameters must be required parameters to behave as expected. Please note that a non-multi sub or operator will hide multi candidates of the
https://ask.sagemath.org/question/26764/how-to-build-a-matrix-thought-of-as-an-array-of-smaller-matrices/
# How to build a matrix thought of as an array of smaller matrices?

Say I am given a data set which looks like $[ (0,2,A), (0,3,B), (1,2,C), (1,4,D) ]$ where $A,B,C,D$ are matrices all of the same dimension, say $k$. (The data set will always have unique pairs of integers - as in, if the (1,2,) tuple occurs then the (2,1,) tuple will not occur.) Now I want to create a 4x4 matrix, say X, of dimension $4k$, thought of as a 4x4 array of k-dimensional matrices. The arrays in $X$ are to be defined as $X(0,2) = A, X(2,0) = A^{-1}, X(0,3) = B, X(3,0) = B^{-1}, X(1,2) = C, X(2,1) = C^{-1}, X(1,4) = D, X(4,1) = D^{-1}$ and all other array positions in $X$ are to be filled in with $0$ matrices of dimension $k$.

• How can one create such an X on SAGE? X is a matrix of matrices and I am not sure how one can define this on SAGE. Like saying "X(0,3) = B" is not going to make any obvious sense to SAGE. I necessarily need X to be a matrix so that I can later, say, calculate its characteristic polynomial.

[I showed this above example with just $4$ tuples. I want to eventually do it with much larger data sets]

The block_matrix construction will do what you want. An example follows. I first set up a dictionary containing data of the kind you discussed (I'm assuming that the final entry in your data set should be $(1,3,D)$ if you want a $4k$ by $4k$ matrix):

sage: d = {}
sage: d[0, 2] = matrix([[5, 11], [1, 2]])
sage: d[0, 3] = matrix([[2, 3], [1, 1]])
sage: d[1, 2] = matrix([[-1, 3], [0, -1]])
sage: d[1, 3] = matrix([[4, 9], [-1, -2]])

Then I defined a 4 by 4 array of zero matrices and put the data matrices in the appropriate positions:

sage: m = [[matrix(2, 2, 0)]*4 for _ in range(4)]
sage: for i in range(4):
....:     for j in range(4):
....:         if (i, j) in d:
....:             m[i][j] = d[i, j]
....:         elif (j, i) in d:
....:             m[i][j] = d[j, i].inverse()

Now

sage: block_matrix(m)
[ 0  0| 0  0| 5 11| 2  3]
[ 0  0| 0  0| 1  2| 1  1]
[-----+-----+-----+-----]
[ 0  0| 0  0|-1  3| 4  9]
[ 0  0| 0  0| 0 -1|-1 -2]
[-----+-----+-----+-----]
[-2 11|-1 -3| 0  0| 0  0]
[ 1 -5| 0 -1| 0  0| 0  0]
[-----+-----+-----+-----]
[-1  3|-2 -9| 0  0| 0  0]
[ 1 -2| 1  4| 0  0| 0  0]

All the usual matrix methods are available, e.g.,

sage: block_matrix(m).rank()
6

(1) Why does $d[0,2]$ make sense? $d$ is not a matrix but just an empty list. If assigning matrices to $d$'s tuple coordinates makes sense, then why not just read the data list and assign the appropriate matrices to d's corresponding positions?

(2) Can you explain this "m = [[matrix(2, 2, 0)]*4 for _ in range(4)]"? What exactly is this doing? Is m not defined as a matrix at this step?

(3) And why is "d={}" different from starting as "d=[]"?

This is mostly fairly basic Python syntax: (1) d is not an empty list; it is a dictionary. (2) m is a list of four lists of four 2 by 2 zero matrices. It is not a matrix, but (after it is modified) it is used to create a (block) matrix. The command could have been written as m = [[matrix(2, 2, 0) for j in range(4)] for i in range(4)] (3) d = {} creates an empty dictionary; d = [] creates an empty list.
https://socratic.org/questions/lithium-metal-will-react-readily-with-sulfuric-acid-which-of-the-following-is-co
# Lithium metal will react readily with sulfuric acid. Which of the following is the correct statement about the quantities of each reactant needed to produce the maximum amount of products without any reactant remaining?

## Answer is: a) The mass ratio of lithium to sulfuric acid is 1:7 Why?

Aug 2, 2017

Here's why that is the case.

#### Explanation:

Lithium metal will react with sulfuric acid to produce lithium sulfate and hydrogen gas as described by the balanced chemical equation

$$2\,\text{Li}_{(s)} + \text{H}_2\text{SO}_{4(aq)} \rightarrow \text{Li}_2\text{SO}_{4(aq)} + \text{H}_{2(g)} \uparrow$$

Notice that the reaction consumes $1$ mole of sulfuric acid for every $2$ moles of lithium that take part in the reaction. This means that the two reactants take part in the reaction in a $2:1$ mole ratio.

As you know, you can convert this mole ratio to a gram ratio by using the molar masses of the two reactants. In this case, you will have

- $M_{\text{M Li}} = 6.941\ \text{g mol}^{-1}$
- $M_{\text{M H}_2\text{SO}_4} = 98.079\ \text{g mol}^{-1}$

You can thus say that the $2:1$ mole ratio will be equivalent to

$$\frac{2\ \text{moles Li} \times 6.941\ \text{g mol}^{-1}}{1\ \text{mole H}_2\text{SO}_4 \times 98.079\ \text{g mol}^{-1}} = \frac{13.882\ \text{g Li}}{98.079\ \text{g H}_2\text{SO}_4}$$

This, of course, is equal to

$$\frac{13.882\ \text{g Li}}{98.079\ \text{g H}_2\text{SO}_4} = \frac{1\ \text{g Li}}{7.065\ \text{g H}_2\text{SO}_4} \approx \frac{1\ \text{g Li}}{7\ \text{g H}_2\text{SO}_4}$$

Therefore, you can say that the two reactants take part in this reaction in a $2:1$ mole ratio and in a $1:7$ gram ratio, or mass ratio, which means that for every gram of lithium that takes part in the reaction, the reaction consumes $7\ \text{g}$ of sulfuric acid.
https://www.physicsforums.com/threads/do-profs-steal-scoop-ideas-from-their-students.687629/
# Do profs steal ( scoop ) ideas from their students? 1. Apr 24, 2013 ### Geezer Do profs steal ("scoop") ideas from their students? I'm currently a second-year grad student in physics looking for a PhD advisor. This is my second time shopping around for an advisor. I thought I had had a group at the beginning of the year, but within a few months it became clear that that group wasn't a good fit for me and I left. So here I am. Again. During the time between leaving the old group and now, I developed a "theory" (for lack of a better word), researched previous work on the topic, worked through quite a bit of math, and wrote it up in LaTeX. I've also had a couple other grad students read my write-up and both have given me good reviews and encouraged me to get profs to read it, so I'm pretty sure my idea doesn't suck. I'm hoping this paper is sufficiently good to attract a good PhD advisor. That said, I'm worried about getting "scooped." One of the grad students who read my paper specifically said, "Do not show this to Professor X. He'll steal it." He then went on to tell me the story of how Professor X blatantly stole an idea from a former PhD student. So, clearly I shouldn't show my write-up to Professor X, but what else can I do to prevent getting "scooped"? I'd like to do something to establish that this idea is original to me, but don't think it's ready for publication yet. Even putting it up on arXiv seems a tad premature. Any ideas? 2. Apr 24, 2013 ### DrummingAtom That's a scary situation. Professor X will be able to steal any knowledge he wants from you already. Unless you're wearing a helmet equipped with technologies to block his mind probing. Good luck. 3. Apr 24, 2013 ### Andy Resnick This ethical problem is common- not the *actual* stealing of a student's idea, but the *perceived potential* to steal a student's idea. You have many options, actually: 1) keep a signed and dated lab notebook, which provides documentation that you originated the idea, and on what date you originated the idea. 2) discuss your concerns with a neutral party: the Department Chair, another trusted faculty member, etc. 3) submit your results to a peer-reviewed journal etc. etc. 4. Apr 24, 2013 ### Choppy The point of having a supervisor is that he or she will mentor you as a researcher. It's great that you have some ideas and have done some work on your own. Ideally, a PhD supervisor will look at that, assess it, and give you critical feedback and guidance on what you've done. In this sense your supervisor becomes a collaborator who makes an active and substantial contribution to the work. There's should be no need to "steal" the idea, because if the relationship works like this, the supervisor should have his or her name on it when it gets published anyway. That said the real world isn't always ideal. I'm sure there are cases where professors have done exactly that you are worried about. One of the things you do when you're picking a supervisor is figure out which professors you are likely to do the most constructive work with. If a particular professor has a reputation for "stealing" his or her students' ideas - publishing them without giving the student credit - then this is a flag to avoid that professor as a supervisor. 5. Apr 24, 2013 ### Timo My proposition would be You're already worried about professors at your university "scooping" your great ideas. But science, assuming you want to stay in science after your PhD, has many more paranoia-inducing scenarios to offer. 
Give a presentation about your work? Dozens of potential scoopers listening and even taking notes. Chat with a colleague? Be careful what you say, he/she might get a good idea and get it published before you. Publishing in a journal? Tough luck, that doesn't guarantee you being recognized: Some big-shot in the field may just work on the same idea with several people, add some experimental evidence or crappy Monte-Carlo simulation and publish in a higher-ranking journal. Who do you think will have the impact (*)? If you are afraid of openly sharing your ideas, reconsider going towards an academic scientific career. Open exchange is an important and valuable aspect of university research. That said: If you have a bad feeling about discussing your notes with professor X, then of course you should not do it. (*) On a slightly related issue: I've had a very hard-working colleague who got lots good results. When he was looking around for a junior post-doc (first post-doc outside the university he did his PhD at) his current boss told him to reconsider joining the group of a particular big shot in the field. Not because he'd steal ideas - no senior scientist believes in other senior scientists stealing ideas from their employees. But because everything he'd publish while in this group would be attributed to the big shot in the perception of the science community, not to the no-name post-doc. 6. Apr 24, 2013 ### mathwonk The only way to establish credit for an idea is to publish it. However once you publish it, everyone is free to use it to go further. So preferably it should be somewhat mature when you publish it. E.g. if an idea has significant consequences later, the person who publishes the more significant consequence gets more credit usually. But at least the origin of the idea has been made clear. So you should be working on this idea as hard as you can now. Another possibility is to establish a working relationship with someone more mature who can help flesh it out and with whom you will be willing to share credit. But probably not Professor X. The previous post is also true. I.e. sharing credit with a more famous person reduces the credit for oneself, as it is usually assumed that the more mature or stronger worker had the main idea. It is not at all uncommon for ones ideas to become "shared", but usually you will have more than one idea and at some point you will get credit. However the main remedy is to work hard on your own idea and publish it. Unfortunately students may get in the habit of discussing their ideas openly and not working on them as hard as competitive professionals, because while in school they are somewhat protected. In any case, one should keep working on ones idea, and even if someone else scoops it, work it out in your own way and publish it anyway. Always publish your own work. A less effective method is to send a preliminary writeup of your work to someone honest and sympathetic. Then at least that person will know the work and the priority was yours. What really matters is to focus on the work, not the recognition, although this is hard, in the general competition for jobs, students, and funding. 7. Apr 24, 2013 ### Staff: Mentor 8. Apr 24, 2013 Staff Emeritus In addition to what Mathwonk said, good ideas are commonplace, and really are not worth stealing. It's the development of these ideas where the "value added" comes in. That's more worth stealing, but usually harder to steal. 9. 
Apr 24, 2013 ### Staff: Mentor
This crops up in work too, where a colleague will disparage your idea and then sometime later present it as his/her own without your knowledge. Or a boss will strip your name off as the idea goes up the tree and he gets the credit.
One time at GE, I got an Ideas award for a database app I wrote to manage report labels for the Honeywell 6000 printers. My reward was computed based on the savings and the resources I used to accomplish it. I got like $100 for it. Later one of my coworkers took my code as-is, applied it to another, older computer system, and got $500 for it since he didn't need to use any resources. I felt somewhat cheated, as I had asked him about including his labels in with the original system and he wasn't interested at the time. I guess I was too green to understand the way things worked at work.
With respect to the prof stealing: sometimes a prof will have worked on the problem he's proposing to you but doesn't share his work with you initially, because he wants you to discover some things for yourself first. That may sometimes be misconstrued as him stealing "your" ideas, which were really his, that he skillfully planted in you.
http://mathhelpforum.com/advanced-algebra/121544-disjoint-sets-print.html
# Disjoint sets

• December 24th 2009, 07:32 AM
Disjoint sets
How can we show that A/B, B/A and A and (can't do the upside-down U symbol) B are disjoint sets?

• December 24th 2009, 07:59 AM
Plato
Quote:
How can we show that A/B, B/A and A and (can't do the upside-down U symbol) B are disjoint sets?
Is your task to show that $A\setminus B,~B\setminus A,~\&~A\cap B$ are pair-wise disjoint?

• December 24th 2009, 10:06 AM
Quote:
Originally Posted by Plato
Is your task to show that $A\setminus B,~B\setminus A,~\&~A\cap B$ are pair-wise disjoint?
This is what I'm confused about as well; it says: Show that $A\setminus B,~B\setminus A,~\&~A\cap B$ are disjoint sets.

• December 24th 2009, 10:19 AM
Plato
Quote:
This is what I'm confused about as well; it says: Show that $A\setminus B,~B\setminus A,~\&~A\cap B$ are disjoint sets.
Say that $x\in A\setminus B$; that means that $x\in A,~x\notin B$. Thus is it at all possible for $x\in B\setminus A,\text{ or }x\in A\cap B?$

• December 24th 2009, 10:22 AM
Dinkydoe
If you want to show that 3 sets A, B, C are pairwise disjoint, then you can show the following implications: $x\in A \Rightarrow x\notin B, x\in B\Rightarrow x\notin C, x\in C\Rightarrow x\notin A$
For example: $x\in A\setminus B \Rightarrow x\notin B \Rightarrow x\notin B\setminus A$. Thus $A\setminus B$ and $B\setminus A$ are disjoint sets.
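The thread breaks off before the remaining two pairs are checked; a minimal completion in the same notation (this step is mine, not a post from the thread): if $x\in A\cap B$, then $x\in B$, so $x\notin A\setminus B$; hence $(A\setminus B)\cap(A\cap B)=\emptyset$. Symmetrically, $x\in A\cap B$ gives $x\in A$, so $x\notin B\setminus A$; hence $(B\setminus A)\cap(A\cap B)=\emptyset$. Together with $(A\setminus B)\cap(B\setminus A)=\emptyset$ from Dinkydoe's implication, the three sets are pairwise disjoint.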
https://www.physicsforums.com/threads/i-hate-error-handling.142440/
# I hate error handling

1. Nov 6, 2006 ### 0rthodontist
Error handling obfuscates code. For someone reading your code, what your program does when there is an error is usually unimportant -- at first they care only about what it does when it works properly, and maybe later they'll come back to the errors, but understanding your error handling is a very low priority for someone reading your program. Furthermore, it's ugly. Throwing a bunch of trys and catches into a neat, clean block of code completely ruins it, as well as probably tripling its length. So what work has been done on separating error handling from code?

2. Nov 6, 2006 ### chroot Staff Emeritus
There's not a whole lot that can be done theoretically. Some errors need to be detected and handled within the bowels of an otherwise very clean algorithm -- and the desire to maintain code locality means that that error-handling code needs to be inserted right there in the algorithm. You would not want it to be hidden in some separate file. On the other hand, it's best to handle errors at the highest level possible. If you can allow exceptions to propagate out of entire blocks of code, you should.
There are only a couple of ways to really "separate" error handling code. Validation of arguments is a good way to clean things up -- put all your type- and bounds-checking up at the start of each block. The rest of your code can then tacitly assume no errors due to invalid inputs are possible. Also, contractual assertions can be grouped together at the bottom of blocks of code. These contractual assertions pretty much just make sure the block above them did what it was intended to do. Both of these techniques require very good design skills, though; few people outside of academia really write their code with such foresight.
You might be interested in some of the "cleanroom" software engineering principles, which can be used to mathematically prove that code does what it's intended to do, and thus eliminate many of the possibilities of exceptions. They require a significant investment of labor, however.
- Warren
Last edited: Nov 6, 2006

3. Nov 6, 2006 ### NoTime
Yes, I think what you said is a good overview. It would be my experience that very few people fresh out of school address the problem at all, let alone "with foresight".

4. Nov 6, 2006 ### Hurkyl Staff Emeritus
You can always look at it as a presentation issue. Use an editor that lets you fold away things that you don't want to look at. (Admittedly, I've been unhappy with what folding I've seen available, but I haven't explored them much. Maybe you should make a project to invent a good editor with useful folding ability.)

5. Nov 6, 2006 ### -Job-
You know, you can have multiple catches per try. :) I actually like try/catch statements, they're better than checking for error conditions with if statements, for example.

6. Nov 7, 2006 ### 0rthodontist
Yes, I was thinking along the lines of an editor feature. But some programmers prefer to work with plain text in a plain text editor -- me not among them -- so I was also wondering if anyone's done research on, maybe, applying an exception-handling "schema" to code that doesn't have error checking. I once talked to someone who had the opinion that error handling should not be used as program control -- that it should be used for only the things you can't detect otherwise, because of the overhead of throwing and catching an exception.
At the time I disagreed, on the grounds that I'd rather write the "mainstream" of my code, then toss in some easy error handling for the cases where it doesn't work. I still believe, in regard to both exceptions and "unusual cases," that code should be written as if it worked and then patched for the cases where it doesn't work. But I think that the patch should be external -- that someone should be able to read only the mainstream of your code, which gives them the gist of what it does, and only afterwards read what happens in special cases.
Also, thinking about this just now, separating special cases from "mainstream" code might help a programmer write the program, too. By separating special cases into a different section, it might enable the programmer to review the special cases as an entity all their own and decide more easily whether any error is still unchecked. I think that a clue towards making something like that happen is finding ways to restrict what "special-case" or exception code should be able to do.
Last edited: Nov 7, 2006

7. Nov 7, 2006 ### chroot Staff Emeritus
This is a very bad idea. You generally want to promote code locality -- you want everything involved in a single task to be in the same place, where it can all be read together. Besides, there are so many distinct ways that a complex piece of code can fail that separating the error handling in the fashion you desire would make it extremely difficult for a reader to piece it all together.
What you want -- a program without any kind of interwoven exception handling -- is a pipe dream. There are ways to hide or minimize the error handling, using editor features or design techniques like contracts. There are zero-defect techniques that can be used to formally prove that a piece of code has no exceptions which need to be handled -- but you want to be able to look at an arbitrary piece of complex code, written without the burden of formal verification, without any of the "clutter" of error handling. I think this is a pipe dream, because what truly separates a good programmer from an excellent programmer is his/her awareness of the failure modes of each line of code written. A programmer who intends to write a piece of code as if nothing can go wrong, and only later go in and "insert" error handling as an afterthought, is a poor programmer.
- Warren

8. Nov 7, 2006 ### 0rthodontist
Nonsense -- planning the main idea of your code first before writing the special cases is just another incarnation of top-down design. It is a good idea to focus on the big picture before worrying about the details. This applies when you're writing code as well as when you're reading it. Why not reflect that as a language feature?
I agree that repeatedly referring to an outside section of code for exception handling is a bad idea. The challenge would be to make the syntax for an exception-handling schema simple and clear enough that you can read it once and don't often need to refer back to it, but powerful enough that it can do the things it needs to do.
I don't believe that the failure modes of a particular line of code are a high priority. They're certainly important, but secondary. Understanding how the code actually works, when it does work, is the essential step 1 for making or reading it. Surely you are not proposing that it is more important, on a first reading, to understand the 1% of the time when flawed code fails than the 99% when it works?
Even if your goal is to fix the flawed code, do you really want to look at the failures before you understand how the program's even intended to run?
There is another advantage to separating exceptions and special cases from the mainstream code: doing so promotes the concept of exceptions and special cases. It creates a clear, abstract distinction between exceptions and special cases, and the main code. A change to code can then be looked at as either a "special case" alteration or a "mainstream" alteration. Special cases tend to be local and trivial, and easier to change without affecting mainstream code. Mainstream code, being the actual design of the application, is much harder to change, and the programmer should be aware of which he is dealing with at a given time. Separating special cases introduces that abstraction.

9. Nov 7, 2006 ### Hurkyl Staff Emeritus
I posit that writing code is considerably more important than reading code.

10. Nov 7, 2006 ### chroot Staff Emeritus
I mean no offense, Orthodontist, but you're a student, right? It seems every student goes through a phase where they think they can design a better language...
- Warren

11. Nov 7, 2006 ### Sane
I don't understand where your issue with readability in error-handling applies. In the function, raising the error will be very clean, and actually provide insight into the limitations, restraints, and conditions of the procedure, hence more readable code. On the other hand, in the implementation where some obfuscation may occur, as an error-excepting handler appears for every function call... this all occurs to the extent of the programmer's needs. If it ever becomes obfuscated beyond the point of readability, either it's complicated enough to disregard the possibility of language-related syntactical anomalies, or the programmer who's implementing the functions is being redundant or over-protective of the performance of his code.
Last edited: Nov 7, 2006

12. Nov 8, 2006 ### -Job-
I agree with Sane on this.

13. Nov 8, 2006 ### 0rthodontist
It is a true fact that most programming work is code maintenance, not creation.
I don't think it's a very common interest. Anyway, there are a lot of languages, and a lot of fairly new languages, so it's not an unrealistic goal.
Consider the following function, not written by me:
Code (Text):
def wget_link(self,url,flag,parent_url="",ignore_pr=False):
    """
    Download url and all requisite pages
    First Create new dir or use old parent dir
    """
    dir=md5dir(self.subdir,url,parent_url)
    if dir==None:
        print 'md5dir came back with none!'
        sys.exit()
    #print "\twgetting %s" % url
    # Retrieval of images from yahoo doesn't use the printer ready flag, html does
    if ignore_pr:
        pr=''
    wget_command = ('/usr/bin/wget -E -nv -H -k -p %s --no-host-directories --no-directories -P%s \"%s%s\"' % (flag,dir,url,pr))
    (status,output)=commands.getstatusoutput(wget_command)
    if (status>0):
        self.pr ("Wget failed! \n\t%s" % output)
        return()
    # see what wget renamed the file as
    try:
        link=output.split(' URL:')[1]
    except:
        self.pr ("WGET ERROR:\ncommand:\n%s\nURL:%s\n--\n%s\n--\nCould not determine new wget link: %s" % (wget_command,url,output,sys.exc_info()[0]))
        return('')
Consider its equivalent with no error checking:
Code (Text):
"""
Download url and all requisite pages
First Create new dir or use old parent dir
"""
dir=md5dir(self.subdir,url,parent_url)
# Retrieval of images from yahoo doesn't use the printer ready flag, html does
wget_command = ('/usr/bin/wget -E -nv -H -k -p %s --no-host-directories --no-directories -P%s \"%s%s\"' % (flag,dir,url,pr))
(status,output)=commands.getstatusoutput(wget_command)
# see what wget renamed the file as
link=output.split(' URL:')[1]
It's clearer. Actually the essence of this function could be 2 lines without error checking, if you ask the caller to make the directory -- namely just a wget and a return of the URL -- vastly reducing the time needed to figure out what the hell is going on. Also, you really only need 2 arguments, dir and url, not 4.
Last edited: Nov 8, 2006

14. Nov 8, 2006 ### 0rthodontist
Why can't I edit?
Edit: I mean, why can't I post any Python code?
Edit: I can if I replace the spaces with black-colored ones...
Last edited: Nov 8, 2006

15. Nov 8, 2006 ### Sane
Now yet again, you are splitting hairs... Focusing on that Python code you posted, that is a poor example to prove a point. The error handling is not filtering what kind of error it handles. Using the except command without any statement following it will direct any error to the excepted code block. If the code were written properly, it would be clear and concise as to what error it's excepting and handling.
This is definitely more readable. To think otherwise is either ignorant, or just plain silly.

16. Nov 8, 2006 ### 0rthodontist
It would be a bit clearer if the type of the error were included, but that's not the main problem. Consider this function:
Code (Text):
def retrieve (self):
    try:
        mod=self.current.modified
    except:
        mod=''
    self.new = feedparser.parse(self.url,modified=mod)
    try:
        if self.new.status == 304:
            return()
    except:
        self.pr('No status variable... no internet access?')
        self.new.status=304
        return()
    if self.new.bozo>0:
        self.pr('Failed to retrieve (proper) rss feed:\n\t%s' % (self.new.bozo_exception))
    if self.new.feed.has_key('lastbuilddate'):
        try:
            self.new.lbd = time.strptime(self.new.feed['lastbuilddate'],"%a, %d %b %Y %H:%M:%S %Z")
            self.pr('LBD of new feed %s' % (time.mktime(self.new.lbd)))
        except:
            pass
        else:
            try:
                if time.mktime(self.current.lbd) == time.mktime(self.new.lbd):
                    self.pr ('\tLBD unchanged %s==%s' % (time.mktime(self.current.lbd),time.mktime(self.new.lbd)))
                    self.new.status=304
                    return()
            except:
                self.pr ('\tNo LBD key in current: %s' % sys.exc_info()[0])
    #self.pr ('\tstatus: %d ' % self.new.status)
    self.pr ('\tEntries retrieved: %d' % len(self.new.entries))
    # number the entries for renaming the link in the updates subroutine
    ind=0
    for x in self.new.entries:
        x.index=ind
        ind+=1
Without error handling (and removing some extraneous code), this is one line (whited out so you can read the function as written first): return feedparser.parse(self.url,modified=mod)
Is it easier to understand if you see that one line, or see all the error handling?
Last edited: Nov 8, 2006

17. Nov 8, 2006 ### chroot Staff Emeritus
So... why doesn't the author just put that "one-liner" in his comments above the code? Then you know exactly what the "gist" of the block is -- that's what this is all about, right? -- and the error-handling code stays put, interwoven with the code it protects, as it should be. And, besides, the code you pasted does NOT just download a page -- it checks server status codes, timestamps, and does other things. I agree it's poorly written code, but even without error-handling, it won't be just one line. You're being dishonest.
- Warren
Last edited: Nov 8, 2006

18. Nov 8, 2006 ### Sane
In my opinion... the error handling is very important. It'll tell whoever's reading your code what crap not to throw its way. If you really feel like achieving a sense of readability, I'll tell you what: at the top of the function, include a """ multiline comment """ that contains the code without error handling. Otherwise, your intentions in compromising length of code for a secure algorithm are unjustified.
Edit: Wow. A pleasant surprise: chroot and I both suggested the exact same thing. Comment yer "readable" code if it makes such a difference.

19. Nov 8, 2006 ### chroot Staff Emeritus
Bingo. That's two of us, who have independently said the same thing.
- Warren

20. Nov 8, 2006 ### 0rthodontist
No, it really is. There's no need to check the last build date, because if there wasn't a 304, it will always be different. Also the index code doesn't actually do anything, since the index already contains those values (not that you need them anyway, since you can just iterate through the feeds). All the rest is just error handling -- necessary, but obfuscating. Well, there's also the "mod" variable, but still, the only reason you had to make that is because you didn't know if the current feed had a modified field, so its purpose is still entirely error handling.
Last edited: Nov 8, 2006

21.
Nov 8, 2006 ### turbo
I wrote custom specialty software for small businesses back in the day when a 286 was a potent machine and 386's were servers. I wrote code in modules that had specific functions, and commented them heavily, so that when I went to my library of modules, I didn't have to parse the code every time to remember what functions they performed. That way, when a customer had a complex problem, I could just scan my modules with a text editor and decide which ones to use, and which ones might be usable with minor modification. Error-handling was built into the modules and was commented. Showing my age, I wrote these applications in Ashton-Tate dBase, and as soon as FoxBase came out with a compiler, I compiled them to run under FoxRun. Pretty primitive, but it worked great. In this system, error-handling had to accompany the relevant code.

22. Nov 8, 2006 ### Sane
It's necessary; I'm happy you acknowledged that. Your nested indentations of exceptions do lead to obfuscation, however it doesn't have to be that way. The obfuscation is the result of your programming. You don't state which errors to except, and you can turn the excepted blocks into methods in your class. That way you can label each resulting event of an error, respectively, with a function name. Additionally, you've limited your obfuscation to simple nesting. It's quite simple really, but only worth it if you do really care that much (which, by the size of this thread, illustrates just that).

23. Nov 8, 2006 ### chroot Staff Emeritus
You must check the date, because some servers might be broken or non-compliant. Such servers might not return a 304, even when the dates are the same. The 304 status code is not even required by the HTTP/1.1 RFC -- it is simply suggested. Do you really trust a server you didn't write, running on a computer you don't control? Do you really not feel that this is a justified thing to check?
It sounds to me like your gripe is really that people don't write (or comment) their code cleanly enough to make it easy for a novice reader to immediately understand what is being done in a block of code. The solution to this problem is better commenting and better conventions. The solution is absolutely not to split the error handling off to some other file where it can be "hidden." Error handling is an integral and necessary part of writing a decent program -- it is not just some dirty, menial task that has to be grudgingly done after having written the "pristine" one-line version.
- Warren

24. Nov 8, 2006 ### 0rthodontist
Why should a programmer have to remember to put shorter, exception-free code in a comment? It would feel like busywork, and nobody's going to do that. And nobody does. If it's useful -- and if you think it should be in a comment, then we agree that it is useful -- it has to be a language feature.
It is not absolutely necessary to check the dates, because the worst that will happen is that the server (the one that's running this code) will spend an extra couple of seconds writing stuff to disk that it already has. It is a slight efficiency optimization, not an issue of correctness (and remember that this is written in Python, which is very slow anyway). Now that I look at it again, the whole thing with making the "lbd" is actually unnecessary, since you could have just compared self.current.lastbuilddate with self.new.lastbuilddate.
I certainly object to calling me a "novice" programmer. I'm not a novice programmer.
Also, I certainly never suggested that the error handling should be in "some other file." It should be separate, but near.

25. Nov 8, 2006 ### 0rthodontist
The examples of obfuscation are not my code.
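As a postscript to the narrow-except point argued in posts #15-#23: here is a minimal Python 3 sketch (mine, not code from the thread) of the wget helper with each except naming the specific failure it handles. The use of subprocess, the dest_dir parameter, and the RuntimeError messages are all illustrative assumptions, not the thread's actual code:
Code (Text):
import subprocess

def wget_link(url, dest_dir, extra_flags=()):
    """Fetch a page with wget and return the final URL wget reports.

    Each except clause names the one failure it expects, so the
    "mainstream" path stays readable while the failure handling
    remains local to the lines it protects.
    """
    cmd = ["/usr/bin/wget", "-E", "-nv", "-P", dest_dir, *extra_flags, url]
    try:
        # wget -nv writes its one-line result to stderr
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    except FileNotFoundError as err:
        raise RuntimeError("wget not found at /usr/bin/wget") from err
    except subprocess.CalledProcessError as err:
        raise RuntimeError("wget failed for %s:\n%s" % (url, err.stderr)) from err
    try:
        # the -nv log line usually contains '... URL:<final url> ...'
        return result.stderr.split(" URL:")[1].split()[0]
    except IndexError as err:
        raise RuntimeError("no 'URL:' field in wget output: %r" % result.stderr) from err
Whether this reads better than the bare-except original is precisely the trade-off the thread is arguing about: the error handling still roughly triples the length, but a reader can now see at a glance which failures are anticipated and where.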
https://www.physicsforums.com/threads/confusion-regarding-the-partial_-mu-operator.920946/
# A Confusion regarding the $\partial_{\mu}$ operator

I'm trying to derive the Klein-Gordon equation from the Lagrangian:
$$\mathcal{L} = \frac{1}{2}(\partial_{\mu} \phi)^2 - \frac{1}{2}m^2 \phi^2$$
$$\partial_{\mu}\Bigg(\frac{\partial \mathcal{L}}{\partial (\partial_{\mu} \phi)}\Bigg) = \partial_{t}\Bigg(\frac{\partial \mathcal{L}}{\partial (\partial_{t} \phi)}\Bigg) + \partial_{x}\Bigg(\frac{\partial \mathcal{L}}{\partial (\partial_{x} \phi)}\Bigg)$$
But if
$$\frac{\partial \mathcal{L}}{\partial (\partial_{t} \phi)} = \partial_{t} \phi = \partial^{t} \phi$$
and
$$\frac{\partial \mathcal{L}}{\partial (\partial_{x} \phi)} = -\partial_{x} \phi = \partial^{x} \phi$$
then
$$\partial_{\mu}\Bigg(\frac{\partial \mathcal{L}}{\partial (\partial_{\mu} \phi)}\Bigg) = \partial_{t} \partial^{t} \phi + \partial_{x} \partial^{x} \phi$$
We seem to be missing a minus sign here. Where's the mistake? I'm supposed to get $$\partial_{\mu}\partial^{\mu}\phi$$ for this term.

#### George Jones Staff Emeritus Gold Member
What does "$\frac{1}{2}(\partial_{\mu} \phi)^2$" mean? Also, why are you splitting things into time and space components?

$$\frac{1}{2}(\partial_{\mu} \phi)^2 = \frac{1}{2}(\partial_{\mu} \phi)(\partial^{\mu} \phi)$$

#### George Jones Staff Emeritus Gold Member
and
$$\left(\partial_{\mu} \phi\right)\left(\partial^{\mu} \phi\right) = g^{\mu \alpha} \left(\partial_{\mu} \phi\right)\left(\partial_{\alpha} \phi\right)$$
Now, find
$$\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)}.$$
Note that I introduced a new index $\beta$, because an index different than the dummy summation indices $\mu$ and $\alpha$ is needed.

Thank you. Is this correct?
$$\mathcal{L} =\frac{1}{2}g^{\mu \alpha} (\partial_{\mu} \phi)(\partial_{\alpha} \phi) - \frac{1}{2}m^{2} \phi^{2}$$
So
$$\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)} = \frac{1}{2}g^{\mu\alpha}(\partial_{\mu} \phi \,\delta^{\alpha}_{\beta} + \partial_{\alpha} \phi \,\delta^{\beta}_{\mu})$$
Hence,
$$\partial_{\beta}\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)} = \frac{1}{2}\partial_{\beta}(g^{\mu \beta} \partial_{\mu} \phi + g^{\beta \alpha} \partial_{\alpha} \phi)$$
$$\partial_{\beta}\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)} = \frac{1}{2}(\partial^{\mu}\partial_{\mu}\phi + \partial^{\alpha}\partial_{\alpha}\phi)$$
Since the first and second terms are the same, we can get rid of the half. And thus
$$\partial_{\beta}\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)} = \partial^{\mu}\partial_{\mu}\phi$$

#### George Jones Staff Emeritus Gold Member
I haven't had a chance to look really closely (I will though).
Quote:
Thank you. Is this correct?
$$\mathcal{L} =\frac{1}{2}g^{\mu \alpha} (\partial_{\mu} \phi)(\partial_{\alpha} \phi) - \frac{1}{2}m^{2} \phi^{2}$$
So
$$\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)} = \frac{1}{2}g^{\mu\alpha}(\partial_{\mu} \phi \,\delta^{\alpha}_{\beta} + \partial_{\alpha} \phi \,\delta^{\beta}_{\mu})$$
There is a small mistake in the placement of indices in the first term on the right side of the second equation.
From the original post:
Quote:
Then
$$\partial_{\mu}\Bigg(\frac{\partial \mathcal{L}}{\partial (\partial_{\mu} \phi)}\Bigg) = \partial_{t} \partial^{t} \phi + \partial_{x} \partial^{x} \phi$$
We seem to be missing a minus sign here. Where's the mistake? I'm supposed to get $$\partial_{\mu}\partial^{\mu}\phi$$ for this term.
Now that I look more closely, I don't see a missing minus sign.

The $\mu$ is contravariant and the $\beta$ covariant, right? Shouldn't $$\partial_{x}\partial^{x}$$ and $$\partial_{t}\partial^{t}$$ have opposite signs, since we are working with four-vectors?

#### George Jones Staff Emeritus Gold Member
In the first term on the right side, the $\delta^\alpha_\beta$ should be $\delta^\beta_\alpha$. Since the summation index $\alpha$ is upstairs on the $g^{\mu \alpha}$ and downstairs on $\partial_\alpha \phi$, it must be downstairs in the $\delta$; roughly, since the index $\beta$ is downstairs in the "denominator" of
$$\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)},$$
$\beta$ should be upstairs in the $\delta$.
Quote:
Shouldn't $$\partial_{x}\partial^{x}$$ and $$\partial_{t}\partial^{t}$$ have opposite signs, since we are working with four-vectors?
No. Remember,
$$A_\mu A^\mu = A_0 A^0 + A_1 A^1 + A_2 A^2 + A_3 A^3 = A^0 A^0 - A^1 A^1 - A^2 A^2 - A^3 A^3.$$
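Putting the corrected index placement together, the full derivation can be summarized as follows (my consolidation of the steps above, not a post from the thread):
$$\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)} = \frac{1}{2}g^{\mu \alpha}\left(\delta^{\beta}_{\mu}\, \partial_{\alpha} \phi + \partial_{\mu} \phi \,\delta^{\beta}_{\alpha}\right) = \frac{1}{2}\left(g^{\beta \alpha}\partial_{\alpha} \phi + g^{\mu \beta}\partial_{\mu} \phi\right) = \partial^{\beta} \phi, \qquad \frac{\partial \mathcal{L}}{\partial \phi} = -m^{2} \phi,$$
so the Euler-Lagrange equation $\partial_{\beta}\Big(\frac{\partial \mathcal{L}}{\partial (\partial_{\beta} \phi)}\Big) - \frac{\partial \mathcal{L}}{\partial \phi} = 0$ yields the Klein-Gordon equation
$$\partial_{\mu}\partial^{\mu}\phi + m^{2}\phi = 0.$$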
https://zbmath.org/?q=an:1344.46037
## The cone and cylinder algebras. (English) Zbl 1344.46037

Summary: In this exposition-type note we present detailed proofs of certain assertions concerning several algebraic properties of the cone and cylinder algebras. These include a determination of the maximal ideals, the solution of the Bézout equation, and a computation of the stable ranks by elementary methods.

### MSC:

46J10 Banach algebras of continuous functions, function algebras
46J15 Banach algebras of differentiable or analytic functions, $$H^p$$-spaces
46J20 Ideals, maximal ideals, boundaries
30H50 Algebras of analytic functions of one complex variable
https://socratic.org/questions/how-do-you-find-the-zeros-real-and-imaginary-of-y-8x-2-3x-2-using-the-quadratic-
# How do you find the zeros, real and imaginary, of y = 8x^2+3x-2 using the quadratic formula?

Dec 28, 2016

We have real zeros $\frac{- 3 + \sqrt{73}}{16}$ and $\frac{- 3 - \sqrt{73}}{16}$.

#### Explanation:

The quadratic formula gives the zeros of a quadratic function $y = a {x}^{2} + b x + c$ as $\frac{- b \pm \sqrt{{b}^{2} - 4 a c}}{2 a}$.

Hence, the zeros of the function $y = 8 {x}^{2} + 3 x - 2$ are

$\frac{- 3 \pm \sqrt{{3}^{2} - 4 \times 8 \times \left(- 2\right)}}{2 \times 8}$ or $\frac{- 3 \pm \sqrt{9 + 64}}{16}$ or $\frac{- 3 \pm \sqrt{73}}{16}$,

i.e. $\frac{- 3 + \sqrt{73}}{16}$ and $\frac{- 3 - \sqrt{73}}{16}$. Since the discriminant $73$ is positive, both zeros are real.
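As a quick numerical sanity check of the answer above (my snippet, not part of the original page):
Code (Text):
import math

# coefficients of y = 8x^2 + 3x - 2
a, b, c = 8, 3, -2

disc = b**2 - 4*a*c                   # 3^2 - 4*8*(-2) = 73 > 0: two real zeros
x1 = (-b + math.sqrt(disc)) / (2*a)   # (-3 + sqrt(73))/16, about 0.3465
x2 = (-b - math.sqrt(disc)) / (2*a)   # (-3 - sqrt(73))/16, about -0.7215

for x in (x1, x2):
    print(x, 8*x**2 + 3*x - 2)        # the second value is 0 up to rounding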
http://icesp.org.br/d6qexm/ba7ba8-derived-set-of-integers
On the other hand, the negative numbers are like the naturals but with a "minus" before: $$-1, -2, -3, -4,\ldots$$ However, with the inclusion of the negative natural numbers (and importantly, 0), ℤ, unlike the natural numbers, is also closed under subtraction.[11]. Nevertheless, the "plus" of the positive numbers does not need to be be written. mdjahirabbas17 mdjahirabbas17 2 hours ago Math Secondary School +5 pts. A complex number z is said to be algebraic if there are integers a 0;:::;a n not all zero, such that a 0z n + a 1z n 1 + + a n 1z + a n = 0: Prove that the set of all algebraic numbers is countable. Natural numbersare those used to count the elements of a set and to perform elementary calculation operations. In mathematics, a (real) interval is a set of real numbers that contains all real numbers lying between any two numbers of the set. The integers are made up of positive numbers, negative numbers and zero. The integers form the smallest group and the smallest ring containing the natural numbers. ,what is the derived set of the set {2} in the discrete topology on the set of integers Z ? Math 140a - HW 2 Solutions Problem 1 (WR Ch 1 #2). $$80$$ is a natural number and therefore it is integer. Integral data types may be of different sizes and may or may not be allowed to contain negative values. if x and y are any two integers, x + y and x − y will also be an integer. The following is a number line showing integers from -7 to 7. The set S of Pisot (or Pisot-Vijayaraghavan) numbers is the set of real algebraic integers 9 > 1 all of whose remaining conjugates lie strictly within the unit circle. Integers are: natural numbers, zero and negative numbers: 1. The positive numbers are like the naturals, but with a "plus" before: $$+1, +2, +3, +4, \ldots$$. Commutative 3. The technique for the construction of integers presented above in this section corresponds to the particular case where there is a single basic operation pair $$6.2$$ is not natural, therefore it is not an integer. [15] Therefore, in modern set-theoretic mathematics, a more abstract construction[16] allowing one to define arithmetical operations without any case distinction is often used instead. {\displaystyle \mathbb {Z} } Every equivalence class has a unique member that is of the form (n,0) or (0,n) (or both at once). mdjahirabbas17 mdjahirabbas17 2 hours ago Math Secondary School +5 pts. The smallest field containing the integers as a subring is the field of rational numbers. Let P(a, b, c; z) = za + zb + za+c - zb+c for integers a, b, c. Then \P(a, b, c; z)\2 = \za + zb\2 + (zc + z-c)(z"-b - zb-") + \za - zb\2 < 8, for \z\ = 1, since we can combine the first and last terms and use the parallelogram law. Another familiar fact capable of topological formulation is THEOREM 7. ℤ is a subset of the set of all rational numbers ℚ, which in turn is a subset of the real numbers ℝ. The natural number n is identified with the class [(n,0)] (i.e., the natural numbers are embedded into the integers by map sending n to [(n,0)]), and the class [(0,n)] is denoted −n (this covers all remaining classes, and gives the class [(0,0)] a second time since −0 = 0. In the first set where the range is -9 to 9, the difference between the two numbers is always 1. {\displaystyle (x,y)} Fractions, decimals, and percents are out of this basket. In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Ask your question. 
Whole numbers greater than zero are called positive integers. The integer zero is neither positive nor negative, and has no sign. It can also be implemented in many different ways. Ask your question. Again, in the language of abstract algebra, the above says that ℤ is a Euclidean domain. For example, the following numbers are integers: $$3, -76, 0, 15, -22.$$. However, integer data types can only represent a subset of all integers, since practical computers are of finite capacity. Set Theory \A set is a Many that allows itself to be thought of as a One." De nition 1.1.3. [From Latin, whole, complete; see tag- in Indo-European roots .] -1, -2, -3 and so on. This implies that ℤ is a principal ideal domain, and any positive integer can be written as the products of primes in an essentially unique way. The Cartesian product AxB of the sets A and B is the set of all ordered pairs ( a,b) where a A and b B. Then he pushes the button for the floor $$-1$$, the floor beneath the ground floor. However, this style of definition leads to many different cases (each arithmetic operation needs to be defined on each combination of types of integer) and makes it tedious to prove that integers obey the various laws of arithmetic. Examples of Integers – 1, 6, 15. Its basic concepts are those of divisibility, prime numbers, and integer solutions to equati… ). :... −3 < −2 < −1 < 0 < 1 < 2 < 3 < ... In the previous drawing, we can see, for example, that: $$-2$$ is smaller than $$4$$, that $$-5$$ is smaller than $$-1$$, and that $$0$$ is smaller than $$3$$. To learn how to order integers among them, it is first necessary to know what the absolute value of a number is, a concept that will help us to clear up many doubts.. Absolute Value. , and returns an integer (equal to It is within the two sets because they belong to natural numbers, but this set is contained in integers, so, in other words, natural numbers are a subset of integers. Keith Pledger and Dave Wilkins, "Edexcel AS and A Level Modular Mathematics: Core Mathematics 1" Pearson 2008. The set of whole numbers is a subset of the set of integers and both of them are subsets of the set of rational numbers. Unlike real analysis and calculus which deals with the dense set of real numbers, number theory examines mathematics in discrete sets, such as N or Z. Find an answer to your question What is the derived set of the set {2} in the discrete topology on the set of integers ? Log in. Whole numbers less than zero are called negative integers. To prove these are the only elements of the derived set we need to show that the shape of the derived set can only be $\frac{1}{n}$ or $0$. $$5$$ is a natural number, therefore it is also an integer. Numbers, integers, permutations, combinations, functions, points, lines, and segments are just a few examples of many mathematical objects. Lesson Summary. {\displaystyle x-y} When a counting number is subtracted from itself, the result is zero. The set of integers is often denoted by a boldface letter 'Z' ("Z") or blackboard bold (Unicode U+2124 ℤ) standing for the German word Zahlen ([ˈtsaːlən], "numbers"). The zero is drawn. Find an answer to your question What is the derived set of the set {2} in the discrete topology on the set of integers ? The positive numbers are like the naturals, but with a "plus" before: + 1, + 2, + 3, + 4, …. The ordering of ℤ is given by: the derived set of the primes is the integers.") Nevertheless, the "plus" of the positive numbers does not need to be be written. 
Thus, if / - 1 > 2V2 m and «,.n3m are arbitrary x The number zero is special, because it is the only one that has neither a plus nor a minus, showing that it is neither positive nor negative. However, the arrows at both ends show that the numbers do not stop after 7 or -7 but the pattern continues. Although ordinary division is not defined on ℤ, the division "with remainder" is defined on them. One has three main ways for specifying a set. x The in nite sets we use are derived from the natural and real numbers, about which we have a direct intuitive understanding. The notation Z \mathbb{Z} Z for the set of integers comes from the German word Zahlen, which means "numbers". Here we will examine the key concepts of number theory. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + … + 1 or (−1) + (−1) + … + (−1). You may have noticed that all numbers on the right of zero are positive. Log in. Real Numbers – A set consisting of rational and irrational numbers. Among the various properties of integers, closure property under addition and subtraction states that the sum or difference of any two integers will always be an integer i.e. 1. Negative numbers are less than zero and represent losses, decreases, among othe… So they are 1, 2, 3, 4, 5, ... (and so on). A set that has only one element is called a singleton set. Join now. Because you can't \"count\" zero. The cardinality of the set of integers is equal to ℵ0 (aleph-null). Asked By Wiki User. So let’s take 2 positive integers from the set: 2, 9. The set of integers consists of zero (0), the positive natural numbers (1, 2, 3,...), also called whole numbers or counting numbers, and their additive inverses (the negative integers, i.e., −1, −2, −3,...). Some authors use ℤ* for non-zero integers, while others use it for non-negative integers, or for {–1, 1}. Whole numbers less than zero are called negative integers. In elementary school teaching, integers are often intuitively defined as the (positive) natural numbers, zero, and the negations of the natural numbers. Proof. There are three Properties of Integers: 1. [13] This is the fundamental theorem of arithmetic. Now open sets in R are open intervals and union of open intervals. Rational numbers 23 2.3. The set of rational numbers is denoted as Q, so: Q = { p q | p, q ∈ Z } The result of a rational number can be an integer ( − 8 4 = − 2) or a decimal ( 6 5 = 1, 2) number, positive or negative. 2, and √ 2 are not. It is called Euclidean division, and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < | b |, where | b | denotes the absolute value of b. This operation is not free since the integer 0 can be written pair(0,0), or pair(1,1), or pair(2,2), etc. To learn integer addition with like and unlike signs. The negative integers are those less than zero (–1, –2, –3, and so on); the positive integers are those greater than zero (1, 2, 3, … The intuition is that (a,b) stands for the result of subtracting b from a. Like the natural numbers, ℤ is countably infinite. Ask your question. Join now. . Z The “set of all integers” is often shown like this: Integers = {… -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, …} The dots at each end of the set mean that you can keep counting in either direction. If you are unsure about sets, you may wish to revisit Set theory. Log in. 
All the rules from the above property table (except for the last), when taken together, say that ℤ together with addition and multiplication is a commutative ring with unity. In fact, ℤ is an initial object in the category of rings, and this universal property characterizes it. Going the other way, starting from an algebraic number field (a finite extension of the rational numbers), its ring of integers can be extracted, and that ring always includes ℤ as a subring; ℤ is the prototype of all such rings of algebraic integers.

The integers are the natural numbers together with their negatives and zero. The word comes from the Latin integer, meaning "whole" or "complete". A number such as 11.2 is not a natural number, therefore it is not an integer. On a number line drawn from -7 to 7, the arrows at both ends show that the numbers do not stop after 7 or -7 but that the pattern continues without bound. Integers greater than zero are positive, integers less than zero are negative, and zero is neither positive nor negative. A rational number is a quotient of two integers with a nonzero denominator, so every integer is in particular a rational number.

ℤ is closed under addition, subtraction, and multiplication, but it is not closed under division, since the quotient of two integers (e.g., 1 divided by 2) need not be an integer. Although ordinary division is not defined on ℤ, "division with remainder" (Euclidean division) is, and the algorithm for computing greatest common divisors works by a sequence of Euclidean divisions. Together with the fact that every nonempty set of positive integers is well-ordered, this leads to the fundamental theorem of arithmetic on unique factorization into primes, one of the key results of elementary number theory. Under addition alone, ℤ is a cyclic group; in fact, ℤ under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to ℤ.

In computer science, an integer is a datum of integral data type, typically represented in a computer as a group of binary digits, and is often a primitive data type. Fixed-length integer approximation data types (or subsets of ℤ) are denoted int or Integer in several programming languages (such as Algol 68, C, Java, and Delphi); they can only represent a finite range of the mathematical integers, with fixed upper and lower bounds, while variable-length representations lift this restriction. A set of integers can also be stored using hashing (a hash set), which provides optimal expected-time complexity for membership tests and updates.

A standard exercise asks for the derived set of ℤ, that is, the set of limit points of the integers regarded as a subset of the real line, and whether ℤ is closed. Every integer n is an isolated point, since the open interval (n - 1, n + 1) contains no other integer, so ℤ has no limit points at all: its derived set is empty, and ℤ is therefore closed. Indeed, as a subspace of ℝ the integers carry the discrete topology. (In this topological direction, it appears unlikely that a complete topological proof of Dirichlet's theorem on primes in arithmetic progressions can be given along such lines without the introduction of powerful new ideas and methods.)
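Since the text notes that greatest common divisors are computed by a sequence of Euclidean divisions, here is a minimal Python sketch of that algorithm; the function name and the sample values are ours, purely illustrative:

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via repeated Euclidean division.

    Each step replaces (a, b) with (b, a mod b); the remainder strictly
    decreases, so the loop terminates, and gcd(a, b) == gcd(b, a % b).
    """
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
print(gcd(-8, 12))    # 4 (integers may be negative; the gcd is taken >= 0)
```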
2021-10-27 05:16:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7338092923164368, "perplexity": 452.3383894544327}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588053.38/warc/CC-MAIN-20211027022823-20211027052823-00682.warc.gz"}
https://physics.aps.org/articles/v13/181
News Feature # Designing a Green Accelerator Physics 13, 181 Physicists are developing an array of technologies to limit the enormous energy usage of high-energy particle accelerators. Energy sustainability is a big deal for Erk Jensen, but it wasn’t always. Like other physicists who design machinery for particle accelerators, he spent much of his career chasing higher energies and brighter beams. But a decade ago, an alarming estimate about the power needs of a proposed high-energy accelerator for Europe changed his perspective. “If you have the power consumption of a small town for just this accelerator—which does nothing but produce some numbers for physicists—you have to do something about it,” says Jensen, who leads a large group of scientists and engineers at CERN, the European particle physics lab near Geneva. At that time, Jensen says, he and his collaborators “were not even considering the questions of energy efficiency or environmental impact.” But he says that awareness about these issues has increased in recent years, as physicists have realized that their facilities’ energy usage can affect public support. Jensen is now one of a growing number of scientists in leadership positions who are speaking out about physicists’ responsibility to design more energy-efficient accelerators. Research and development will help. Thanks to materials advances and clever new concepts, certain mainstays of accelerator machinery, such as radio-frequency cavities, could see leaps in efficiency. More radical changes could come from recycling the energy of accelerated particles or from repurposing waste heat to warm people’s homes. And just as accelerator technology has trickled outside of basic research into medical treatments, sanitation, and the semiconductor industry, physicists hope that their environmentally friendly strategies might also spin off to other industries. ##### The Energy-Hungry Machine In 1931, Ernest Lawrence and Stanley Livingston constructed the first cyclotron, a circular particle accelerator and ancestor to many current designs. That first device was a mere 5 inches across and accelerated protons to 80,000 eV. Today there are roughly 30,000 accelerators around the world, mostly for medical use, with the largest—the Large Hadron Collider (LHC) at CERN—measuring 8.5 km in diameter and accelerating protons to 6.5 trillion eV. The most powerful accelerators require huge amounts of energy. Running the LHC, for example, requires about 200 megawatts of power. Jensen says that early estimates for a post-LHC collider predicted that the machine would need up to 600 megawatts and thus could use as much power as all of Geneva. “Particle physics always wants to reach higher energy and luminosity,” says Mike Seidel, who heads the Large Research Facilities division at the Paul Scherrer Institute in Switzerland and who is an active organizer of a biennial conference on sustainable science. “Unfortunately, that’s a direct function, in many cases, of the power that you draw from the grid.” Plans for future accelerators pose a contradiction. Most scientists believe in sustainable energy usage, but the public might rightly ask, “How come those scientists now get a toy that uses so much energy?” says Thomas Roser, who chairs the Collider-Accelerator Department at Brookhaven National Laboratory (BNL) in New York. 
##### Rebooting a World War II Technology The main driver to increase accelerator efficiency has traditionally been cost, since a large electric bill quickly eats up an operations budget. Hefty power requirements may also overtax the existing grid. “Some of these really large machines that we’re dreaming about—they’re barely feasible with current technology,” says Matthias Liepe of Cornell University. “You’d have to build a nuclear plant to run them, and that’s certainly not within what’s possible or [what] can be funded.” So researchers have always tried to maximize the amount of energy from the grid that goes into a particle beam, rather than being lost as heat. In most high-energy accelerators, the particles are driven as they move through a sequence of several-meters-long metal enclosures, or cavities. Each cavity is connected to a device called a klystron that produces intense radio-frequency (rf) electromagnetic waves that build up in the cavity. These oscillations are carefully timed so that when a group (or “bunch”) of charged particles enters the cavity, it receives a forward push (see explanatory videos from CERN and Fermilab). An unexpected opportunity for saving energy resulted from recent work with klystrons, which were invented in the 1930s and were first used to generate radio waves during World War II. These rf devices can consume about half of an accelerator’s energy budget, explains Igor Syratchev, an accelerator physicist at CERN. But until recently, researchers assumed that the 70% efficiency of commercial klystrons was as high as it could get (efficiency here is the ratio of the output rf power to the input dc power from the grid). Starting in 2010, Syratchev and Chiara Marrelli, a postdoc in Jensen’s group who is now at the European Spallation Source (ESS) in Sweden, set out to study and improve klystron efficiency. In a chance encounter with a Russian team’s work at a 2013 conference, Syratchev learned of a new, high-efficiency technology for handling electrons that could be applied to a klystron’s internal electron beam. Over the next five years, Syratchev, Marrelli, and their colleagues used computer simulation tools to design various klystrons that harnessed the new technology and that could have greater than 80% efficiency [1]. Syratchev estimates that increasing the klystron efficiency from 70 to 85% would save CERN’s proposed Compact Linear Collider (CLIC) 2.5 terawatt-hours over 10 years—enough to power about 50,000 Swiss households over the same time period. He expects testing of the first prototypes within a few years. Frank Zimmermann, an accelerator physicist at CERN, was surprised by the klystron improvements. “This is amazing: You have a technology; you thought it’s at the end, and after 80 years, suddenly some innovation.” ##### New Materials for Accelerator Cavities Cooling the cavities, which are heated by the rf fields, is another major contributor to an accelerator’s energy bills. Superconducting rf (SRF) cavities, for example, are commonly used because they dissipate little energy from the rf field, so that the field is minimally distorted. But SRF cavities are made of superconducting niobium, which must be chilled to 2 K, and heat extraction at this temperature is very inefficient and requires expensive equipment. One way to reduce these cooling costs is to improve niobium’s properties in order to cut the already tiny field energy losses in an SRF cavity. 
About eight years ago, researchers at Jefferson Laboratory (J-Lab) in Virginia and Fermilab in Illinois discovered that they could double a cavity’s “quality factor”—a measure of the degree to which losses have been suppressed—by embedding small amounts of titanium or nitrogen in the niobium. Pashupati Dhakal of J-Lab estimates that the improvement could cut cavity-cooling electricity bills by up to 4 times. Cavities produced with the nitrogen-enhanced material will get their first use at the upgraded version of the Linac Coherent Light Source (LCLS-II) at the SLAC National Accelerator Laboratory in California, now under construction. Another option for reducing cooling costs is to replace niobium with a superconducting material that can be used at a higher temperature, where heat extraction would be more efficient. Niobium became the standard decades ago because it has some useful properties. But in 2014, Liepe and his colleagues demonstrated a cavity made from niobium tin ( ${\text{Nb}}_{3}\text{Sn}$) that would require cooling to only about 4 K. Researchers working with ${\text{Nb}}_{3}\text{Sn}$ are “getting very close to the first applications for accelerators,” Liepe says. ##### After Reduce: Re-Use and Recycle Before the community was so concerned with cutting energy for social reasons, improvements in rf accelerator efficiency had already been pushed pretty far by the “money argument,” says Mats Lindroos, who heads the accelerator group at ESS, under construction outside of Lund, Sweden. To take a bigger stab at reducing total energy usage, ESS is one of several facilities that are trying to re-use the large amount of energy that would normally be lost. ESS is a neutron source that was initially pitched as a carbon neutral research center—an early concept even had a dedicated windfarm. But even with a less-ambitious plan, the facility is being built on Swedish farmland, which allowed planners to completely rethink the infrastructure design. “It started from scratch,” says Lindroos. As Lindroos explains, only a quarter of the energy input goes into the beam itself; the rest becomes heat that is absorbed by cooling water. Hot water isn’t good for much, other than heating things, but that’s just fine for a cold climate like Sweden’s. In 2018, ESS signed an agreement with the German utility E.ON, in which the company promised to provide cooling water to ESS and to work with the local energy company Kraftringen to recoup the hot water for heating homes in Lund. The plan began its first phase earlier this year. And in a testament to Nordic bicycle culture, Kraftringen intends to use some warm water to de-ice bicycle paths. “It’s a societal benefit,” says Lindroos; winter biking accidents lead to “a big influx of people to the hospital with broken legs.” Heat re-use isn’t an option for facilities that don’t have Sweden’s cold climate or the right municipal infrastructure. Another re-use concept, called energy recovery, extracts energy from the unused part of a particle beam. In circular colliders, for instance, two beams circulate in opposite directions around a loop to collide with each other, but very few particles interact in any given circuit. So after the collision point, the timing of the rf oscillations can be shifted with respect to the particle bunches so that when the beam passes through a cavity, the particles give their energy back to the field instead of taking it away. 
A number of facilities around the world already use energy recovery, but the idea of using the technique explicitly to reduce energy consumption is a “newer development,” says Roser. Earlier this year, he and BNL colleagues Maria Chamizo-Llatas and Vladimir Litvinenko proposed a concept for a circular electron-positron collider that would reduce power consumption dramatically by recovering not only the particle energy but also the particles themselves [2]. With “one third of the power consumption, it’s possible to reach higher energy and productivity of a facility,” says Litvinenko. ##### The Right Time to Innovate Seidel, Jensen, Roser, and other scientists have worked to make sustainability part of the planning process for new accelerator facilities by organizing and speaking at conferences for the past decade. For example, last year in Granada, Spain, where researchers and policy makers gathered to work on the European Strategy for Particle Physics (ESPP), Jensen told a crowded room that particle physics has a “duty to society” to design efficient machines [3]. The US version of the ESPP is Snowmass 2021, a year-long series of planning workshops, and earlier this year, Roser convinced the accelerator groups involved to make energy efficiency a goal of these meetings. All of the physicists interviewed for this article emphasized that no solution can come at the expense of performance, or else particle physicists won’t get behind it. But with big accelerator projects on the horizon in Europe, China, Japan, and the US, researchers say that the timing is right to innovate. The latest ESPP, reported in June (see Research News: Europeans Decide on Particle Physics Strategy), states that any future project must provide a detailed plan for “saving and re-use of energy.” On September 9, CERN released its first public environmental report, promising to limit a rise in energy consumption to 5% through 2024, even as the LHC ramps up in performance. And in the US, researchers are gearing up for Snowmass 2021 (see Opinion: Exploring Futures for Particle Physics) and the design phase for the Electron Ion Collider, the successor to the Relativistic Heavy Ion Collider at BNL. “The scientific community is brainstorming and discussing how to address the next challenges in particle physics,” says Chamizo-Llatas, “so it is a good time for out-of-the-box ideas.” –Jessica Thomas Jessica Thomas is the Editor of Physics. ## References 1. J. Cai and I. Syratchev, “KlyC: 1.5-D large-signal simulation code for klystrons,” IEEE Trans. Plasma Sci. 47, 1734 (2019). 2. V. N. Litvinenko et al., “High-energy high-luminosity ${e}^{+}{e}^{-}$ collider using energy-recovery linacs,” Phys. Lett. B 804, 135394 (2020).
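As a rough consistency check on the klystron savings quoted earlier (our own back-of-the-envelope arithmetic, not a calculation from the article), 2.5 terawatt-hours spread over 10 years and 50,000 households implies about 5,000 kWh per household per year, a plausible figure for Switzerland:

```python
# Sanity-check the klystron savings figures quoted in the article.
savings_twh = 2.5                 # total energy saved over the period
years = 10
households = 50_000

savings_kwh = savings_twh * 1e9   # 1 TWh = 1e9 kWh
per_household_per_year = savings_kwh / years / households
avg_power_mw = savings_twh * 1e6 / (years * 365 * 24)  # MWh per hour = MW

print(f"{per_household_per_year:.0f} kWh per household per year")  # 5000
print(f"{avg_power_mw:.1f} MW of average power saved")             # ~28.5
```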
2022-06-26 14:42:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4246143102645874, "perplexity": 2606.809396435649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103269583.13/warc/CC-MAIN-20220626131545-20220626161545-00525.warc.gz"}
http://googology.wikia.com/wiki/Trigintation
Trigintation refers to the 30th hyperoperation starting from addition. It is equal to $$a \uparrow^{28} b$$, using Knuth's up-arrow notation.[1] Trigintation can be written in array notation as $$\{a,b,28\}$$, in chained arrow notation as $$a \rightarrow b \rightarrow 28$$, and in Hyper-E notation as E(a)1#1#1#1...1#1#1#b (27 ones). Trigintational growth rate is equivalent to $$f_{29}(n)$$ in the fast-growing hierarchy. ### Sources 1. Hyper Operators by Aarex Tiaokhiao
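To make the 28 up-arrows concrete, here is a naive Python sketch of Knuth's up-arrow recursion; the function names are ours, and the example call only terminates because $$a \uparrow^{n} 2$$ collapses quickly. Trigintation is astronomically too large to evaluate for any larger arguments.

```python
def arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow a ^(n arrows) b; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1          # a up-arrow^n 0 = 1 for n >= 2
    return arrow(a, n - 1, arrow(a, n, b - 1))

def trigintation(a: int, b: int) -> int:
    """30th hyperoperation counting addition as the first: a up-arrow^28 b."""
    return arrow(a, 28, b)

print(trigintation(2, 2))  # 4, since a up-arrow^n 2 reduces to a up-arrow^(n-1) a
```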
2017-08-16 15:01:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9822943210601807, "perplexity": 3565.2555203161714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102307.32/warc/CC-MAIN-20170816144701-20170816164701-00283.warc.gz"}
http://mathoverflow.net/api/userquestions.html?userid=4800&page=1&pagesize=10&sort=votes
26 Questions

- Endomorphisms rings of elliptic curves and congruences of $j$ (464 views; last active mar 10 '12 by Tommaso Centeleghe)
- When do the sizes of conjugacy classes and squares of degrees of irreps give the same partition for a finite group? (639 views; may 25 '12, Glasby)
- Factorizing polynomials in $\mathbf{Z}[[x]]$ (778 views; feb 25 '11, François Brunault)
- Reference request for projective representations of finite groups over a non-problematic field (570 views; oct 30 '11, Boris Novikov)
- Finite dimensional automorphic representations of a definite quaternion with prime discriminant and Hecke action (543 views; oct 12 '11, Tommaso Centeleghe)
- A question on liftings of supersingular elliptic curves over the prime fields (576 views; mar 25 '10, AVS)
- On simple factors of modular jacobians: endomorphism ring and simplicity of mod p reduction (351 views; oct 22, François Brunault)
- How to construct Weil numbers in a given CM quartic field? (100 views; nov 2, Tommaso Centeleghe)
- Supersingular elliptic curves and their “functorial” structure over F_p^2 (557 views; mar 24 '10, Alexey Zaytsev)
2013-06-19 10:11:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5449978709220886, "perplexity": 3579.587705779498}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708664942/warc/CC-MAIN-20130516125104-00058-ip-10-60-113-184.ec2.internal.warc.gz"}
https://jgaa.info/getPaper?id=591
Special Issue on Selected Papers from the 15th International Conference and Workshops on Algorithms and Computation, WALCOM 2021 On Compatible Matchings Oswin Aichholzer, Alan Arroyo, Zuzana Masárová, Irene Parada, Daniel Perz, Alexander Pilz, Josef Tkadlec, and Birgit Vogtenhuber Vol. 26, no. 2, pp. 225-240, 2022. Regular paper. Abstract A matching is compatible to two or more labeled point sets of size $n$ with labels $\{1,\dots,n\}$ if its straight-line drawing on each of these point sets is crossing-free. We study the maximum number of edges in a matching compatible to two or more labeled point sets in general position in the plane. We show that for any two labeled sets of $n$ points in convex position there exists a compatible matching with $\lfloor \sqrt {2n+1} -1\rfloor$ edges. More generally, for any $\ell$ labeled point sets we construct compatible matchings of size $\Omega(n^{1/\ell})$. As a corresponding upper bound, we use probabilistic arguments to show that for any $\ell$ given sets of $n$ points there exists a labeling of each set such that the largest compatible matching has $O(n^{2/(\ell+1)})$ edges. Finally, we show that $\Theta(\log n)$ copies of any set of $n$ points are necessary and sufficient for the existence of labelings of these point sets such that any compatible matching consists only of a single edge. This work is licensed under the terms of the CC-BY license. Submitted: April 2021. Reviewed: August 2021. Revised: October 2021. Accepted: December 2021. Final: January 2022. Published: June 2022. Communicated by Seok-Hee Hong, Subhas C. Nandy, and Ryuhei Uehara article (PDF) BibTeX
2022-08-14 14:50:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7740161418914795, "perplexity": 675.4108694231738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00596.warc.gz"}
http://math.stackexchange.com/tags/integration/new
# Tag Info

0

Leave out $1/x^2$, to be reinserted later: \begin{align} \int xe^x\,dx &=\sum_{k=0}^{\infty}\frac{x^{k+2}}{(k+2) k!}\\ &=\sum_{k=0}^{\infty}\frac{(k+1)x^{k+2}}{(k+2)!}\\ &=\sum_{k=2}^{\infty}\frac{(k-1)x^k}{k!}\\ &=\sum_{k=2}^{\infty}\frac{kx^k}{k!}-\sum_{k=2}^{\infty}\frac{x^k}{k!}\\ ...

6

We have $$\int_0^t e^{\lambda x}dx = \dfrac{e^{\lambda t}-1}{\lambda}$$ Differentiating with respect to $\lambda$, we obtain $$\int_0^t xe^{\lambda x}dx = \dfrac{\lambda t e^{\lambda t} - e^{\lambda t}+1}{\lambda^2}$$ Set $\lambda = 1$ to obtain $$\int_0^t xe^{x}dx = t e^{t} - e^{t}+1$$ EDIT: To complete your approach, note that $\sum_{k=0}^{\infty} ...$

0

This is an alternative inspired by the technique used in this answer. I may or may not delete this answer depending on your answer to my comment in the question. Take the ansatz $\displaystyle \int \cos(x)e^{-x}\,\mathrm dx=(A\cos(x)+B\sin(x))e^{-x}$, with $A$, $B$ yet to be determined. Differentiating both sides yields, for all $x\in \mathbb R$, ...

0

Letting $u = e^{-x}$ and $dv = \cos x \, dx$, so that $du = -e^{-x} \, dx$ and $v = \sin x$, we obtain: \begin{align*} \int e^{-x}\cos x \, dx &= (e^{-x})(\sin x) - \int (\sin x)(-e^{-x} \, dx) \\ &= e^{-x}\sin x + \int e^{-x}\sin x \, dx \end{align*} Letting $u = e^{-x}$ and $dv = \sin x \, dx$, so that $du = -e^{-x} \, dx$ and $v = -\cos x$, we ...

0

Using $dv = e^{-x}\,dx$ and $u = \cos(x)$: $$\int e^{-x}\cos(x)\,dx = -e^{-x}\cos(x)-\int e^{-x}\sin(x)\,dx$$ $$\int e^{-x}\sin(x)\,dx = -e^{-x}\sin(x)-\int -e^{-x}\cos(x)\,dx$$ Be careful with the signs, and you will get an expression like this: $$2\int e^{-x}\cos(x)\,dx = -e^{-x}\cos(x)+e^{-x}\sin(x)$$ Divide by two, and that is your solution.

2

First, set $u = \cos x$ and $dv = e^{-x} dx$, so $du = - \sin x \,dx$ and $v = - e^{-x}$. We get $$\int \cos (x) e^{-x} \,dx = (\cos (x))(-e^{-x}) - \int (-\sin (x))(-e^{-x})\, dx.$$ Now, set $u = - \sin x$ and $dv = -e^{-x}\,dx$ to get $du = - \cos x\, dx$ and $v = e^{-x}$. This gives us $$\int \cos (x) e^{-x} \,dx = (\cos(x))(-e^{-x}) - (-\sin(x))(e^{-x}) ...$$

1

$$\int\cos x\, e^{-x}\,dx=\operatorname{Re}\int e^{ix}e^{-x}\,dx=\operatorname{Re}\int e^{x(i-1)}\,dx$$

0

1

You're almost there. Let $\displaystyle h(x) = g(x) \int_a^b f(t) \ dt$. As $g$ is continuous, $h$ is also continuous. Without loss of generality, let $x_1 < x_2$. By what you've shown above, $\int_a^b f(x)g(x) \ dx$ is a number between $h(x_1)$ and $h(x_2)$. As $h$ is continuous, by the IVP there must be a value $x_0 \in (x_1, x_2)$ such that ...

3

The ML inequality is (essentially) a real inequality. It holds for all (sufficiently regular, e.g. piecewise differentiable) curves and [again, sufficiently regular so that the integral is defined] functions [or vector fields] in any $\mathbb{R}^n$, $\mathbb{C}^n$ or, more generally, Riemannian manifold. Its proof uses a) the inequality for real intervals ...

1

Given only the information stated, the only reason we can assume that $800-p$ isn't negative is that we are taking its logarithm. This makes sense in terms of the model; 800 is functioning as a population ceiling, as the rate of increase slows down as $p$ approaches 800.

1

I think the answer is likely to be due to the fact that $800-p$ cannot be negative. I don't quite know what 800 represents, but it seems likely that $800-p$ can't be a negative number, otherwise it doesn't work out. Hope this makes sense. Would have added this as a comment but I can't presently.

0

We can use the following Taylor expansion of $\ln(1+x)$ to evaluate: \begin{eqnarray} \ln(1+x)&=&\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}x^n. \end{eqnarray} Then \begin{eqnarray} \int_0^{\pi/2}\frac{\ln(1+\sin\phi)}{\sin\phi}d\phi&=&\int_0^{\pi/2}\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}\sin^{n-1}\phi\, d\phi\\ ...

1

A possible contour is the semicircle in the upper half plane together with a small semicircle around the origin. Let $\int_{\Gamma}$ be the large semicircle and $\int_{\gamma}$ the small semicircle. Let $R$ be the radius of $\Gamma$ and $\epsilon$ the radius of $\gamma$. Now as $R\to\infty$, $\int_{\Gamma}\to 0$, and similarly as $\epsilon\to 0$, $\int_{\gamma}\to 0$ ...

Top 50 recent answers are included
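The closed forms in the integration answers above are easy to machine-check; here is a small SymPy sketch (ours, not part of the thread) verifying both results:

```python
import sympy as sp

x, t = sp.symbols('x t')

# First result: integral from 0 to t of x*e^x dx = t*e^t - e^t + 1
lhs = sp.integrate(x * sp.exp(x), (x, 0, t))
print(sp.simplify(lhs - (t * sp.exp(t) - sp.exp(t) + 1)))  # 0

# Second result: 2 * integral of e^{-x} cos x dx = e^{-x}(sin x - cos x) + C
F = sp.integrate(sp.exp(-x) * sp.cos(x), x)
print(sp.simplify(2 * F - (sp.exp(-x) * sp.sin(x) - sp.exp(-x) * sp.cos(x))))  # 0
```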
2014-11-26 00:26:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997268319129944, "perplexity": 2220.6352177519407}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931004885.2/warc/CC-MAIN-20141125155644-00009-ip-10-235-23-156.ec2.internal.warc.gz"}
http://clay6.com/qa/10096/if-f-x-left-begin-cos-x-x-2-sin-x-x-2-cos-x-x-2-sin-x-2-x-cos-x-2-x-sin-x-2
# If the function $f(x)$ is given by $\small\left|\begin{array}{ccc} -\cos(x+x^2) & -\sin(x+x^2) & \cos(x+x^2)\\ \sin(x^2-x) & -\cos(x^2-x) & \sin(x^2-x)\\ \sin 2x & 0 & \sin 2x^2 \end{array}\right|$, then find $f(0)$

Options: $2$, $1$, $0$, $-2$

Put $x=0$ in $f(x)$; then we get $f(0)=\left|\begin{array}{ccc}-1 & 0 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & 0\end{array}\right|$. Since all the entries of $R_3$ of the determinant are $0$, the determinant is $0$, so $f(0)=0$.
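A quick numerical cross-check (ours, not part of the original solution) confirms that the determinant vanishes at $x = 0$:

```python
import numpy as np

x = 0.0
M = np.array([
    [-np.cos(x + x**2), -np.sin(x + x**2), np.cos(x + x**2)],
    [ np.sin(x**2 - x), -np.cos(x**2 - x), np.sin(x**2 - x)],
    [ np.sin(2 * x),     0.0,              np.sin(2 * x**2)],
])
print(np.linalg.det(M))  # 0.0, since the third row is identically zero at x = 0
```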
2017-12-18 05:21:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9831599593162537, "perplexity": 599.2686589459291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948608836.84/warc/CC-MAIN-20171218044514-20171218070514-00109.warc.gz"}
https://docs.analytica.com/index.php?title=Wilcoxon_Distribution&oldid=52359
# Wilcoxon Distribution (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Release: 4.6  •  5.0  •  5.1  •  5.2  •  5.3  •  5.4  •  6.0  •  6.1  •  6.2  •  6.3 ## CumWilcoxonInv(p, m, n, exact) The Wilcoxon distribution is a discrete, bell-shaped, non-negative distribution, which describes the distribution of the U-statistic in the Mann-Whitney-Wilcoxon Rank-Sum test when comparing two unpaired samples drawn from the same arbitrary distribution. The rank-sum test is perhaps the most commonly used non-parametric significance test in statistics to detect when one distribution is stochastically greater (or not-equal) to another without making assumption that the underlying distributions are normally distributed. The Wilcoxon distribution function in Analytica returns a random sample from the Wilcoxon distribution (or the Mid-value when evaluated in Mid-mode. When performing a rank-sum statistical test, the related functions CumWilcoxon can be used to compute the p-Value, or CumWilcoxonInv to compute the rejection threshold for a given significance level. ProbWilcoxon gives the probability density analytically (i.e., without using a Monte Carlo sample). Random(Wilcoxon(m, n)) can be used to generate single random variates. The distribution is parameterized by two non-negative numbers: «m» and «n». In a rank-sum test, these correspond to the sample sizes of the data measured from each of the two populations. ## Library Distributions Library (all Wilcoxon functions are built-in functions) ## The U-Statistic Suppose you are given «m» observations from one population and «n» observations from a second population. It is assumed that the observations are ordinal (i.e., have a natural ordering, or less-than relationship). Because they are ordered, you can determine the rank of every observation among all m + n observations. The smallest observation is assigned a rank of 1, and the largest a rank of m + n. For example, suppose your observations consistent of numeric measurements, and you have observed the following measurements: From Population 1: [12.3, 2.3, 8.3] From Population 2: [2.4, 18.1, 1.3, 5.5] The ranks would be: Population 1 ranks: [6, 2, 5] Population 2 ranks: [3, 7, 1, 4] The U-statistic is based entirely on the ranks, rather than on the actual observed values. This eliminates any dependence on a specific distribution type. Let R1 be the sum of the ranks in Population 1. The U-statistic is defined as: $\displaystyle{ U=R1 - {{m(m+1)}\over 2} }$ In the example, $\displaystyle{ R_1=13 }$ and $\displaystyle{ U=7 }$. An equivalent method of obtaining U is to count, for each rank in Population 1, the number of observations in Population 2 that have a smaller rank (using 0.5 for ties). The sum of these counts is U. This second method is more difficult to implement or carry out, but makes it easier to interpret what U represents. If the distributions are the same, then the average count would be n/2, and hence U would be m*n/2. When U differs from this, it is evidence that the two distributions are not equal. 
The Analytica expressions that can be used to compute U from sample D1 indexed by I1 and sample D2 indexed by I2 are as follows:

Variable Sample1Ranks :=
   Index I := Concat(@I1, @I2);
   Var allRanks := Rank(Concat(D1, D2, I1, I2, I), I, type: 0);
   allRanks[I = @I1]

Variable U :=
   Var R1 := Sum(Sample1Ranks, I1);
   R1 - m*(m + 1)/2

## Mann-Whitney-Wilcoxon Rank-Sum Test

Suppose you have the hypothesis that the distribution of a measurable quantity in population 1 is stochastically less than the distribution of the same quantity in population 2. To test this, you carry out an experiment, taking «m» measurements from Population 1 and «n» measurements from Population 2. The U-statistic for these is a bit less than m*n/2. Does this mean that your hypothesis is correct? To determine whether this experimental evidence provides statistically significant confirmation for your hypothesis, compute CumWilcoxon(u, m, n). The result is the probability that you would see a U-value as small or smaller than the one observed if the populations were not different. This probability is known as the p-Value. Typically, when this p-Value is less than 5%, one says that there is statistically significant support for the hypothesis. You can also compute the U-threshold using CumWilcoxonInv(1 - p, m, n), where 1 - «p» is the statistical significance level (e.g., 1 - p = 5% when you want a 95% confidence level). When your measured U-statistic is less than or equal to this value, you would conclude that the hypothesis is supported at a statistically significant level. In statistical parlance, the MWW rank-sum test is a non-parametric test for comparing two unpaired samples.

## Relationship to Parametric Tests

The Student's t-Test is the best-known parametric test for determining whether one distribution is stochastically less than (or not equal to) a second distribution, when both distributions are known to be normally distributed, or at least approximately so. Thus, the key distinction between the rank-sum test and the t-test is the distributional assumption. The rank-sum test is said to be non-parametric since it does not make an assumption about the underlying distributions. Because of the additional assumption, the t-test is usually more powerful, meaning that statistical significance can often be detected with fewer measurements. However, the rank-sum test tends to do pretty well in this regard, and isn't dramatically less powerful than the t-test. It is also far more robust than the t-test, which can make it preferable when outliers are present or your distributions aren't really Normal.

## Computation Time and Memory

The Wilcoxon distribution can require large amounts of time and memory to compute (this is true of all the functions: Wilcoxon, ProbWilcoxon, CumWilcoxon and CumWilcoxonInv), especially when «m» and «n» get large. However, at the same time, as «m» and «n» get large, the distribution approaches a Normal distribution. Hence, the functions automatically switch over to a Normal approximation when the sum of «m» and «n» exceeds 100. At that point, the error of ProbWilcoxon or CumWilcoxon tends to be 0.1% or less (this is just by observation, not a proven bound). You can explicitly control when the exact or approximate computation is used by specifying the boolean «exact» parameter. When specified as true, the exact algorithm is used (which can easily exhaust memory or take an exorbitant amount of time for very large values).
You can switch over to the approximation sooner to save on time and memory by specifying an expression for the «exact» parameter, such as: ProbWilcoxon(m, n, exact: m + n > 50) ## History Introduced in Analytica 4.5.
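For readers outside Analytica, here is a hedged Python translation (ours, not from the documentation) of the rank-sum computation above; it reproduces U = 7 for the worked example. Note that it assumes no ties, unlike the counting method described earlier, which uses 0.5 for ties.

```python
def u_statistic(sample1, sample2):
    """Mann-Whitney U for sample1, via ranks over the pooled data.

    Ranks are 1-based over the combined samples; ties would need
    midranks, which this minimal sketch omits.
    """
    pooled = sorted(sample1 + sample2)
    rank = {v: i + 1 for i, v in enumerate(pooled)}   # no ties assumed
    r1 = sum(rank[v] for v in sample1)
    m = len(sample1)
    return r1 - m * (m + 1) / 2

print(u_statistic([12.3, 2.3, 8.3], [2.4, 18.1, 1.3, 5.5]))  # 7.0
```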
2023-03-27 07:35:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8447940945625305, "perplexity": 1254.0382256087364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00120.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-7th-edition/chapter-1-equations-and-graphs-section-1-1-the-coordinate-plane-1-1-exercises-page-92/11
College Algebra 7th Edition RECALL: The point $(x, y)$ has: $x$ = directed distance of the point from the y-axis (negative when to the left of the y-axis, positive when to the right) $y$ = directed distance of the point from the x-axis (negative when below the x-axis, positive when above) Thus, the set $\left\{(x, y)\mid x=-4\right\}$ represents the set of points that are 4 units to the left of the y-axis. These points form a vertical line whose equation is $x=-4$. (refer to the attached image in the answer part)
2018-04-26 02:37:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7406499981880188, "perplexity": 293.46689218416105}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948047.85/warc/CC-MAIN-20180426012045-20180426032045-00527.warc.gz"}
https://math.stackexchange.com/questions/2932027/proving-the-the-x-axis-is-closed-and-not-open
# Proving that the x-axis is closed and not open We have the x-axis $$X=\{ (x,y): y =0 \}$$. I want to prove it is closed. In other words, we can show the complement is open. That is, we want to find a ball that is completely contained in $$\mathbb{R}^2 \setminus X$$. Let $$(a,b) \in \mathbb{R}^2 \setminus X$$ be arbitrary. Will a ball of radius $$|b|$$ do? Since $$b$$ is never zero, $$|b| > 0$$, so with $$\epsilon = |b|$$ the ball $$B_{\epsilon}((a,b))$$ must be in $$\mathbb{R}^2 \setminus X$$. Indeed, if $$(t,z)$$ is in the ball, then $$(t-a)^2 + (z-b)^2 < |b|^2.$$ We want to prove that $$(t,z)$$ must be in $$\mathbb{R}^2 \setminus X$$ as well. In other words, we need to prove that $$z \neq 0$$. Notice that from the above, after distributing, we obtain $$(t-a)^2 + z^2 - 2zb + b^2 < b^2 \implies (t-a)^2<2zb-z^2=z(2b-z)$$ Here is where I get stuck. How can we show that the above is less than $$z$$? That way we get $$z > 0$$ and so prove the result. Is my approach correct? To show it is not open, isn't the previous part showing this result as well? • You have a mistake in the 5th line: $(a, b)$ should not be in $X$. – dmtri Sep 26 '18 at 19:00 • Do a proof by contradiction. If $z =0$ what happens? You get... $(t-a)^2 < 0$. Is that possible? – fleablood Sep 26 '18 at 19:04 Your approach is correct, but quite complicated. When you have $$(t-a)^2 + (z-b)^2 < |b|^2$$ you can immediately tell that $$z \neq 0$$. Indeed, if $$z=0$$, you would have $$(t-a)^2 + b^2 < b^2,$$ i.e. $$(t-a)^2 < 0,$$ which is absurd. You may consider two different cases for your point $$(a,b)$$: one case where $$b>0$$ and the other case where $$b<0$$. Then it is more straightforward to show that $$z\ne 0$$. Notice $$p=(a,b) \not \in X \iff b \ne 0$$. So take $$\epsilon = |b|$$. Let $$(x,y) \in B(p,\epsilon)$$. Prove that if $$d((x,y),(a,b)) = \sqrt {(x-a)^2 + (y-b)^2} < |b|,$$ then $$y \ne 0$$. And that's fairly easy to do, as $$y = 0\implies \sqrt{(x-a)^2 + (y-b)^2} = \sqrt{(x -a)^2 + (-b)^2} \ge \sqrt{(-b)^2} = |b|.$$ Do you see that that proves $$X^c$$ is open?
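As a quick numerical illustration of the accepted argument (our own sketch, not from the thread): sampling points from the open ball $$B((a,b), |b|)$$ never yields a point on the x-axis.

```python
import random
import math

a, b = 1.5, -0.75          # an arbitrary point off the x-axis
for _ in range(100_000):
    # Rejection-sample a point of the open ball of radius |b| around (a, b).
    while True:
        dx = random.uniform(-abs(b), abs(b))
        dy = random.uniform(-abs(b), abs(b))
        if math.hypot(dx, dy) < abs(b):
            break
    # |dy| < |b| forces b + dy to have the same sign as b, hence nonzero.
    assert b + dy != 0, "found a point of the ball on the x-axis"
print("no sampled point of the ball touched the x-axis")
```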
2019-07-20 05:00:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 31, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475470781326294, "perplexity": 115.73571421252102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526446.61/warc/CC-MAIN-20190720045157-20190720071157-00362.warc.gz"}
https://dlmf.nist.gov/25.17
# §25.17 Physical Applications Analogies exist between the distribution of the zeros of $\zeta\left(s\right)$ on the critical line and of semiclassical quantum eigenvalues. This relates to a suggestion of Hilbert and Pólya that the zeros are eigenvalues of some operator, and the Riemann hypothesis is true if that operator is Hermitian. See Armitage (1989), Berry and Keating (1998, 1999), Keating (1993, 1999), and Sarnak (1999). The zeta function arises in the calculation of the partition function of ideal quantum gases (both Bose–Einstein and Fermi–Dirac cases), and it determines the critical gas temperature and density for the Bose–Einstein condensation phase transition in a dilute gas (Lifshitz and Pitaevskiĭ (1980)). Quantum field theory often encounters formally divergent sums that need to be evaluated by a process of regularization: for example, the energy of the electromagnetic vacuum in a confined space (Casimir–Polder effect). It has been found possible to perform such regularizations by equating the divergent sums to zeta functions and associated functions (Elizalde (1995)).
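For instance, the ideal-Bose-gas condensation temperature follows directly from $\zeta\left(3/2\right)$ via the standard relation $n\lambda_T^3=\zeta\left(3/2\right)$; the sketch below is our own illustration (the density and atomic species are invented example values, not from the handbook):

```python
import numpy as np
from scipy.special import zeta

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def bec_critical_temperature(n, m):
    """T_c for an ideal Bose gas, from n * lambda_T^3 = zeta(3/2)."""
    return (2 * np.pi * hbar**2 / (m * kB)) * (n / zeta(1.5, 1)) ** (2 / 3)

# Example: Rb-87 at a typical trap density of 1e20 m^-3 (illustrative values)
m_rb87 = 86.909 * 1.66053906660e-27  # kg
print(bec_critical_temperature(1e20, m_rb87))  # ~4e-7 K, a few hundred nK
```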
2018-06-20 20:56:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 1, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8397085070610046, "perplexity": 496.0459713841292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863886.72/warc/CC-MAIN-20180620202232-20180620222232-00379.warc.gz"}
https://cs.stackexchange.com/questions/26214/need-an-algorithm-to-find-the-input-factors-that-are-most-affecting-the-output
# Need an algorithm to find the input factors that are most affecting the output

I apologize if this question is already answered and appreciate any pointers to existing answers. I'm not familiar with statistical or data mining terms, so my search was limited to basic words used in the title of this question. I did look at some algorithms but could not decide if they serve the purpose. I'm looking for an algorithm that can tell me, with some probability or confidence, which inputs out of a given set of factors are affecting the output most. Let us take an example below and assume that input and output will be recorded 4 times a day for a month.

Output:
• Phone's battery drainage
• Phone heating up (temperature).

Input:
• Ambient temperature
• Which apps were used most when the output was measured (multiple input points)
• Number of calls received and call length
• Screen brightness
• GPS connectivity enabled?
• Bluetooth enabled?
• Cellular signal strength

If we measure this data for a month, we will find that cellular signal strength and screen brightness probably will be linked to battery drainage most. On the other hand, 3D resource-intensive games will be linked to the phone heating up the most. Can I deduce such relationships using any existing algorithm(s)?

## Fundamental limitations on what you can achieve

The principled answer is: No, you can't reliably deduce such causal connections. When you have only observations of the data, you can use statistical analysis to look for correlations, but you can't deduce causation. Correlation does not imply causation. You are looking for causality, but you cannot infer causality from field data (from external, uncontrolled observations of a system). To test causality you need a controlled experiment. This is a problem not just in principle, but often in practice as well. In practice, we might have correlations among your "independent variables" or confounding factors you haven't accounted for, which causes correlations that aren't an indication of causality. For instance, you might find that ambient temperature is correlated to display brightness (because at night it is cooler and darker, so phones turn down display brightness, whereas at day it is warmer and brighter, so phones turn up display brightness to compensate for the daytime light). Now we'd expect that when the display brightness is higher, the battery will be drained faster. Can you deduce this causal connection from statistical inference applied to some observations? Well, you might observe a correlation between display brightness and battery drain rate, which sounds good. But if display brightness is correlated to ambient temperature, then when you do a statistical analysis, you might also discover a correlation between ambient temperature and display brightness -- and that's a spurious correlation that doesn't indicate any causal connection. That said, the pragmatic answer is to use statistical analysis of your data to look for correlations, and then hope that at least some of the correlations will represent something real. You have to go into it with eyes open knowing that some of the correlations will be spurious, for the reasons explained above, but maybe that's OK. Maybe statistical analysis can still be useful to you. So, how should you do the statistical analysis? There are many possible ways. It's hard to say which is best; that probably depends upon what relationships between the input and output you think are plausible.
But a very simple approach that I suspect will work well for you is to use linear regression. Try to find a linear model that expresses an output variable (e.g., the battery drain rate) as a weighted linear combination of the input variables. Linear regression can find the best weights for you, the ones that minimize the prediction error. Then, the size of those weights tells you exactly what you want to know. Suppose you are predicting the battery drain rate, and in your linear model the weight for the screen brightness input variable is 0.7. This means that if you increase the screen brightness by adding $\Delta$, then the battery drain rate is predicted to increase by adding $0.7 \Delta$.

## Tricky details

There are a few things about your specific problem that might be a bit tricky. One tricky aspect is the business about apps, and how to code that into a linear regression framework. One way is to pick a few categories of apps, and then have a separate indicator input variable for each category (0 if that app is not present, 1 if it is present). However, you don't want to have too many input variables, because that will require a lot more data to reliably form a linear model. A few dozen app categories might be fine, but thousands of input variables for the thousands of different possible apps probably would not be. Another tricky aspect arises if you expect there to be non-linear dependencies, and you want to find them. For instance, you mention in a comment the possibility that low cellphone signal strength combined with running a 3D game might increase battery drain rate in a way that neither alone would. If you expect those kinds of non-linearities, one standard way to deal with them is to introduce new input variables for "interaction effects": e.g., you introduce a new input variable which is the product of the signal strength times the indicator variable that indicates whether a 3D game is running. (These new input variables are derived from the other input variables, but in a non-linear way.) You can introduce one new "interaction" input variable for each pair of original input variables that you think might exhibit a combined effect of this sort (where the effect of their combination is greater than the sum of their individual effects). This is a standard technique that you can find described in good tutorials on linear regression. Ultimately, there's lots more to say on statistical data analysis, linear regression, and applying it to practice -- more than I can write in a single answer. I recommend you read a good tutorial or two on linear regression and maybe the relevant chapters of some statistical textbooks. If you still have more questions after all of that, it's also worth knowing about Cross Validated, a Stack Exchange site on statistics.

Well, you are asking for derivatives! You want to measure how the change of an input affects the output. That's exactly a derivative. Call "Cellular signal strength" $x$ and "Phone's battery drainage" $f(x)$. Let's say that you fix the cellular signal strength to a level of 3 ($x_0=3$). You measure a corresponding phone's battery drainage of 10% / hour ($f(x_0) = f(3) =$ 10% / hour). Then you change the signal strength to a level of 4 ($x_1=4$). The phone's battery drainage dropped to 7% / hour ($f(x_1) = f(4) =$ 7% / hour).
This way you estimate a corresponding change of the output with respect to the input (aka derivative) of $\frac{f(x_1) - f(x_0)}{x_1 - x_0} = \frac{7 - 10}{4-3}$ % / hour = $-3$ % / hour = $f'(\xi)$ with $\xi \in [x_0, x_1]$. You estimate that a change of signal level of 1 gives you a battery drainage 3% less than before. Note that the information it gives you is just "local": that value is calculated in the proximity of level 3, so close to level 1 you could get a very different change.

How is that useful? Let's say that you repeat the same measurement, but this time regarding temperature. You might find out that a temperature change of 1 degree raises your battery drainage by 0.1% / hour. Now you can compare the different effects: a small change in the signal level gives you a big change in the battery drainage, while a small change in temperature gives you a very small change. Is it what you were asking for?

A final remark: here it is very important how you gather data, how fine (= resolution) the data is, and how often you get it. If you want to gather data from a mobile phone that is under actual use and is not being tested in a controlled environment, you have to make sure that the polling rate is high enough: ambient temperature might change slowly (minutes), but signal strength might change pretty fast (seconds). Thus you have to make sure that you can resolve these fast changes.

Let's formalize a little bit. We say that the input is a vector $\vec{x} = (x_1, x_2, x_3, x_4, ...)$, $\vec{x} \in S$, where $S$ is the set of the possible values of the input. Then the output is $f = f(\vec{x})$, a function that takes the tuple $\vec{x} = (x_1, x_2, x_3, x_4, ...)$ and gives you the corresponding output. What you are computing here are the partial derivatives of $f(\vec{x})$: $\frac{\partial f(\vec{x})}{\partial x_1}|_{\vec{x} = (x_1, x_2, x_3, x_4, ...)}$, $\frac{\partial f(\vec{x})}{\partial x_2}|_{\vec{x} = (x_1, x_2, x_3, x_4, ...)}$, etc. The partial derivatives are computed at particular values of the input. Look at the partial derivatives as the corresponding change of the output caused by a little variation of the input. The bigger the partial derivative, the bigger the effect. Note that the partial derivatives are calculated at a particular combination of the input. That is: at a particular screen brightness, at a particular battery level, at a particular temperature, you estimate the effect of changing the signal level by a little. If you are lucky, you will see that to some extent some variables are not influential to the partial derivative: for example, the partial derivative might not change whether GPS connectivity is enabled or not (though it probably will).
If you're just gathering data from a phone that's in actual use, you won't have the opportunity to compute derivatives exactly (you probably won't observe two time periods where all the input variables are the same except for just one). Therefore, in that setting, something like linear regression or some other statistical estimation technique is probably going to be more effective. – D.W. May 30 '14 at 16:39
• @D.W. It depends on the polling rate and the size of the state space. If it's not too big and the polling rate is high enough, you could estimate the gradient almost everywhere without too much hassle. Furthermore, everything here is probably almost linear, so a regression analysis is definitely the way to go. – Lelesquiz May 30 '14 at 16:51
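To make the linear-regression suggestion concrete, here is a minimal editorial sketch (not from the original thread) on synthetic data with made-up variable names; the interaction column implements the "weak signal while running a 3D game" idea discussed above:

```python
# Minimal sketch, assuming numpy and scikit-learn are installed.
# All variable names and the "true" model are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500

brightness = rng.uniform(0, 1, n)     # screen brightness, 0..1
signal = rng.uniform(1, 5, n)         # cellular signal level, 1..5
game_3d = rng.integers(0, 2, n)       # indicator: is a 3D game running?

# Hypothetical ground truth: weak signal and a 3D game interact.
drain = (2.0 + 0.7 * brightness - 0.3 * signal
         + 1.5 * game_3d * (5 - signal) + rng.normal(0, 0.2, n))

# Interaction feature: "3D game running while the signal is weak".
interaction = game_3d * (5 - signal)
X = np.column_stack([brightness, signal, game_3d, interaction])

model = LinearRegression().fit(X, drain)
for name, w in zip(["brightness", "signal", "game_3d", "game*weak_signal"],
                   model.coef_):
    print(f"{name:>18}: weight {w:+.2f}")
# A large absolute weight means that input (or interaction) strongly
# affects the predicted drain rate.
```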
2019-10-22 06:55:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4719884693622589, "perplexity": 518.085883756854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987803441.95/warc/CC-MAIN-20191022053647-20191022081147-00312.warc.gz"}
http://mathhelpforum.com/calculus/142580-continuity.html
# Math Help - Continuity

1. ## Continuity

The function |x - 2|/(x^2 - 5x + 6) is not defined for x = 2. Is there a value that could be given for f(2) that would make the function continuous at 2?

2. You dropped an x in the denominator, I think. Try factoring.

3. Quote: "You dropped an x in the denominator, I think. Try factoring."
Yeah, my bad. You factorise it and simplify to get 1/(x-3), but then what would that mean?

4. Are you sure that's how it reduces? Make sure to consider both x > 2 and x < 2...

5. Quote: "Are you sure that's how it reduces? Make sure to consider both x > 2 and x < 2..."
|x-2|/((x-3)(x-2)) = 1/(x-3), or can't I cancel them because of the absolute value?

6. You tell me. Are both sides of that equation actually equal for all x? What about x = 1?

7. Because

$f(x)= \frac{|x-2|}{x^{2}- 5 x + 6} = \left\{\begin{array}{ll} \frac{1}{x-3} ,\,\, x >2\\{}\\ \frac{1}{3-x} ,\,\, x<2\end{array}\right.$ (1)

we have

$\lim_{x \rightarrow 2 +} f(x)= -1$ and $\lim_{x \rightarrow 2 -} f(x)= 1$ (2)

so f(x) isn't continuous at $x=2$, no matter which value you assign to it at that point...

Kind regards, $\chi$ $\sigma$

8. ^ Thank you.
Quote: "You tell me. Are both sides of that equation actually equal for all x? What about x=1?"
Time waster...

9. Then they're not equal. Anyway, chisigma posted a solution now.
Edit: yeah, because it's such a waste of time to know that 1/2 is not -1/2.

10. Maddas' point when he wrote "You tell me. Are both sides of that equation actually equal for all x?" was that you cannot cancel "x - 2", because at x = 2 that would be dividing by 0. $\frac{(x-3)(x-2)}{x-2}= x-3$ for all x except x = 2. That's why the original function, $\frac{|x- 2|}{x^2- 5x+ 6}$, is not defined at x = 2. Fortunately, the definition of "$\lim_{x\to a} f(x)= L$" says: "Given $\epsilon> 0$, there exists $\delta> 0$ such that if $0 < |x- a| < \delta$, then $|f(x)- L|< \epsilon$". That "$0 < |x- a|$" (which many people forget to write) means that the value of f(x) at x = a has no effect whatsoever on the limit as x goes to a. Although $\frac{(x- 3)(x- 2)}{x- 2}$ is not defined at x = 2, it has a limit there, and you can make that function continuous by defining its value at 2 to be that limit.
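A quick numeric check of the one-sided limits in post 7 (an editorial sketch, not part of the original thread; assumes SymPy is installed):

```python
# Verify that the one-sided limits at x = 2 differ, so no value of f(2)
# can make the function continuous there.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Abs(x - 2) / (x**2 - 5*x + 6)

print(sp.limit(f, x, 2, dir='+'))  # -1
print(sp.limit(f, x, 2, dir='-'))  # 1
```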
2015-02-01 01:24:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625112175941467, "perplexity": 1316.5925162879905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115857200.13/warc/CC-MAIN-20150124161057-00154-ip-10-180-212-252.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3036140/prove-all-points-lie-on-a-common-circle
# Prove all points lie on a common circle

Question: Let P, Q, R be points on the sides AB, BC, CA, respectively, of a triangle ABC. Assume that the circumscribed circles of the triangles PBQ and QCR intersect at points Q and S. Prove that the points A, P, S, R lie on a common circle.

Here is what I have so far: The points APSR form a quadrilateral. The points of this quadrilateral all lie on a common circle if and only if its opposite angles add up to 180 degrees. To prove they all lie on a common circle, we can look at the circumscribed circle of three of the points and try to prove somehow that the fourth point lies on the same circle. The circle that seems best for this is the circumscribed circle of ARP. I am not sure how to go about this. Maybe one can look at the angles subtended by various arcs on the circle and come to a conclusion somehow? For example, angle ASP is equal to angle ARP if the points lie on the same circle, as the angles are subtended by the same arc. Any help is appreciated.

You were on the right track: the opposite angles of a cyclic quadrilateral add up to $$180$$. So: $$\angle PBQ + \angle PSQ = 180 \text{ and } \angle RCQ + \angle RSQ = 180.$$ That is $$\angle PBQ + \angle PSQ + \angle RCQ + \angle RSQ = 360,$$ and so $$(\angle PBQ + \angle RCQ + \angle PAR) + (\angle PSQ + \angle RSQ + \angle PSR) = 360 + \angle PAR + \angle PSR.$$ Now note that in the above identity, $$\angle PBQ + \angle RCQ + \angle PAR = 180 \text{ and } \angle PSQ + \angle RSQ + \angle PSR = 360,$$ since the first sum is the angle sum of triangle $$ABC$$ and the second is the full angle around $$S$$. Substituting these gives $$180 + 360 = 360 + \angle PAR + \angle PSR,$$ so $$\angle PAR + \angle PSR = 180$$, and therefore $$A, P, S, R$$ lie on a common circle.
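For readers who want to sanity-check the result numerically, here is a small editorial sketch (not part of the original question or answer) that builds a concrete configuration, computes S as the second intersection of the two circumcircles, and verifies that S lies on the circumcircle of A, P, R:

```python
import numpy as np

def circumcenter(p, q, r):
    # Solve |X - p|^2 = |X - q|^2 = |X - r|^2 as a 2x2 linear system.
    (ax, ay), (bx, by), (cx, cy) = p, q, r
    M = np.array([[bx - ax, by - ay], [cx - ax, cy - ay]])
    b = 0.5 * np.array([bx**2 - ax**2 + by**2 - ay**2,
                        cx**2 - ax**2 + cy**2 - ay**2])
    return np.linalg.solve(M, b)

A = np.array([0., 4.]); B = np.array([-3., 0.]); C = np.array([5., 0.])
P = 0.6 * A + 0.4 * B   # on side AB
Q = 0.3 * B + 0.7 * C   # on side BC
R = 0.5 * C + 0.5 * A   # on side CA

O1, O2 = circumcenter(P, B, Q), circumcenter(Q, C, R)
# Both circles pass through Q; the second intersection S is the
# reflection of Q across the line of centers O1O2.
d = O2 - O1
foot = O1 + (np.dot(Q - O1, d) / np.dot(d, d)) * d
S = 2 * foot - Q

O = circumcenter(A, P, R)
# ~0 means S is on the circumcircle of A, P, R, i.e. APSR is concyclic.
print(np.linalg.norm(S - O) - np.linalg.norm(A - O))
```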
2021-11-27 23:47:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.564070999622345, "perplexity": 90.63361427581887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358323.91/warc/CC-MAIN-20211127223710-20211128013710-00127.warc.gz"}
https://mirandahinkleylmt.com/flcplf7/what-is-mass-percent-composition-why-is-it-useful-32f8ae
What is mass percent composition, and why is it useful? (This question appears as Problem 14RQ from Chapter 3 of Chemistry: A Molecular Approach, 2nd Edition, and as Problem 23E from Chapter 4 of Chemistry, 2nd Edition.)

Mass percent composition, also known as percent by weight and abbreviated w/w%, describes the relative quantities of elements in a chemical compound. A chemical compound is the combination of two or more elements, and percent composition tells you what percent of the compound's total mass each element contributes. Determining it requires knowing the mass of the entire object or molecule and the mass of its components; it is calculated in a similar way to the composition of the peanut butter discussed earlier. The basic equation is

$\% \: \text{by mass} = \dfrac{\text{mass of element}}{\text{mass of compound}} \times 100\%$

For instance, if you had an 80.0 g sample of a compound that was 20.0 g element X and 60.0 g element Y, the percent composition would be 25% X and 75% Y. For water, the percent composition by mass is about 11.2% hydrogen and 88.8% oxygen (not 20% and 80%, as is sometimes misquoted). The percentage of one element can also be found by difference: for a compound that is 40.9% carbon and 4.5% hydrogen, 100% - 40.9% - 4.5% = 54.6% is oxygen. Percentage is also used to express the composition of a mixture by mass percent and mole percent; related units include 1‰, 1‱, 1 pcm and 1 ppm.

Why is it useful? If you know the percent composition of a compound, you can easily derive its empirical formula, which can lead to the molecular formula for compounds with covalent bonding. The procedure can be broken down into three easy steps: express the composition in grams and convert to moles of each element; divide each molar amount by the smallest molar amount; finally, the mole ratio (which is also the atom ratio) is determined. As an example, for glucose (C6H12O6) one can determine the molar mass, then the percent composition by mass of each element, and recover the atom ratio 1 C : 2 H : 1 O. In the laboratory, density can be used to identify an element, while percent composition is used to determine the amount, by mass, of each element present in a chemical compound. Example problems: What is the percent composition of each element in copper(II) sulphide? What is the percent composition of iron in iron(III) chloride?

Percent composition should not be confused with percent yield, which is the proportion of intended product that results from a process relative to the ideal: if you thought you had enough material to make 5 grams but only got 4 grams, your percent yield would be 80%.

For solutions, mass percent = (grams of solute / grams of solution) × 100, and using mass percent as a conversion factor can be useful in such problems. For example, 6 g of NaOH in 56 g of solution gives (6 / 56) × 100 ≈ 10.7% NaOH. Another example: find the masses of sodium chloride and water required to obtain 175 g of a 15% solution. A further version of a percentage concentration is mass/volume percent, which measures the mass of solute in grams against the volume of solution in mL; medical saline is a 0.9% (w/v) NaCl solution, containing 0.9 g of NaCl for every 100 mL of solution. The key difference between mass percent and percent composition is that mass percent gives the ratio between the mass of one component of a mixture and the total mass of the mixture, whereas percent composition gives the mass percentages of every chemical element in it.

For a hydrate such as CuSO4·xH2O, is it more useful to have the mass percent of water in the hydrate or the percentage composition, assuming you know the formula of the associated anhydrous compound? Explain your answer. One view is that the two are equivalent, because %CuSO4 = 100% - %H2O and %CuSO4 = M(CuSO4) / M(CuSO4·xH2O) × 100, which can then be solved for x, therefore finding the molecular formula. Another view is that knowing the full mass percentage composition of the hydrate provides more information. A related laboratory question: why calculate the percent change in mass rather than simply using the change in mass? Because the raw difference in mass does not capture the proportional aspect of the solutions, making the results less accurate.

Body composition is a different use of the word: a method of describing what the body is made of, namely fat, protein, minerals and body water. Body Mass Index (BMI) is a score that results from measuring a person's mass (weight) and height, but body composition describes weight more accurately, because it does not rely on height and weight alone to measure leanness. Body composition analysis can accurately show changes in fat mass, muscle mass and body fat percentage, which can help validate services like personal training, patient care and corporate wellness; its downside is that it is not as simple as BMI and not as practical for larger groups. Body composition, including body fat percentage, matters a lot.

What is composition in photography, and why is it important? Once you've metered the shot and determined the right combination of shutter speed and aperture, you then create the composition. The term "create" is intentional: my own philosophy when capturing a photo is not to focus on a subject, but to focus on the image.
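To make the arithmetic concrete, here is a small editorial sketch (not from the original page; atomic masses are rounded) that computes mass percent composition from a formula's element counts:

```python
# Compute mass percent composition for water, H2O.
ATOMIC_MASS = {"H": 1.008, "O": 15.999}  # g/mol, rounded values

def percent_composition(formula_counts):
    """Return {element: mass percent} for a formula given as counts."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())
    return {el: 100 * ATOMIC_MASS[el] * n / total
            for el, n in formula_counts.items()}

print(percent_composition({"H": 2, "O": 1}))
# ≈ {'H': 11.2, 'O': 88.8} by mass, matching the corrected figures above.
```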
2021-02-27 12:20:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.573664128780365, "perplexity": 1405.7690740282846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358956.39/warc/CC-MAIN-20210227114444-20210227144444-00062.warc.gz"}
https://brilliant.org/problems/an-algebra-problem-by-kaito-einstein-2/
# 5-digit number

Algebra Level 1

What $$5$$-digit number has the following property: if we put a $$1$$ on the left of the number, we get a number $$3$$ times smaller than if we put a $$1$$ on the right of the number?
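A quick way to confirm the property is a brute-force search; this short editorial sketch (not part of the original problem page, and note that it reveals the answer) checks every 5-digit number:

```python
# For each 5-digit n, compare "1" prepended vs "1" appended.
for n in range(10000, 100000):
    left = int("1" + str(n))    # 1 on the left
    right = int(str(n) + "1")   # 1 on the right
    if right == 3 * left:
        print(n)                # the unique solution
```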
2016-10-23 22:07:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6785142421722412, "perplexity": 216.26189838593015}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719437.30/warc/CC-MAIN-20161020183839-00532-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.jbstudies.com/2022/09/class-12-maths-miscellaneous-exercise-chapter11.html
# NCERT Solutions Class 12 Maths (Three Dimensional Geometry) Miscellaneous Exercise

In NCERT Solutions Class 12 Maths, Class 12 students will find the answers for Chapter 11 (Three Dimensional Geometry), Miscellaneous Exercise. This chapter will help you learn the basics, and you should expect at least one question from it in your exam. We have given the answers to all the questions of the NCERT Board mathematics textbook in very easy language, which makes them easy for students to understand and remember, so that you can pass your examination with good marks.

Q1. Show that the line joining the origin to the point (2, 1, 1) is perpendicular to the line determined by the points (3, 5, -1), (4, 3, -1).

Answer. Let OA be the line joining the origin O(0, 0, 0) and the point A(2, 1, 1). Also, let BC be the line joining the points B(3, 5, -1) and C(4, 3, -1). The direction ratios of OA are 2, 1, 1 and those of BC are (4 - 3) = 1, (3 - 5) = -2 and (-1 + 1) = 0. OA is perpendicular to BC if $a_1a_2 + b_1b_2 + c_1c_2 = 0$.

$\therefore a_1a_2 + b_1b_2 + c_1c_2 = 2 \times 1 + 1 \times (-2) + 1 \times 0 = 2 - 2 = 0$

Thus, OA is perpendicular to BC.

Q2. If $l_1, m_1, n_1$ and $l_2, m_2, n_2$ are the direction cosines of two mutually perpendicular lines, show that the direction cosines of the line perpendicular to both of these are $m_1n_2 - m_2n_1,\; n_1l_2 - n_2l_1,\; l_1m_2 - l_2m_1$.

Answer. It is given that $l_1, m_1, n_1$ and $l_2, m_2, n_2$ are the direction cosines of two mutually perpendicular lines. Therefore,

$l_1l_2 + m_1m_2 + n_1n_2 = 0 \quad (1)$
$l_1^2 + m_1^2 + n_1^2 = 1 \quad (2)$
$l_2^2 + m_2^2 + n_2^2 = 1 \quad (3)$

Let $l, m, n$ be the direction cosines of the line which is perpendicular to both given lines:

$ll_1 + mm_1 + nn_1 = 0$
$ll_2 + mm_2 + nn_2 = 0$

$\therefore \frac{l}{m_1n_2 - m_2n_1} = \frac{m}{n_1l_2 - n_2l_1} = \frac{n}{l_1m_2 - l_2m_1}$

$\Rightarrow \frac{l^2}{(m_1n_2 - m_2n_1)^2} = \frac{m^2}{(n_1l_2 - n_2l_1)^2} = \frac{n^2}{(l_1m_2 - l_2m_1)^2} = \frac{l^2 + m^2 + n^2}{(m_1n_2 - m_2n_1)^2 + (n_1l_2 - n_2l_1)^2 + (l_1m_2 - l_2m_1)^2} \quad (4)$

$l, m, n$ are the direction cosines of the line.
$\therefore l^2 + m^2 + n^2 = 1 \quad (5)$

It is known that

$(l_1^2 + m_1^2 + n_1^2)(l_2^2 + m_2^2 + n_2^2) - (l_1l_2 + m_1m_2 + n_1n_2)^2 = (m_1n_2 - m_2n_1)^2 + (n_1l_2 - n_2l_1)^2 + (l_1m_2 - l_2m_1)^2$

From (1), (2), and (3), we obtain

$1 \cdot 1 - 0 = (m_1n_2 - m_2n_1)^2 + (n_1l_2 - n_2l_1)^2 + (l_1m_2 - l_2m_1)^2$

$\Rightarrow (m_1n_2 - m_2n_1)^2 + (n_1l_2 - n_2l_1)^2 + (l_1m_2 - l_2m_1)^2 = 1 \quad (6)$

Substituting the values from equations (5) and (6) in equation (4), we obtain

$\frac{l^2}{(m_1n_2 - m_2n_1)^2} = \frac{m^2}{(n_1l_2 - n_2l_1)^2} = \frac{n^2}{(l_1m_2 - l_2m_1)^2} = 1$
$\Rightarrow l = m_1n_2 - m_2n_1,\; m = n_1l_2 - n_2l_1,\; n = l_1m_2 - l_2m_1$

Thus, the direction cosines of the required line are $m_1n_2 - m_2n_1,\; n_1l_2 - n_2l_1,\; l_1m_2 - l_2m_1$.

Q3. Find the angle between the lines whose direction ratios are $a, b, c$ and $b-c, c-a, a-b$.

Answer. The angle Q between the lines with direction ratios $a, b, c$ and $b-c, c-a, a-b$ is given by

$\cos Q = \left|\frac{a(b-c) + b(c-a) + c(a-b)}{\sqrt{a^2 + b^2 + c^2}\,\sqrt{(b-c)^2 + (c-a)^2 + (a-b)^2}}\right| = 0,$

since the numerator $a(b-c) + b(c-a) + c(a-b) = ab - ac + bc - ab + ca - bc = 0$.

$\Rightarrow Q = \cos^{-1} 0 = 90^\circ$

Thus, the angle between the lines is $90^\circ$.

Q4. Find the equation of a line parallel to the x-axis and passing through the origin.

Answer. The line parallel to the x-axis and passing through the origin is the x-axis itself. Let A be a point on the x-axis; therefore, the coordinates of A are (a, 0, 0), where $a \in \mathbb{R}$. The direction ratios of OA are (a - 0) = a, 0, 0. The equation of OA is given by

$\frac{x-0}{a} = \frac{y-0}{0} = \frac{z-0}{0} \;\Rightarrow\; \frac{x}{1} = \frac{y}{0} = \frac{z}{0} = a$

Thus, the equation of the line parallel to the x-axis and passing through the origin is $\frac{x}{1} = \frac{y}{0} = \frac{z}{0}$.

Q5. If the coordinates of the points A, B, C, D are (1, 2, 3), (4, 5, 7), (-4, 3, -6) and (2, 9, 2) respectively, then find the angle between the lines AB and CD.

Answer. The coordinates of A, B, C, D are (1, 2, 3), (4, 5, 7), (-4, 3, -6) and (2, 9, 2) respectively. The direction ratios of AB are (4 - 1) = 3, (5 - 2) = 3, and (7 - 3) = 4. The direction ratios of CD are (2 - (-4)) = 6, (9 - 3) = 6, and (2 - (-6)) = 8. It can be seen that

$\frac{a_1}{a_2} = \frac{b_1}{b_2} = \frac{c_1}{c_2} = \frac{1}{2}$

Therefore, AB is parallel to CD. Thus, the angle between AB and CD is either $0^\circ$ or $180^\circ$.

Q6. If the lines with direction ratios $-3, 2k, 2$ and $3k, 1, -5$ are perpendicular, find the value of k.

Answer. The direction ratios of the lines are -3, 2k, 2 and 3k, 1, -5 respectively. It is known that two lines with direction ratios $a_1, b_1, c_1$ and $a_2, b_2, c_2$ are perpendicular if $a_1a_2 + b_1b_2 + c_1c_2 = 0$.

$\therefore -3(3k) + 2k \times 1 + 2 \times (-5) = 0 \Rightarrow -9k + 2k - 10 = 0 \Rightarrow 7k = -10 \Rightarrow k = -\frac{10}{7}$

Therefore, for $k = -\frac{10}{7}$ the given lines are perpendicular to each other.

Q7. Find the vector equation of the line passing through (1, 2, 3) and perpendicular to the plane $\vec{r} \cdot (\hat{i} + 2\hat{j} - 5\hat{k}) + 9 = 0$.

Answer.
The position vector of the point (1, 2, 3) is $\vec{r}_1 = \hat{i} + 2\hat{j} + 3\hat{k}$. The direction ratios of the normal to the plane $\vec{r} \cdot (\hat{i} + 2\hat{j} - 5\hat{k}) + 9 = 0$ are 1, 2, and -5, and the normal vector is $\vec{N} = \hat{i} + 2\hat{j} - 5\hat{k}$. The equation of the line passing through the point and perpendicular to the given plane (i.e., parallel to $\vec{N}$) is given by

$\vec{l} = \vec{r}_1 + \lambda \vec{N} = (\hat{i} + 2\hat{j} + 3\hat{k}) + \lambda(\hat{i} + 2\hat{j} - 5\hat{k}), \quad \lambda \in \mathbb{R}$

Q8. Find the equation of the plane passing through (a, b, c) and parallel to the plane $\vec{r} \cdot (\hat{i} + \hat{j} + \hat{k}) = 2$.

Answer. Any plane parallel to the plane $\vec{r} \cdot (\hat{i} + \hat{j} + \hat{k}) = 2$ is of the form

$\vec{r} \cdot (\hat{i} + \hat{j} + \hat{k}) = \lambda \quad (1)$

The plane passes through the point (a, b, c), so the position vector of this point is $\vec{r} = a\hat{i} + b\hat{j} + c\hat{k}$. Therefore, equation (1) becomes

$(a\hat{i} + b\hat{j} + c\hat{k}) \cdot (\hat{i} + \hat{j} + \hat{k}) = \lambda \;\Rightarrow\; a + b + c = \lambda$

Substituting $\lambda = a + b + c$ in equation (1), we obtain

$\vec{r} \cdot (\hat{i} + \hat{j} + \hat{k}) = a + b + c \quad (2)$

This is the vector equation of the required plane. Substituting $\vec{r} = x\hat{i} + y\hat{j} + z\hat{k}$ in equation (2), we obtain the Cartesian form $x + y + z = a + b + c$.

Q9. Find the shortest distance between lines
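As a cross-check of the direction-ratio computations above, here is a small editorial sketch (not from the NCERT text) that computes the angle between two lines from their direction ratios:

```python
import math

def angle_between(d1, d2):
    """Angle in degrees between lines with direction ratios d1 and d2."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    # |dot| because a line and its reverse describe the same line.
    return math.degrees(math.acos(abs(dot) / (n1 * n2)))

print(angle_between((3, 3, 4), (6, 6, 8)))   # 0.0 -> parallel, as in Q5
print(angle_between((1, 1, 1), (1, -2, 1)))  # 90.0 -> perpendicular
```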
2022-09-28 16:17:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 37, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6026043891906738, "perplexity": 204.9677059400303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00562.warc.gz"}
https://math.stackexchange.com/questions/1128590/area-of-a-square-inscribed-in-a-circle-of-radius-r-if-area-of-the-square-inscri
# Area of a square inscribed in a circle of radius r, if area of the square inscribed in the semicircle is given.

If a square is inscribed in a semicircle of radius r and the square has an area of 8 square units, find the area of a square inscribed in a circle of radius r. I started by assuming that the side of the square is $2\sqrt2$. But I did not know how this relates to what its dimensions would be if it were inscribed in a full circle. Could someone help? Thank you.

Since the small inscribed square has an area of $8$, its side length is $2\sqrt2$. The radius $r$ of the semicircle (and of the circle) is equal to the distance between the midpoint of the bottom side of the small inscribed square and one of its top vertices. This forms a right triangle with legs $2\sqrt2$ (the full side) and $\sqrt2$ (half the side), and hypotenuse $r$. Using the Pythagorean theorem, $$(2\sqrt2)^2+(\sqrt2)^2=r^2$$ $$8+2=r^2$$ $$r=\sqrt{10}$$ Now that we have the radius of the circle, we know that the side length of the large inscribed square is $\frac{2r}{\sqrt2} = r\sqrt2$ ($2r$ is the diagonal of the large inscribed square, which is also the diameter of the circle). The side length of the large inscribed square is therefore $\sqrt{20}=2\sqrt5$, so its area is $20$ square units.
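A quick numeric confirmation of these values (an editorial sketch, not part of the original answer):

```python
import math

side_small = math.sqrt(8)                     # small square, area 8
r = math.hypot(side_small, side_small / 2)    # legs 2*sqrt(2) and sqrt(2)
print(r**2)                    # 10 -> r = sqrt(10)
print((r * math.sqrt(2))**2)   # 20 -> area of the square in the full circle
```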
2020-01-23 23:12:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9275200963020325, "perplexity": 56.877714426880594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250614086.44/warc/CC-MAIN-20200123221108-20200124010108-00039.warc.gz"}
https://socratic.org/questions/how-do-you-find-the-slope-and-y-intercept-for-3x-2y-10
# How do you find the slope and y intercept for 3x + 2y = 10?

Jun 5, 2015

The equation of a line having slope $m$ and y-intercept $c$ is

$y = mx + c$ --------------(1)

We have $3x + 2y = 10$
$\implies 2y = 10 - 3x$
$\implies y = \frac{10 - 3x}{2} = 5 - \left(\frac{3}{2}\right) x$

Comparing the above equation with (1): $m = -\frac{3}{2}$ and $c = 5$.
2020-02-25 00:40:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6263629794120789, "perplexity": 1152.877597715046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145989.45/warc/CC-MAIN-20200224224431-20200225014431-00504.warc.gz"}
https://projecteuclid.org/euclid.ade/1366031029
### On the Cauchy problem for a Boussinesq-type system

Jaime Angulo Pava

#### Abstract

The Cauchy problem for the following Boussinesq system, $$\begin{cases} u_t + v_x + uu_x = 0\\ v_t - u_{xxx} + u_{x} + (uv)_x = 0 \end{cases}$$ is considered. It is shown that this problem is locally well-posed in $H^s(\mathbb{R}) \times H^{s-1}(\mathbb{R})$ for any $s>3/2$. The proof involves parabolic regularization and techniques of Bona and Smith. It is also shown that the special solitary-wave solutions of this system are orbitally stable for the entire range of the wave speed. Combining these facts, we can globally extend the local solution for data sufficiently close to the solitary wave.

#### Article information

Source: Adv. Differential Equations, Volume 4, Number 4 (1999), 457-492.

Dates: First available in Project Euclid: 15 April 2013
2017-09-24 12:18:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6271341443061829, "perplexity": 475.4437419112318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818690016.68/warc/CC-MAIN-20170924115333-20170924135333-00182.warc.gz"}
https://www.nature.com/articles/s41586-019-1359-0?error=cookies_not_supported&code=b3ad5324-71c5-4442-a0d0-ffc669096da5
# Reconstructing the late-accretion history of the Moon

## Abstract

The importance of highly siderophile elements (HSEs; namely, gold, iridium, osmium, palladium, platinum, rhenium, rhodium and ruthenium) in tracking the late accretion stages of planetary formation has long been recognized. However, the precise nature of the Moon's accretional history remains enigmatic. There is a substantial mismatch in the HSE budgets of the Earth and the Moon, with the Earth seeming to have accreted disproportionally more HSEs than the Moon1. Several scenarios have been proposed to explain this conundrum, including the delivery of HSEs to the Earth by a few big impactors1, the accretion of pebble-sized objects on dynamically cold orbits that enhanced the Earth's gravitational focusing factor2, and the 'sawtooth' impact model, with its much reduced impact flux before about 4.10 billion years ago3. However, most of these models assume a high impactor-retention ratio (the fraction of impactor mass retained on the target) for the Moon. Here we perform a series of impact simulations to quantify the impactor-retention ratio, followed by a Monte Carlo procedure considering a monotonically decaying impact flux4, to compute the impactor mass accreted into the lunar crust and mantle over their histories. We find that the average impactor-retention ratio for the Moon's entire impact history is about three times lower than previously estimated1,3. Our results indicate that, to match the HSE budgets of the lunar crust and mantle5,6, the retention of HSEs should have started 4.35 billion years ago, when most of the lunar magma ocean was solidified7,8. Mass accreted before this time must have lost its HSEs to the lunar core, presumably during lunar mantle crystallization9. The combination of a low impactor-retention ratio and a late retention of HSEs in the lunar mantle provides a realistic explanation for the apparent deficit of the Moon's late-accreted mass relative to that of the Earth.

## Data availability

The data that support the findings of this study are available from the corresponding author on request.

## Code availability

At present, the iSALE code is not fully open source. It is distributed on a case-by-case basis to academic users in the impact community, strictly for non-commercial use. Scientists interested in using or developing iSALE should see http://www.isale-code.de for a description of application requirements. The Monte Carlo code used here is available from the corresponding author on request.

## References

1. Bottke, W. F. et al. Stochastic late accretion to Earth, the Moon, and Mars. Science 330, 1527–1530 (2010).
2. Schlichting, H. E., Warren, P. H. & Yin, Q.-Z. The last stages of terrestrial planet formation: dynamical friction and the late veneer. Astrophys. J. 752, 8–16 (2012).
3. Morbidelli, A. et al. A sawtooth-like timeline for the first billion years of lunar bombardment. Earth Planet. Sci. Lett. 355-356, 144–151 (2012).
4. Neukum, G., Ivanov, B. A. & Hartmann, W. K. Cratering records in the inner solar system in relation to the lunar reference system. Space Sci. Rev. 96, 55–86 (2001).
5. Day, J. M. D. & Walker, R. J. Highly siderophile element depletion in the Moon. Earth Planet. Sci. Lett. 423, 114–124 (2015).
6. Day, J. M. D. et al. Osmium isotope and highly siderophile element systematics of the lunar crust. Earth Planet. Sci. Lett. 289, 595–605 (2010).
7. Elkins-Tanton, L., Burgess, S. & Yin, Q. Z.
The lunar magma ocean: reconciling the solidification process with lunar petrology and geochronology. Earth Planet. Sci. Lett. 304, 326–336 (2011).
8. Borg, L. E. et al. Chronological evidence that the Moon is either young or did not have a global magma ocean. Nature 477, 70–72 (2011).
9. Morbidelli, A. et al. The timeline of the lunar bombardment: revisited. Icarus 305, 262–276 (2018).
10. Canup, R. M. Forming a Moon with an Earth-like composition via a giant impact. Science 338, 1052–1055 (2012).
11. Cuk, M. & Stewart, S. T. Making the Moon from a fast-spinning Earth: a giant impact followed by resonant despinning. Science 338, 1047–1052 (2012).
12. Jones, J. H. & Drake, M. J. Core formation and Earth's late accretionary history. Nature 323, 470–471 (1986).
13. Morgan, J. W., Walker, R. J., Brandon, A. D. & Horan, M. F. Siderophile elements in Earth's upper mantle and lunar breccias: data synthesis suggests manifestations of the same late influx. Meteorit. Planet. Sci. 36, 1257–1275 (2001).
14. Walker, R. J. Highly siderophile elements in the Earth, Moon and Mars: update and implications for planetary accretion and differentiation. Chem. Erde Geochem. 69, 101–125 (2009).
15. Warren, P. H., Jerde, E. A. & Kallemeyn, G. W. Pristine Moon rocks: Apollo 17 anorthosites. Proc. Lunar Planet. Sci. Conf. 21, 51–61 (1991).
16. Ryder, G. Mass flux in the ancient Earth-Moon system and benign implications for the origin of life on Earth. J. Geophys. Res. 107 (E4), 5022 (2002).
17. Kraus, R. G. et al. Impact vaporization of planetesimal cores in the late stages of planet formation. Nat. Geosci. 8, 269–272 (2015).
18. Artemieva, N. A. & Shuvalov, V. V. Numerical simulation of high-velocity impact ejecta following falls of comets and asteroids onto the Moon. Sol. Syst. Res. 42, 329–334 (2008).
19. Elbeshausen, D. et al. The transition from circular to elliptical impact craters. J. Geophys. Res. 118, 2295–2309 (2013).
20. Le Feuvre, M. & Wieczorek, M. A. Nonuniform cratering of the Moon and a revised crater chronology of the inner solar system. Icarus 214, 1–20 (2011).
21. Shoemaker, E. M. in Physics and Astronomy of the Moon (ed. Kopal, Z.) 283–359 (Academic, 1962).
22. Holsapple, K. A. & Housen, K. R. A crater and its ejecta: an interpretation of deep impact. Icarus 191, 586–597 (2007).
23. Wieczorek, M. A. et al. The crust of the Moon as seen by GRAIL. Science 339, 671–675 (2013).
24. Norman, M. D. et al. Chronology, geochemistry, and petrology of a ferroan noritic anorthosite clast from Descartes breccia 67215: clues to the age, origin, structure, and impact history of the lunar crust. Meteorit. Planet. Sci. 38, 645–661 (2003).
25. Kleine, T. et al. Hf-W chronology of the accretion and early evolution of asteroids and terrestrial planets. Geochim. Cosmochim. Acta 73, 5150–5188 (2009).
26. Borg, L. E. et al. A review of lunar chronology revealing a preponderance of 4.34–4.37 Ga ages. Meteorit. Planet. Sci. 50, 715–732 (2015).
27. Nemchin, A. et al. Timing of crystallization of the lunar magma ocean constrained by the oldest zircon. Nat. Geosci. 2, 133–136 (2009).
28. Rubie, D. C. et al. Highly siderophile elements were stripped from Earth's mantle by iron sulfide segregation. Science 353, 1141–1144 (2016).
29. Miljković, K. et al. Excavation of the lunar mantle by basin-forming impact events on the Moon. Earth Planet. Sci. Lett. 409, 243–251 (2015).
30. Neumann, G. A. et al.
Lunar impact basins revealed by Gravity Recovery and Interior Laboratory measurements. Sci. Adv. 1, e1500852 (2015).
31. Frey, H. in Recent Advances and Current Research Issues in Lunar Stratigraphy Vol. 477 (eds Ambrose, W. A. & Williams, D. A.) 53–75 (Geological Society of America, 2011).
32. Kamata, S. et al. The relative timing of lunar magma ocean solidification and the late heavy bombardment inferred from highly degraded impact basin structures. Icarus 250, 492–503 (2015).
33. Elkins-Tanton, L. Linked magma ocean solidification and atmospheric growth for Earth and Mars. Earth Planet. Sci. Lett. 271, 181–191 (2008).
34. Day, J. M. D., Pearson, D. G. & Taylor, L. A. Highly siderophile element constraints on accretion and differentiation of the Earth-Moon system. Science 315, 217–219 (2007).
35. Day, J. M. D., Brandon, A. D. & Walker, R. J. Highly siderophile elements in Earth, Mars, the Moon, and Asteroids. Rev. Mineral. Geochem. 81, 161–238 (2016).
36. Day, J. M. D. Geochemical constraints on residual metal and sulfide in the sources of lunar mare basalts. Am. Mineral. 103, 1734–1740 (2018).
37. Walker, R. J., Horan, M. F., Shearer, C. K. & Papike, J. J. Low abundances of highly siderophile elements in the lunar mantle: evidence for prolonged late accretion. Earth Planet. Sci. Lett. 224, 399–413 (2004).
38. Taylor, G. J. & Wieczorek, M. A. Lunar bulk chemical composition: a post-Gravity Recovery and Interior Laboratory reassessment. Phil. Trans. A 372, 20130242 (2014).
39. Morgan, J. W., Gros, J., Takahashi, H. & Hertogen, J. Lunar breccia 73215: siderophile and volatile elements. Proc. Lunar Sci. Conf. 7, 2189–2199 (1976).
40. Gros, J., Takahashi, H., Hertogen, J., Morgan, J. W. & Anders, E. Composition of the projectiles that bombarded the lunar highlands. Proc. Lunar Sci. Conf. 7, 2403–2425 (1976).
41. Norman, M. D., Bennett, V. C. & Ryder, G. Targeting the impactors: siderophile element signatures of lunar impact melts from Serenitatis. Earth Planet. Sci. Lett. 202, 217–228 (2002).
42. Puchtel, I. S. et al. Osmium isotope and highly siderophile element systematics of lunar impact melt breccias: implications for the late accretion history of the Moon and Earth. Geochim. Cosmochim. Acta 72, 3022–3042 (2008).
43. Gleißner, P. & Becker, H. Formation of Apollo 16 impactites and the composition of late accreted material: constraints from Os isotopes, highly siderophile elements and sulfur abundances. Geochim. Cosmochim. Acta 200, 1–24 (2017).
44. Schultz, P. H. & Gault, D. E. Prolonged global catastrophes from oblique impacts. Spec. Pap. Geol. Soc. Am. 247, 239–262 (1990).
45. Daly, R. T. & Schultz, P. H. Predictions for impactor contamination on Ceres based on hypervelocity impact experiments. Geophys. Res. Lett. 42, 7890–7898 (2015).
46. Daly, R. T. & Schultz, P. H. Delivering a projectile component to the vestan regolith. Icarus 264, 9–19 (2016).
47. Daly, R. T. & Schultz, P. H. Projectile preservation during oblique hypervelocity impacts. Meteorit. Planet. Sci. 54, 1364–1390 (2018).
48. Thompson, S. L. & Lauson, H. S. Improvements in the CHART D Radiation-Hydrodynamic Code III: Revised Analytic Equations of State. Report SC-RR-71 0714 (Sandia National Laboratory, 1972).
49. Benz, W. et al. The origin of the Moon and the single-impact hypothesis III. Icarus 81, 113–131 (1989).
50. Lee, D.-C. & Halliday, A. N. Core formation on Mars and differentiated asteroids. Nature 388, 854–857 (1997).
51.
Davison, T. M. et al. Numerical modeling of oblique hypervelocity impacts on strong ductile targets. Meteorit. Planet. Sci. 46, 1510–1524 (2011).
52. Potter, R. W. et al. in Large Meteorite Impacts and Planetary Evolution V (eds Osinski, G. R. & Kring, D. A.) 99–113 (Lunar and Planetary Institute, 2015).
53. Marchi, S. et al. A new chronology for the Moon and Mercury. Astron. J. 137, 4936–4948 (2009).
54. Collins, G. S., Melosh, H. J. & Ivanov, B. A. Modeling damage and deformation in impact simulations. Meteorit. Planet. Sci. 39, 217–231 (2004).
55. Ahrens, T. J. & O'Keefe, J. D. Shock melting and vaporization of lunar rocks and minerals. Moon 4, 214–249 (1972).
56. Pierazzo, E., Vickery, A. M. & Melosh, H. J. A reevaluation of impact melt production. Icarus 127, 408–423 (1997).
57. Pierazzo, E. & Melosh, H. J. Hydrocode modeling of oblique impacts: the fate of the projectile. Meteorit. Planet. Sci. 35, 117–130 (2000).
58. Marchi, S. et al. Widespread mixing and burial of Earth's Hadean crust by asteroid impacts. Nature 511, 578–582 (2014).
59. Schultz, P. H. & Sugita, S. Fate of the Chicxulub impactor. In 28th Annu. Lunar Planet. Sci. Conf. 1261–1262 (1997).
60. Collins, G. S., Miljković, K. & Davison, T. M. The effect of planetary curvature on impact crater ellipticity. EPSC Abstr. 8, EPSC2013-989 (2013).
61. Bottke, W. F. et al. Dating the Moon-forming impact event with asteroidal meteorites. Science 348, 321–323 (2015).
62. Laneuville, M., Wieczorek, M. & Breuer, D. Asymmetric thermal evolution of the Moon. J. Geophys. Res. Planets 118, 1435–1452 (2013).
63. Ivanov, B. A. & Artemieva, N. A. in Catastrophic Events and Mass Extinctions: Impacts and Beyond Vol. 356 (eds Koeberl, C. & MacLeod, K. G.) 619–630 (Geological Society of America, 2002).
64. Miljković, K. et al. Asymmetric distribution of lunar impact basins caused by variations in target properties. Science 342, 724–726 (2013).
65. Freed, A. M. et al. The formation of lunar mascon basins from impact to contemporary form. J. Geophys. Res. 119, 2378–2397 (2014).
66. Potter, R. W. K. et al. Constraining the size of the South Pole-Aitken basin impact. Icarus 220, 730–743 (2012).
67. Zhu, M.-H. et al. Numerical modeling of the ejecta distribution and formation of the Orientale basin. J. Geophys. Res. 120, 2118–2134 (2015).
68. Melosh, H. J. Impact Cratering: A Geological Process (Oxford Univ. Press, 1989).
69. Joy, K. H. et al. Direct detection of projectile relics from the end of the lunar basin-forming epoch. Science 336, 1426–1429 (2012).
70. Liu, J. G. et al. Diverse impactors in Apollo 15 and 16 impact melt rocks: evidence from osmium isotopes and highly siderophile elements. Geochim. Cosmochim. Acta 155, 122–153 (2015).
71. Croft, S. K. The scaling of complex craters. Proc. Lunar Planet. Sci. Conf. 16, 828–842 (1985).
72. McKinnon, W. B. & Schenk, P. M. Ejecta blanket scaling on the Moon and Mercury and inferences for projectile populations. Lunar Planet. Sci. XVI, 544–545 (1985).
73. Wilhelms, D. E. The Geologic History of the Moon. USGS Professional Paper 1348 (US Geological Survey, 1987).
74. Miljković, K. et al. Elusive formation of impact basins on the young Moon. In Proc. 48th Lunar Planetary Science Conference 1361 (2017).
75. Gault, D. E. & Wedekind, J. A. Experimental studies of oblique impact. In Proc. 9th Lunar Planetary Science Conference 3843–3875 (1978).
76. Pierazzo, E. & Melosh, H. J.
76. Pierazzo, E. & Melosh, H. J. Melt production in oblique impacts. Icarus 145, 252–261 (2000).
77. Pierazzo, E. & Melosh, H. J. Understanding oblique impacts from experiments, observations and modeling. Annu. Rev. Earth Planet. Sci. 28, 141–167 (2000).
78. Jones, A. P. et al. Impact induced melting and the development of large igneous provinces. Earth Planet. Sci. Lett. 202, 551–561 (2002).
79. Kendall, J. D. & Melosh, H. J. Differentiated planetesimal impacts into a terrestrial magma ocean: fate of the iron core. Earth Planet. Sci. Lett. 448, 24–33 (2016).
80. Shuvalov, V. V. et al. Crater ejecta: markers of impact catastrophes. Phys. Solid Earth 48, 241–255 (2012).

## Acknowledgements

We thank J. M. D. Day and R. J. Walker for useful discussions. We acknowledge the developers of iSALE (www.isale-code.de), in particular D. Elbeshausen, who developed iSALE-3D. M.-H.Z. is supported by the Science and Technology Development Fund of Macau (079/2018/A2). K.W., H.B., N.A. and M.-H.Z. are funded by Deutsche Forschungsgemeinschaft (DFG) grant SFB-TRR 170 (A4, C2), TRR-170 Pub. No. 55. Q.-Z.Y. is funded by the NASA Emerging Worlds Program (NNX16AD34G).

### Peer review information

Nature thanks James Day and the other anonymous reviewer(s) for their contribution to the peer review of this work.

## Author information

### Contributions

M.-H.Z. conceived the idea and performed the impact simulations. N.A. performed the Monte Carlo modelling. M.-H.Z., A.M., Q.-Z.Y., H.B. and K.W. interpreted the results. All authors contributed to the discussion of the results and wrote the manuscript.

### Corresponding author

Correspondence to Meng-Hua Zhu.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Extended data figures and tables

### Extended Data Fig. 1 Thermal profiles of the Moon.

Two possible thermal profiles (TP1 and TP2) for the Moon, which we use in this study to test the effects of temperature on the impactor-retention ratio.

### Extended Data Fig. 2 Effects of lunar thermal profiles on the impactor-retention ratio.

We calculated impactor-retention ratios for oblique impacts of impactors with diameters (d) of 210 km and velocities (v) of 15 km s−1. The impact angles were varied from 15° to 90°. TP1 and TP2 represent the temperature profiles used here (Extended Data Fig. 1).

### Extended Data Fig. 3 Fraction of retained impactor material deposited within the transient crater in all simulations.

In our simulations, this fraction is between 0.9 and 1.0 for impact angles greater than 20° (relative to the lunar surface). For large impactors (d > 100 km) with impact angles smaller than 20°, the fraction of retained material within the transient cavity is less than 0.9. The dashed line represents the fraction of 0.96 that we use in our calculations (see Fig. 3) for simplicity. The dotted line represents the fraction of 0.90 used in Extended Data Fig. 6. The numbers in the key represent the impactor diameter (D) and impact velocity (V).

### Extended Data Fig. 4 Lunar impact fluxes.

The differential number of lunar craters larger than 20 km as a function of time for the production functions discussed in the text (refs 4, 9).

### Extended Data Fig. 5 Two scenarios involving a differentiated impactor hitting the Moon.

The arrows represent the impact direction and the lines show the extent of interaction of the impactor core with the Moon.
When the core of a differentiated impactor is accreted to the Moon (a), we record the total mass of the impactor as accreted to the Moon. However, when the impactor's core is not accreted to the Moon (b), we do not record any accreted mass. This simplification is justified because in reality the HSEs of a differentiated impactor should be almost entirely dominated by its core.

### Extended Data Fig. 6 Cumulative impactor masses that hit and are accreted to the Moon.

a, b, The total impactor masses hitting the Moon (blue) and being accreted to the Moon (purple) from different starting times (between 4.5 Gyr and 3.5 Gyr ago) to the present day, for assumed crustal thicknesses of 34 km (a) and 43 km (b). The cumulative masses accreted to the lunar crust (orange) and mantle (green) are estimated separately. This figure is similar to Fig. 3, except that we assume that around 10% of the retained impactor material is deposited beyond the transient crater and mixed with the crust.

## Cite this article

Zhu, M.-H., Artemieva, N., Morbidelli, A. et al. Reconstructing the late-accretion history of the Moon. Nature 571, 226–229 (2019). https://doi.org/10.1038/s41586-019-1359-0
2020-11-30 03:18:31
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8212414383888245, "perplexity": 12681.11984611041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00408.warc.gz"}
https://www.vedantu.com/question-answer/the-first-term-of-the-ap-is-5-the-last-term-is-class-9-maths-cbse-5ee34eed6067f248d1692cc8
# Question

The first term of the A.P. is 5, the last term is 45 and the sum is 400. Find the number of terms in the A.P.

Hint: We can find the number of terms in the Arithmetic Progression (A.P.) by using the formula for the sum of $n$ terms of the A.P., where the ${n^{th}}$ term is given as 45 and the sum is 400.

An Arithmetic Progression (A.P.) is a sequence whose terms increase or decrease by a fixed number called the common difference. If $a$ is the first term of the A.P. and $d$ is the common difference, then $l$, the ${n^{th}}$ term of the A.P., is given as follows:

$l = a + (n - 1)d{\text{ }}..........{\text{(1)}}$

The sum of $n$ terms of the A.P., ${S_n}$, is given by:

${S_n} = \dfrac{n}{2}\left[ {2a + (n - 1)d} \right]{\text{ }}...........{\text{(2)}}$

Equation (2) can be written in terms of $l$, the ${n^{th}}$ term, by using equation (1):

${S_n} = \dfrac{n}{2}\left( {a + l} \right){\text{ }}..........(3)$

The values of the first term, the last term and the sum of $n$ terms of the A.P. are given:

$a = 5$, $l = 45$, ${S_n} = 400$

Using these values in equation (3) and solving for $n$, we get:

$400 = \dfrac{n}{2}\left( {5 + 45} \right)$

Simplifying the RHS, we get:

$400 = \dfrac{n}{2}\left( {50} \right)$

Dividing 50 by 2 we get 25; hence, we have:

$400 = 25n$

Solving for $n$ by dividing 400 by 25, we get:

$n = \dfrac{{400}}{{25}} = 16$

Hence, the number of terms of the A.P. is 16.

Note: You cannot solve the problem by just using the first and the last term in the formula for the ${n^{th}}$ term of the A.P., since the common difference is not given. Also, you must know the second form of the sum of $n$ terms of an A.P., that is, ${S_n} = \dfrac{n}{2}\left( {a + l} \right)$.
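As a quick numerical check of this result (an added sketch, not part of the original solution — the variable names are mine), both formulas can be evaluated directly in C:

    #include <stdio.h>

    int main(void)
    {
        double a = 5.0, l = 45.0, S = 400.0; /* first term, last term, sum */

        /* From S_n = (n/2)(a + l), solve for the number of terms n */
        double n = 2.0 * S / (a + l);

        /* From l = a + (n - 1)d, recover the common difference d */
        double d = (l - a) / (n - 1.0);

        printf("n = %g, d = %g\n", n, d); /* prints n = 16, d = 2.66667 */
        return 0;
    }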
2021-05-09 14:09:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8329006433486938, "perplexity": 157.19146266655082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00468.warc.gz"}
https://en.wikipedia.org/wiki/Average_Run_Rate_method
# Average Run Rate method

The Average Run Rate (ARR) method was a mathematical formulation designed to calculate the target score for the team batting second in a limited overs cricket match interrupted by weather or other circumstances. Often matches interrupted by weather would use reserve days, bowl-outs, or be replayed on another date, but if logistics did not allow these, the ARR method would be used. The ARR method was used from the start of one-day cricket in the 1950s and 1960s until it was replaced by the Most Productive Overs method in 1991.[1]

## Calculation

If an interruption means that the team batting second loses some of their overs, their target score is adjusted as follows.[2]

$$\text{Team 2's new target} = (\text{Team 1's average run rate achieved} \times \text{overs available to Team 2}) + 1.$$

This means that Team 2 just has to match the average run rate achieved by Team 1 in the overs it has available. For example, if Team 1 made 250 in their 50 overs, which is an ARR of 5 runs per over, and Team 2's innings is reduced to 25 overs, Team 2's new target is (5 × 25) + 1 = 126.[1]

This formula can alternatively be written as:

$$\text{Team 2's new target} = \left(\text{Team 1's total} \times \frac{\text{overs available to Team 2}}{\text{overs used by Team 1}}\right) + 1.$$

In other words, the target is reduced in proportion to the loss in overs. Using the same example as above, with this formula the new target for Team 2 is (250 × 25/50) + 1 = 126.

## Criticisms

There are four intrinsic flaws in the method:

• Firstly, it frequently altered the balance of the match, usually in favour of the team batting second (Team 2): as it was easier to maintain a given run rate over a reduced number of overs, less care needed to be taken to preserve wickets, so a revised target was easier to achieve.[3]
• Secondly, the method took no account of wickets lost, only of Team 2's scoring rate when the match was interrupted. For example, if Team 2 were 126–9 from 25 overs in reply to a score of 250 from 50 overs, they would be declared the winner.[4]
• Thirdly, there was no compensation to Team 1 if they unexpectedly lost overs from which they had been expecting to score.
• Fourthly, if Team 2's innings was interrupted, the current match situation was irrelevant to the calculation of the revised target.

Two subsequent modifications were used: increasing the required run rate by 0.5% for each over lost, and calculating the target using the run rate after excluding maiden overs, with the revised target given by the next highest integer. While these modifications reduced Team 2's advantage, partially addressing the first intrinsic flaw, the second modification effectively penalised Team 2 for good bowling, and they failed to address the other intrinsic flaws of the method.

## Notable matches decided by ARR

• England v Sri Lanka in the 1987 Cricket World Cup: England scored 296 from 50 overs. After a delay due to rain, Sri Lanka's innings was reduced to 45 overs, giving them a revised target of 267 (296 × 45/50 = 266.4). Sri Lanka finished at 158–8. In this match, the later Duckworth–Lewis–Stern method would have reset Sri Lanka's target to 282.
• Australia v West Indies, third final of the 1988–89 World Series Cup: Australia scored 226 from 38 overs.
Chasing 227 to win, the West Indies were 47–2 after 6.4 overs, needing 180 runs from 31.2 overs (a required RR of 5.74) when rain stopped play for 85 minutes. When play restarted, the West Indies' innings was reduced to 18 overs, giving them a revised target of 108 (226 × 18/38 = 107.1), meaning they needed 61 runs from 11.2 overs (a required RR of 5.38). The West Indies won the match (and the competition) with 4.4 overs remaining and eight wickets in hand.[5] This revised target gave the West Indies a major advantage, as it significantly reduced the number of overs over which they needed to maintain a given run rate, and also reduced the required run rate itself. Australian fans booed this conclusion, and the Average Run Rate method was criticised by the media and by Australian captain Allan Border, which led to Australia developing the Most Productive Overs method. In this match, the later Duckworth–Lewis–Stern method would have increased the West Indies' target to 232 (to take into account a two-hour rain delay during Australia's innings), and then revised the target to 139 after the second interruption.

## References

1. Duckworth/Lewis. "The D/L method: answers to frequently asked questions", Q2. ESPN Cricinfo. Retrieved 16 September 2017.
2. Brooker, S. & Hogan, S. (2010). "How fair is the Duckworth/Lewis adjustment in one day international cricket?", Section 2.1.
3. Duckworth, F. C. & Lewis, A. J. (1998). "A fair method for resetting the target in interrupted one-day cricket matches". Journal of the Operational Research Society 49 (3): 220–227. doi:10.1057/palgrave.jors.2600524.
4. Duckworth, F. C. (2008). "The Duckworth/Lewis method: an exercise in Maths, Stats, OR and communications". MSOR Connections 8 (3): 11–14.
5. 3rd Final, 1988/89 Benson and Hedges World Series Cup.
2022-06-26 09:58:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2491052746772766, "perplexity": 5725.765559681078}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00290.warc.gz"}
https://0x7df.github.io/cuda-basics-part-2.html
# CUDA basics part 2

By 0x7df, Tue 21 April 2015, in category Computer science

## Introduction

Recently, I posted a basic introduction to CUDA C for programming GPUs, which showed how to do a vector addition. This illustrated some of the basic CUDA syntax, but it wasn't a complex-enough example to bring to light some of the trickier issues to do with designing algorithms carefully to minimise data movement. Here we move on to the more complicated algorithm for matrix multiplication, C = AB, where we'll see that elements of the matrices get used multiple times, so we'll want to put them in the shared memory to minimise the number of times they get retrieved from the much slower global (or device) memory. We'll also see that, because data that a thread puts into shared memory is only accessible by the other threads in the same thread block, we need to be careful how we do this.

## Naive matrix multiplication in CUDA

First, let's ignore those concerns and put together the simplest implementation of matrix multiplication; then we'll analyse the memory access, and see how we can improve on it.

Before we begin, however, some error checking. Below is a function-like C macro that will be used to surround each CUDA statement we execute with a check of the return code. The return code is set to the pre-defined value cudaSuccess if the statement executed successfully, or an error value otherwise. (Hence, we declare the variable that will contain the CUDA statement's return value to be of type cudaError_t.) Where an error value is returned, we pass this to the CUDA function cudaGetErrorString, which returns an error message that we can print.

    #define cudaCheck(stmt)                                                \
        do {                                                               \
            cudaError_t err = stmt;                                        \
            if (err != cudaSuccess) {                                      \
                printf("ERROR: failed to run %s\n", #stmt);                \
                printf("ERROR: CUDA error %s\n", cudaGetErrorString(err)); \
                return -1;                                                 \
            }                                                              \
        } while (0)

### Simple matrix multiplication kernel

Now for the kernel function. The way we've chosen to divide this problem up amongst threads is to have each thread calculate a single element in the output matrix, C. Mathematically, for an m-by-n matrix A and an n-by-p matrix B, this is:

$$C_{i,j} = \sum_{k=1}^n A_{i,k}B_{k,j}$$

for each of the m-by-p elements in C. This is illustrated in the figure, where the input matrices A and B are shown in grey, and the result, matrix C, in blue; a single element of C is highlighted in red, and the corresponding row and column of A and B are also highlighted.

We implement this in CUDA C as follows:

    __global__ void matrixMultiply(float *A, float *B, float *C,
                                   int numACols, int numBRows,
                                   int numBCols, int numCRows,
                                   int numCCols)
    {
        // Get the row and column indices of the single
        // element of the output matrix that this thread
        // is dealing with
        int col = threadIdx.x + blockDim.x*blockIdx.x;
        int row = threadIdx.y + blockDim.y*blockIdx.y;

        // Calculate the output matrix element
        if ((row < numCRows) && (col < numCCols)) {
            float Ctmp = 0;
            for (int k = 0; k < numACols; ++k) {
                Ctmp += A[row*numACols+k]*B[k*numBCols+col];
            }
            C[row*numCCols + col] = Ctmp;
        }
    }

This is reasonably simple.
Each thread figures out which output matrix element it is responsible for, simply by checking the thread indices. It proceeds only if the element indices are within the bounds of the output matrix, which may not be the case if there are more threads than elements (because we have to have a whole number of thread blocks). Where they are, it retrieves the correct row of A and column of B, and calculates the corresponding single element of C.

### Naive matrix multiplication host code

For completeness, here is the host code. The new things here that we didn't see in the vector addition example are:

1. The use of the C macro cudaCheck (defined above) for error checking
2. The fact that the grid and the thread blocks are two-dimensional
3. The call to cudaDeviceSynchronize()

    int main(int argc, char **argv)
    {
        float *hostA, *hostB, *hostC;
        float *deviceA, *deviceB, *deviceC;
        int numARows, numACols;   // Rows, columns in the matrix A
        int numBRows, numBCols;   // Rows, columns in the matrix B
        int numCRows, numCCols;   // Rows, columns in the matrix C
        int sizeA, sizeB, sizeC;  // Size in memory of each of A, B and C
        int gridXSize, gridYSize; // Number of thread blocks in x, y dimensions of grid
        int blockSize;            // Number of threads in block

        // Allocate and populate the A and B matrices
        // hostA and hostB, and get numARows, numACols,
        // numBRows, numBCols

        // Set numCRows and numCCols
        numCRows = numARows;
        numCCols = numBCols;

        // Allocate the C matrix
        hostC = (float *)malloc(numCRows*numCCols*sizeof(float));

        // Allocate GPU memory
        sizeA = numARows*numACols*sizeof(float);
        sizeB = numBRows*numBCols*sizeof(float);
        sizeC = numCRows*numCCols*sizeof(float);
        cudaCheck(cudaMalloc((void **) &deviceA, sizeA));
        cudaCheck(cudaMalloc((void **) &deviceB, sizeB));
        cudaCheck(cudaMalloc((void **) &deviceC, sizeC));

        // Copy data to the GPU
        cudaCheck(cudaMemcpy(deviceA, hostA, sizeA, cudaMemcpyHostToDevice));
        cudaCheck(cudaMemcpy(deviceB, hostB, sizeB, cudaMemcpyHostToDevice));

        // Initialize the grid and block dimensions
        blockSize = 16;
        gridXSize = (numCCols-1)/blockSize + 1;
        gridYSize = (numCRows-1)/blockSize + 1;
        dim3 dimGrid(gridXSize, gridYSize, 1);
        dim3 dimBlock(blockSize, blockSize, 1);

        // Launch the GPU kernel
        matrixMultiply<<<dimGrid, dimBlock>>>(deviceA, deviceB, deviceC,
                                              numACols, numBRows, numBCols,
                                              numCRows, numCCols);

        // Wait for the kernel to finish
        cudaCheck(cudaDeviceSynchronize());

        // Copy the GPU memory back to the CPU
        cudaCheck(cudaMemcpy(hostC, deviceC, sizeC, cudaMemcpyDeviceToHost));

        // Free the GPU memory
        cudaCheck(cudaFree(deviceA));
        cudaCheck(cudaFree(deviceB));
        cudaCheck(cudaFree(deviceC));

        // Do something with the solution, free the host memory, return
    }

The call to cudaDeviceSynchronize() ensures that all threads have finished before the host code proceeds any further.

### Performance analysis of the naive implementation

Clearly, each of the $$mp$$ elements of $$C$$ requires a full row of $$A$$ and a full column of $$B$$ - both of length $$n$$ - to be read from memory, and one value to be written back. Hence there are $$(2n + 1)mp$$ memory accesses. Re-examining the kernel, we see that there are two floating-point operations per iteration of the inner loop (one multiply and one add), and $$n$$ iterations of that loop, which is completed for each of the $$mp$$ elements in the product matrix. Hence, there are $$2nmp$$ FLOP, and the CGMA (compute-to-global-memory-access) ratio is $$2n/(2n + 1)$$, which is effectively 1, except when the matrices are very small.
With a memory bandwidth of 150 GB/s, the algorithm is limited to 150/8 ≈ 19 GFLOP/s (assuming double precision), which is less than 2% of the available compute of our nominal 1 TFLOP GPU.

## Improving on the naive implementation

However, it turns out that we can improve on this. So far, all the data storage has been in global memory, because that's the only permissible location for CUDA memory allocations in the host code, and that's where the data stays unless we explicitly move it once inside the kernel function (we'll see how later). It's also clear that in this algorithm data gets re-used frequently: every row of matrix $$A$$ is used $$p$$ times and every column of matrix $$B$$ is used $$m$$ times. If we contrive an algorithm that gets the necessary data into shared memory before it is needed, and keeps it there while it is being re-used, then we can clearly reduce the global memory accesses.

However, it's not as though we can read $$A$$ and $$B$$ into shared memory and have them accessible to all the threads working on the computation; shared memory isn't globally accessible, despite the name, but is instead local to a single streaming multiprocessor (SM), and only 'shared' amongst the threads in whichever thread block is currently assigned to the SM. Hence our goal is to ensure that the threads in a given thread block have the subset of input data they need available in their SM's shared memory, under the general assumption that, because of the small size of the shared memory, not all of the needed data will fit in at once.

Consider a thread block covering an area of the product matrix $$C$$ which is $$a$$ rows high by $$a$$ columns wide, with the top-left element being $$(i, j)$$ and the bottom-right therefore being $$(i+a, j+a)$$. This is shown in the figure. To compute these values, the rows $$i, i+1, ..., i+a$$ of matrix $$A$$ and columns $$j, j+1, ..., j+a$$ of matrix $$B$$ are required, comprising horizontal and vertical strips, respectively, of dimension $$a \times n$$ elements. We assume in general these strips comprise too much data to move all together to shared memory. Instead, we move a block of elements from the strip of $$A$$, and a block of elements from the strip of $$B$$ - i.e. two blocks of size $$a \times a$$, one from each matrix; we will refer to these as tiles. Performing matrix multiplication on these two tiles creates a tile of partial sums in the $$C$$ elements. When the next pair of tiles from $$A$$ and $$B$$ are retrieved, the partial sums are further incremented, until eventually the full strips have been processed and the final answers are available.

There is still some duplication of global memory accesses, because any given strip of $$A$$ will be required by all the thread blocks of the $$C$$ matrix that share the same row indices, and any given strip of $$B$$ will be required by all the thread blocks of the $$C$$ matrix that share the same column indices. However, we can see that there is at least some re-use of data in shared memory; each sub-row of the tile from $$A$$ gets re-used $$a$$ times (for the $$a$$ elements of the output matrix that have the same row index), as does each sub-column of the tile from $$B$$. This data re-use reduces the retrievals from global memory by a factor of $$a$$.

Here is the kernel for tiled matrix multiplication.
    __global__ void matrixMultiply(float *A, float *B, float *C,
                                   int numARows, int numACols,
                                   int numBRows, int numBCols,
                                   int numCRows, int numCCols)
    {
        // Define device shared-memory storage for
        // tiles of the matrices.
        // Scope: each tile is accessible by a single thread block
        __shared__ float tileA[TILE_WIDTH][TILE_WIDTH];
        __shared__ float tileB[TILE_WIDTH][TILE_WIDTH];

        // Define abbreviated variables for the block
        // and thread indices.
        // Scope: stored in registers and therefore
        // private to each thread
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;

        // Each thread is responsible for a single
        // element of the product matrix C.
        // Determine which element, from the block
        // and thread indices
        int row = by*TILE_WIDTH + ty;
        int col = bx*TILE_WIDTH + tx;

        // Initialise a temporary variable for the solution
        // for this matrix element.
        // Scope: in register, private to individual thread
        float Ctemp = 0;

        // Loop over the tiles in the A and B matrices
        // that will contribute to the calculation of
        // this element in the product matrix. We are
        // looping over columns of A for a given row
        // (equal to the row index of the C element),
        // and over rows of the B matrix for a given
        // column index (equal to the column index of
        // the C element)
        int numTiles = (numACols-1)/TILE_WIDTH + 1;
        for (int tl = 0; tl < numTiles; ++tl) {

            // Load the tiles into shared memory, so all
            // threads in the block can access them; each
            // thread loads a single value of each of the
            // A and B tiles
            if ((row < numARows) && (tl*TILE_WIDTH + tx < numACols)) {
                tileA[ty][tx] = A[row*numACols + tl*TILE_WIDTH + tx];
            } else {
                tileA[ty][tx] = 0.;
            }
            if ((tl*TILE_WIDTH + ty < numBRows) && (col < numBCols)) {
                tileB[ty][tx] = B[(tl*TILE_WIDTH + ty)*numBCols + col];
            } else {
                tileB[ty][tx] = 0.;
            }

            // Make sure the whole tile is loaded before
            // any thread starts to use it
            __syncthreads();

            // Loop over the elements within the A and B
            // tiles that contribute to this element of C
            for (int k = 0; k < TILE_WIDTH; ++k) {
                Ctemp += tileA[ty][k] * tileB[k][tx];
            }

            // Make sure every thread has finished with the
            // tile before it is overwritten on the next pass
            __syncthreads();
        }

        // Write the final value into the output array
        if ((row < numCRows) && (col < numCCols)) {
            C[row*numCCols + col] = Ctemp;
        }
    }

In each thread block, the $$a^2$$ threads load two float values each and perform $$2a$$ floating-point operations to compute the dot product of the row and column sub-sections (both of length $$a$$) required for the single output matrix element each holds. Hence there are $$2a$$ computations for two memory loads, which gives a CGMA ratio of $$a$$. For the naive implementation it was 1, so we have improved the CGMA by a factor of $$a$$ by tiling the data.

There are a few other things to note in the kernel.

1. The use of the __shared__ identifier in the allocation statements for tileA and tileB (which are the temporary storage arrays for the tiles of $$A$$ and $$B$$). This keyword is how we cause the storage to be allocated in shared memory (and therefore it can be used only in __device__ functions, not __host__ functions).
2. TILE_WIDTH is a C macro that we assume has been defined elsewhere.
3. Calculation of the $$C$$ element indices row and col is done using TILE_WIDTH, where previously blockDim.x and blockDim.y appeared. This works because we have defined the tile to be the same size as the thread block. In theory it could be different, but doing so gives us the very convenient consequence that each thread needs only to load a single element from each of $$A$$ and $$B$$ into shared memory to construct the tiles.
This means the host code that calls the kernel needs to use TILE_WIDTH to define the block size:

    gridXSize = (numCCols-1)/TILE_WIDTH + 1;
    gridYSize = (numCRows-1)/TILE_WIDTH + 1;
    dim3 DimGrid(gridXSize, gridYSize, 1);    // gridSize blocks in the grid
    dim3 DimBlock(TILE_WIDTH, TILE_WIDTH, 1); // blockSize threads in each block
    matrixMultiply<<<DimGrid,DimBlock>>>(deviceA, deviceB, deviceC, ...

4. We have put some logic around the statements that transfer data to the shared-memory tile storage. Since we can't guarantee that there will be a whole number of thread blocks in the matrix, this prevents threads whose row, col indices are outside the bounds of either A or B from attempting to retrieve data that isn't there.
5. The appearance of __syncthreads(). This is a barrier synchronization across all the threads in a block, which ensures that every thread completes any work up to that point before any proceeds further. Without the first barrier, some threads could move on to begin computing matrix elements before other threads have loaded the correct data into shared memory, and out-of-date data could be used; without the second, a fast thread could start overwriting the tiles for the next pass of the loop while a slower one is still using the current values.
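To sanity-check either kernel, it is useful to compare the GPU result against a plain CPU implementation. Here is a minimal sketch (my addition, not from the original post; the function names are arbitrary):

    #include <math.h>

    // CPU reference implementation, for checking the GPU result.
    // Assumes row-major storage, as in the kernels above.
    void matrixMultiplyCPU(const float *A, const float *B, float *C,
                           int numARows, int numACols, int numBCols)
    {
        for (int i = 0; i < numARows; ++i) {
            for (int j = 0; j < numBCols; ++j) {
                float sum = 0;
                for (int k = 0; k < numACols; ++k) {
                    sum += A[i*numACols + k]*B[k*numBCols + j];
                }
                C[i*numBCols + j] = sum;
            }
        }
    }

    // Element-wise comparison, allowing for floating-point rounding;
    // returns 1 if all n elements agree to within tol, 0 otherwise
    int resultsMatch(const float *ref, const float *gpu, int n, float tol)
    {
        for (int i = 0; i < n; ++i) {
            if (fabsf(ref[i] - gpu[i]) > tol) {
                return 0;
            }
        }
        return 1;
    }

Host and device floating-point results are not guaranteed to be bit-identical (for instance, the compiler may contract multiply-adds into fused operations on the GPU), so a small tolerance is a safer comparison than exact equality.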
2021-10-22 18:34:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7146697044372559, "perplexity": 2011.543591616791}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00492.warc.gz"}
http://mathhelpforum.com/pre-calculus/143576-vector-help-straight-lines-print.html
vector help, straight lines

• May 7th 2010, 02:11 PM
Tweety

vector help, straight lines

Quote:

Relative to a fixed origin, the points A, B and C have position vectors (2i − j + 6k), (5i − 4j) and (7i − 6j − 4k) respectively.

(a) Show that A, B and C all lie on a single straight line.

Completely lost here, any help appreciated.

• May 7th 2010, 05:02 PM
Soroban

Hello, Tweety!

There are several ways to do this . . .

Quote:

Relative to a fixed origin, the points $A, B, C$ have position vectors:
. . (2i − j + 6k), (5i − 4j), (7i − 6j − 4k) respectively.
(a) Show that $A, B, C$ all lie on a single straight line.

We have: . $\begin{Bmatrix}A\!: & (2,\text{-}1,6) \\ B\!: & (5,\text{-}4,0) \\ C\!: & (7,\text{-}6,\text{-}4) \end{Bmatrix}$

First method: direction vectors.

Then: . $\begin{array}{cccccc}\overrightarrow{AB} &=& \langle 3,\text{-}3,\text{-}6\rangle &=& 3\langle 1,\text{-}1,\text{-}2\rangle \\ \overrightarrow{BC} &=& \langle 2,\text{-}2,\text{-}4 \rangle &=& 2\langle1,\text{-}1,\text{-}2\rangle \end{array}$

The vectors are parallel and both pass through point $B.$ Therefore, the points are collinear.

Second method: distances.

$\begin{array}{ccccccccc} |AB| &=& \sqrt{9 + 9 + 36} &=& \sqrt{54} &=& 3\sqrt{6} \\ |BC| &=& \sqrt{4 + 4 + 16} &=& \sqrt{24} &=& 2\sqrt{6} \\ |AC| &=& \sqrt{25 + 25 + 100} &=& \sqrt{150} &=& 5\sqrt{6} \end{array}$

Since $|AB| + |BC| \:=\:|AC|$, points $A,B,C$ are collinear.

Third method: the angle between the vectors.

We have: . $\begin{array}{ccc}\overrightarrow{AB} &=& \langle 3,\text{-}3,\text{-}6\rangle \\ \overrightarrow{BC} &=& \langle 2,\text{-}2,\text{-}4\rangle \end{array}$

$\text{The angle }\theta \text{ between }\overrightarrow{AB}\text{ and }\overrightarrow{BC}\text{ is given by: }\; \cos\theta \;=\;\frac{\overrightarrow{AB}\cdot \overrightarrow{BC}}{|\overrightarrow{AB}|\,|\overrightarrow{BC}|}$

We have: . $\cos\theta \;=\;\frac{\langle 3,\text{-}3,\text{-}6\rangle\cdot\langle 2,\text{-}2,\text{-}4\rangle}{\sqrt{9+9+36}\,\sqrt{4+4+16}} \;=\;\frac{6+6+24}{\sqrt{54}\,\sqrt{24}} \;=\;\frac{36}{36}\;=\;1$

Since $\cos\theta \:=\:1$, then: . $\theta \:=\:0 \quad\Rightarrow\quad \overrightarrow{AB} \:\parallel\: \overrightarrow{BC}$

Therefore, points $A,B,C$ are collinear.
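As a quick numerical cross-check (my addition, not part of the original thread — it uses the cross product, a method the thread doesn't), collinearity can be confirmed in a few lines of C:

    #include <stdio.h>

    int main(void)
    {
        /* Direction vectors AB and BC from the points above */
        double ab[3] = {5 - 2, -4 - (-1), 0 - 6};  /* ( 3, -3, -6) */
        double bc[3] = {7 - 5, -6 - (-4), -4 - 0}; /* ( 2, -2, -4) */

        /* The points are collinear iff AB x BC is the zero vector */
        double cx = ab[1]*bc[2] - ab[2]*bc[1];
        double cy = ab[2]*bc[0] - ab[0]*bc[2];
        double cz = ab[0]*bc[1] - ab[1]*bc[0];

        printf("AB x BC = (%g, %g, %g)\n", cx, cy, cz); /* (0, 0, 0) */
        return 0;
    }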
2018-01-17 18:23:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9348414540290833, "perplexity": 3793.3405184479934}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886952.14/warc/CC-MAIN-20180117173312-20180117193312-00712.warc.gz"}
https://math.stackexchange.com/questions/1453676/find-continuous-functions-that-satisfy-ffx-x-over-the-reals
# Find continuous functions that satisfy $f(f(x))=x$ over the reals.

I'm looking for a method to solve: $$f(f(x))=x$$ where $f$ is defined for $x \in \mathbb{R}$.

So far, by applying $f^{-1}$ to both sides, I have: $f(x)=f^{-1}(x)$, which means that my function should be symmetrical over $y=x$.

I may "guess" the functions:

$y=x$

$y=c-x$

However, I'm wondering: is there a way to solve this without "guessing"?

• $f(x)=c-x$ is also a solution. – Bernard Sep 27 '15 at 15:04
• @AhmedS.Attaalla It's not - the point $(0,1)$ belongs to this line, but $(1,0)$ doesn't. – Wojowu Sep 27 '15 at 15:07
• – A.Sh Sep 27 '15 at 15:08
• Wow, how did I miss that. I was thinking parallel. – Ahmed S. Attaalla Sep 27 '15 at 15:08
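As a one-line check of the second guess (an added note, not part of the original thread):

$$f(x) = c - x \implies f(f(x)) = c - (c - x) = x,$$

so every function of the form $y = c - x$ is indeed an involution, and its graph is its own reflection in the line $y = x$, consistent with the condition $f = f^{-1}$ above.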
2019-10-23 12:38:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6642839908599854, "perplexity": 429.42169577198536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00186.warc.gz"}
https://electronics.stackexchange.com/questions/329622/op-amp-circuit-explanation
# op-amp circuit explanation [closed]

Hi everyone. I'm studying op-amps, but I can't understand this point in the AC circuit: in the first picture it says Vo = Ic * Rc, and in the second picture I have the value of Ic. When I try to apply this in the given formula, the answer I get is wrong. Why? Is the second formula used instead of the first one? Is there a difference?

## closed as unclear what you're asking by Andy aka, Voltage Spike, Dave Tweed♦ Sep 30 '17 at 14:28

Your first picture has no component called $R_c$, but we can clearly see that $V_o = I_c R_c$ does not apply for the second picture; instead it is $$V_{R_c} = I_c R_c \qquad V_o = V_{cc} - I_c R_c$$ But when doing an AC analysis, $V_{cc}$ is removed, as it is a constant voltage source, and therefore $R_c$ is short-circuited to ground. Then you actually have $V_o = I_c R_c$.
2019-08-18 09:44:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6684660315513611, "perplexity": 420.2589800826408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313747.38/warc/CC-MAIN-20190818083417-20190818105417-00137.warc.gz"}
https://mathspace.co/textbooks/syllabuses/Syllabus-453/topics/Topic-8406/subtopics/Subtopic-111248/?activeTab=interactive
# Congruence and Similarity of Circles

## Interactive practice questions

We want to compare the circumference and area of two circles: circle A has radius $5$ units and circle B has radius $40$ units.

a. What is the circumference of circle A? Give your answer as an exact value.
b. What is the circumference of circle B? Give your answer as an exact value.
c. What is the area of circle A? Give your answer as an exact value.
d. What is the area of circle B? Give your answer as an exact value.
e. By what factor is the radius of circle A enlarged to give the radius of circle B?
f. By what factor is the circumference of circle A enlarged to give the circumference of circle B?
g. By what factor is the area of circle A enlarged to give the area of circle B?

A circle with a radius of $2$ units is enlarged by a factor of $7$.

Circle A has a perimeter measuring $10\pi$ cm, while circle B has a radius measuring $10$ cm. The ratio of the circumference of circle A to the circumference of circle B is $\frac{4}{3}$.
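The general pattern behind these questions (a brief added note, not part of the exercise set): if a circle's radius is scaled by a factor $k$, then

$$C = 2\pi r \;\mapsto\; 2\pi(kr) = kC, \qquad A = \pi r^2 \;\mapsto\; \pi (kr)^2 = k^2 A,$$

so the circumference scales by $k$ while the area scales by $k^2$. Any two circles are similar, with the enlargement factor given by the ratio of their radii.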
2022-01-25 07:56:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7313216924667358, "perplexity": 925.9528614160774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304798.1/warc/CC-MAIN-20220125070039-20220125100039-00590.warc.gz"}
https://ideas.repec.org/p/ecm/wc2000/0878.html
# Strategic Experimentation: The Case of the Poisson Bandits

Author(s):
• Martin Cripps (University of Warwick)
• Godfrey Keller (London School of Economics)
• Sven Rady (University of Munich)

Abstract

This paper studies a game of strategic experimentation in which the players learn from the experiments of others as well as their own. We first establish the efficient benchmark where the players co-ordinate in order to maximise joint expected payoffs, and then show that, because of free-riding, the strategic problem leads to inefficiently low levels of experimentation in any equilibrium when the players use stationary Markovian strategies. Efficiency can be approximately retrieved provided that the players adopt strategies which slow down the rate at which information is acquired; this is achieved by their taking periodic breaks from experimenting, which get progressively longer. In the public information case (actions and experimental outcomes are both observable), we exhibit a class of non-stationary equilibria in which the $\varepsilon$-efficient amount of experimentation is performed, but only in infinite time. In the private information case (only actions are observable, not outcomes), the breaks have two additional effects: not only do they enable the players to finesse the inference problem, but also they serve to signal their experimental outcome to the other player. We describe an equilibrium with similar non-stationary strategies in which the $\varepsilon$-efficient amount of experimentation is again performed in infinite time, but with a faster rate of information acquisition. The equilibrium rate of information acquisition is slower in the former case because the short-run temptation to free-ride on information acquisition is greater when information is public.

Suggested Citation

• Martin Cripps & Godfrey Keller & Sven Rady, 2000. "Strategic Experimentation: The Case of the Poisson Bandits," Econometric Society World Congress 2000 Contributed Papers 0878, Econometric Society.
• Handle: RePEc:ecm:wc2000:0878

File URL: http://fmwww.bc.edu/RePEc/es2000/0878.pdf (main text)

References listed on IDEAS:

1. Patrick Bolton & Christopher Harris, 1999. "Strategic Experimentation," Econometrica, Econometric Society, vol. 67(2), pages 349-374, March.
2018-06-19 11:10:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5913496017456055, "perplexity": 2386.448522838036}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862248.4/warc/CC-MAIN-20180619095641-20180619115641-00522.warc.gz"}
http://connection.ebscohost.com/c/articles/55252798/theorem-galambos-bojani-seneta-type
# A Theorem of Galambos–Bojanić–Seneta Type

AUTHOR(S): Djurčić, Dragan; Torgašev, Aleksandar
PUB. DATE: January 2009
SOURCE: Abstract & Applied Analysis; 2009, Special section, p1
DOC. TYPE: Article

ABSTRACT: In theorems of Galambos–Bojanić–Seneta type, the asymptotic behavior of the functions $c_{[x]}$, $x \ge 1$, for $x \to +\infty$, is investigated via the asymptotic behavior of a given sequence of positive numbers $(c_n)$, as $n \to +\infty$, and vice versa. The main result of this paper is one theorem of such a type for sequences of positive numbers $(c_n)$ which satisfy an asymptotic condition of the Karamata type $\lim_{n \to +\infty} c_{[\lambda n]}/c_n > 1$, for $\lambda > 1$.

ACCESSION #: 55252798
2018-01-19 20:14:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6065300703048706, "perplexity": 2149.0257855360046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888113.39/warc/CC-MAIN-20180119184632-20180119204632-00164.warc.gz"}
http://lottery-world.info/Probability_theory.htm
# Probability theory

Probability theory is the mathematical study of probability.

Mathematicians think of probabilities as numbers in the closed interval from 0 to 1, assigned to "events" whose occurrence or failure to occur is random. Probabilities $P(A)$ are assigned to events $A$ according to the probability axioms.

The probability that an event $A$ occurs given the known occurrence of an event $B$ is the conditional probability of $A$ given $B$; its numerical value is $P(A \cap B)/P(B)$ (as long as $P(B)$ is nonzero). If the conditional probability of $A$ given $B$ is the same as the ("unconditional") probability of $A$, then $A$ and $B$ are said to be independent events. That this relation between $A$ and $B$ is symmetric may be seen more readily by realizing that it is the same as saying $P(A \cap B) = P(A)P(B)$ when $A$ and $B$ are independent events.

Two crucial concepts in the theory of probability are those of a random variable and of the probability distribution of a random variable; see those articles for more information.

### A somewhat more abstract view of probability

Mathematicians usually take probability theory to be the study of probability spaces and random variables — an approach introduced by Kolmogorov in the 1930s. A probability space is a triple $(\Omega, \mathcal F, P)$, where

• Ω is a non-empty set, sometimes called the "sample space", each of whose members is thought of as a potential outcome of a random experiment. For example, if 100 voters are to be drawn randomly from among all voters in California and asked whom they will vote for governor, then the set of all sequences of 100 Californian voters would be the sample space Ω.
• $\mathcal F$ is a σ-algebra of subsets of Ω; its members are called "events". For example, the set of all sequences of 100 Californian voters in which at least 60 will vote for Schwarzenegger is identified with the "event" that at least 60 of the 100 chosen voters will so vote. To say that $\mathcal F$ is a σ-algebra implies by definition that it contains Ω, that the complement of any event is an event, and that the union of any (finite or countably infinite) sequence of events is an event.
• P is a probability measure on $\mathcal F$, i.e., a measure such that P(Ω) = 1.

It is important to note that P is a function defined on $\mathcal F$ and not on Ω, and often not on the complete power set $\mathcal F=\mathbb P (\Omega)$ either. Not every set of outcomes is an event.

If Ω is denumerable, we almost always define $\mathcal F$ as the power set of Ω, i.e. $\mathcal F=\mathbb P (\Omega)$, which is trivially a σ-algebra and the biggest one we can create using Ω. In a discrete space we can therefore omit $\mathcal{F}$ and just write (Ω, P) to define the probability space. If, on the other hand, Ω is non-denumerable and we use $\mathcal F=\mathbb P (\Omega)$, we get into trouble defining our probability measure P, because $\mathcal{F}$ is too "huge": there will often be sets to which it is impossible to assign a unique measure, giving rise to problems like the Banach–Tarski paradox. So we have to use a smaller σ-algebra $\mathcal F$ (e.g. the Borel algebra of Ω, which is the smallest σ-algebra that makes all open sets measurable).

A random variable is a measurable function on Ω. For example, the number of voters who will vote for Schwarzenegger in the aforementioned sample of 100 is a random variable.
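As a concrete finite illustration of these definitions (an added example, not from the original text), consider a single roll of a fair die:

$$\Omega = \{1,2,3,4,5,6\}, \qquad \mathcal F = \mathbb P(\Omega), \qquad P(A) = \frac{|A|}{6} \;\text{ for } A \in \mathcal F.$$

The event "an even number is rolled" is $A = \{2,4,6\}$, with $P(A) = 1/2$, and the identity map $X(\omega) = \omega$ is a random variable recording the number shown.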
If X is any random variable, the notation $P(X \ge 60)$ is shorthand for $P(\{ \omega \in \Omega \mid X(\omega) \ge 60 \})$, assuming that "$X \ge 60$" is an "event". For an algebraic alternative to Kolmogorov's approach, see the algebra of random variables.

### Philosophy of application of probability

There are different ways to interpret probability. Frequentists will assign probabilities only to events that are random, i.e., to random variables, which are outcomes of actual or theoretical experiments. On the other hand, Bayesians assign probabilities to propositions that are uncertain, according either to subjective degrees of belief in their truth, or to logically justifiable degrees of belief in their truth. Among statisticians and philosophers, many more distinctions are drawn beyond this subjective/objective divide; see the article on interpretations of probability at the Stanford Encyclopedia of Philosophy.

A Bayesian may assign a probability to the proposition that "there was life on Mars a billion years ago", since that is uncertain, whereas a frequentist would not assign probabilities to such statements at all. A frequentist is actually unable to technically interpret such uses of the probability concept, even though "probability" is often used in this way in colloquial speech. Frequentists only assign probabilities to outcomes of well-defined random experiments, that is, where there is a defined sample space as described above in the theory section. For another illustration of the differences, see the two envelopes problem.
2017-10-18 05:45:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8587886095046997, "perplexity": 393.5502760728302}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822747.22/warc/CC-MAIN-20171018051631-20171018071631-00542.warc.gz"}
https://leanprover-community.github.io/archive/stream/113489-new-members/topic/Aram.20Bingham.20(hello.20world).html
## Stream: new members

### Topic: Aram Bingham (hello world)

#### Aram Bingham (May 04 2020 at 04:34):

Hi! I am a PhD student working in algebraic combinatorics but I started playing through the numbers game and am interested in getting more involved with Lean. Nice to find you all here! Aram

#### Patrick Massot (May 04 2020 at 07:38):

Welcome Aram! Did you see this thread? Having feedback would be very useful to the community.

#### Johan Commelin (May 04 2020 at 08:16):

@Aram Bingham We're slowly trying to get some combinatorics and graph theory into mathlib. For combinatorics, @Bhavik Mehta would be a good person to speak to. Graph theory was happening on the hedentiemi branch of mathlib, but has been dormant for a couple of weeks.

#### Aram Bingham (May 04 2020 at 08:21):

Hi Patrick! Thanks for sharing; I will have a look soon.

#### Aram Bingham (May 04 2020 at 08:24):

Thanks for the pointers Johan, that's exciting to hear. I will get in touch when I get up to speed.

#### Mathieu Guay-Paquet (May 04 2020 at 20:17):

Welcome! And yay algebraic combinatorics :)

Last updated: May 17 2021 at 23:14 UTC
2021-05-17 23:29:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28275081515312195, "perplexity": 6004.950812969229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00503.warc.gz"}
https://physics.stackexchange.com/questions/385194/how-does-electromagnetism-work-in-a-negatively-curved-universe
# How does electromagnetism work in a negatively curved universe?

Let's say that there was a negatively curved universe (in particular $\Omega < 1$). I assume that means the universe would be like hyperbolic space. In our universe, electromagnetism obeys an inverse-square law. As it just so happens, the surface area of a sphere in our universe is proportional to the square of the radius of the sphere. In hyperbolic space, the surface area of a sphere grows exponentially as its radius increases. Does that mean electromagnetism obeys an exponential decay law, or what?

(As a follow-up question, is the same true of other inverse-square laws, such as gravity?)

The fall-off of field intensity is measured in physics by the van Vleck determinant: $$\Delta_\gamma (x,y) = (-1)^d \frac{\det (\nabla_\mu^x \nabla_\nu^y \sigma_\gamma(x,y))}{\sqrt{g(x)g(y)}}$$ which is defined for geodesics $\gamma$ between the points $x$ and $y$, with $\sigma_\gamma$ the geodetic interval between the two. The van Vleck determinant describes the expansion of the geodesic flow in spacetime. In particular, for ultrastatic spacetimes $$ds^2 = -dt^2 + g_{ij}(x^i) dx^i dx^j$$ the flux of a field at $y$ with a source at $x$ is described by $$\| \vec J \| = \frac{\Delta_\gamma(\vec x, \vec y)}{s^{d-1}}$$ with $s$ the spacelike distance between the two points and $\Delta$ the van Vleck determinant of the spacelike hypersurface. In the weak-field limit, the van Vleck determinant is $$\Delta_\gamma(x, y) \approx 1 + \frac 16 R_{ab} t^a t^b s^2_\gamma(x, y)$$ with $t$ the tangent of $\gamma$. Hence, for hyperbolic space, with the spacelike hypersurface satisfying $$R_{ab} = \frac{d-1}{\alpha} g_{ab}$$ we have the approximation $$\| \vec J \| = \frac{1}{s^{d-1}} + \frac{d-1}{6\alpha} \frac{|t|}{s^{d-3}}$$ So globally the flux drop-off will depend on the distance in hyperbolic space, for instance in the hyperboloid model $$s_\gamma(x,y) = \operatorname{arcosh}(x^0 y^0 - \sum x^i y^i)$$ For more details on the van Vleck determinant you can check "Relativity: the General Theory" by Synge, or the paper by Visser on the topic.

• So it would asymptotically decay exponentially with distance, right? – PyRulez Feb 9 '18 at 16:07
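To see the conclusion numerically, here is a small sketch (my own illustration, not taken from the answer above) for $d = 3$, where a geodesic sphere of radius $s$ in hyperbolic space of curvature radius $\alpha$ has area $4\pi\alpha^2\sinh^2(s/\alpha)$, so a conserved flux falls off like $1/\sinh^2(s/\alpha)$, which is inverse-square for $s \ll \alpha$ and exponential for $s \gg \alpha$:

```python
import math

ALPHA = 1.0  # curvature radius of the hyperbolic space (set to 1 here)

def flux_flat(s):
    # Flat 3-space: sphere area ~ s^2, hence the inverse-square law.
    return 1.0 / s**2

def flux_hyperbolic(s):
    # H^3: sphere area ~ sinh(s/ALPHA)^2, hence ~ 4*exp(-2*s/ALPHA) at large s.
    return 1.0 / math.sinh(s / ALPHA)**2

for s in (0.1, 1.0, 5.0, 10.0):
    print(f"s = {s:5.1f}   flat: {flux_flat(s):9.3e}   hyperbolic: {flux_hyperbolic(s):9.3e}")
```

At $s = 10\alpha$ the hyperbolic flux is already many orders of magnitude below the inverse-square value, consistent with the exponential decay asked about in the comment.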
2020-01-25 00:05:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 7, "x-ck12": 0, "texerror": 0, "math_score": 0.8937966823577881, "perplexity": 373.5348682773901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250626449.79/warc/CC-MAIN-20200124221147-20200125010147-00380.warc.gz"}
https://stats.stackexchange.com/questions/418251/what-is-the-basis-for-statistical-inference
# What is the basis for statistical inference?

I wrote in a document something similar to this:

Based on the collected samples, we hope to infer ...

I was asked to provide a citation for my claim. I guess I can claim that I don't need a citation because sample-based inference is well known, but it made me question myself. Why can we say that we can characterize a population based on some samples? Who came up with this and where is their research? I was thinking of the central limit theorem, but my understanding is that it only applies to populations that have distributions that can be approximated with a normal distribution. So the question is whether the claim is even true that we can use samples to make statistical inferences even when the population distribution is unknown. If yes, what is the theorem that says this?

• Central limit theorems refer to what results look like, not the data. But the history of sampling is as long as you like. Early people tested small amounts to make decisions on whether to eat something, bathe somewhere, choose a partner... There was no sharp beginning. – Nick Cox Jul 19 at 12:27

• It is intuitive that if you eat a fruit from a tree and you don't die, it is safe to eat more fruits, but nature doesn't have to follow human intuition. Regardless, is there anyone who formalized this? – Andrei Jul 19 at 12:40

• Every area of research has 'knowledge' that is so commonly used and so uncontroversial that you don't need to use any reference. Publishing guides by most journals also state something along those lines. It would be like including a proof of the Pythagorean theorem in a paper that includes the calculation of a hypotenuse somewhere, or adding a reference for gravity on Earth being around $9.81$ m/s$^2$. Unless the essence of your paper is sample-based inference (e.g. its history), I think this is completely unnecessary. – Frans Rodenburg Jul 19 at 12:55

• @FransRodenburg I agree it's unnecessary, but it made me think, and now I want to know :) – Andrei Jul 19 at 12:57

• This is a cheap reply but everything still hinges on what counts as "formalized". – Nick Cox Jul 19 at 12:58
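One technical footnote to the question (my addition, not from the thread): the classical central limit theorem assumes only finite variance of the population, not normality; it is the distribution of the sample mean that becomes approximately normal. A quick simulation sketch with a strongly skewed, decidedly non-normal population:

```python
import random
import statistics

random.seed(0)

N, TRIALS = 50, 20_000  # sample size and number of repeated samples

def sample_mean():
    # Population: exponential(1), strongly skewed and far from normal.
    return statistics.fmean(random.expovariate(1.0) for _ in range(N))

means = [sample_mean() for _ in range(TRIALS)]
print("mean of sample means:", round(statistics.fmean(means), 3))  # ~1.0 (population mean)
print("sd of sample means:  ", round(statistics.stdev(means), 3))  # ~1/sqrt(50) ~ 0.141
```

The histogram of `means` would look close to a bell curve even though each individual observation is exponential, which is the sense in which sample-based inference works without knowing the population distribution.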
2019-12-11 21:59:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5262523293495178, "perplexity": 490.0038248227536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540533401.22/warc/CC-MAIN-20191211212657-20191212000657-00548.warc.gz"}
https://brilliant.org/discussions/thread/international-mathematical-olympiad-imo-hall-of/
# International Mathematical Olympiad (IMO) Hall of Fame

Hi, guys

From today, I will be posting select problems from the IMO papers! Whoever solves a problem and gives the official solution first will be recognised. You can also tell me which IMO paper you want the next question to be from - I have a full library of UKMT and IMO past papers and solutions.

I haven't got individual IMO solutions yet, so for now the IMO questions will come from the papers from $1970$ to $2019$.

If nobody has posted the official solution within $2$ days, I will post it (as a snapshot) and recognition will be lost (last submission is at $23:58$ on the day after the problem is posted), unless somebody has already posted a solution. I will check whether it's the official solution; if it's not, the person's solution will be mentioned as an attempt (it must contain bits of the official solution - minimum $1$). If it is the official solution, the person will be recognised as a true solver.

A full list will appear below in sections:

Algebra:

2019, Problem 4, Day 2 - Hamza Anushath - Attempt goes to Alak Bhattacharya for including one bit of the official solution.

2019, Problem 1, Day 1 - Lost - Attempt goes to Tom Englesman for including bits of the official solution.

2018, Problem 2, Day 1 - Lost - Attempt goes to nobody

2018, Problem 3, Day 1 - Hamza Anushath - Attempt goes to nobody

2018, Problem 4, Day 2 - Lost - Attempt goes to nobody

2018, Problem 5, Day 2 - Lost - Attempt goes to nobody

2017, Problem 1, Day 1 - Lost - Attempt goes to Ved Pradhan for including one bit of the official solution.

2017, Problem 2, Day 1 - Lost - Attempt goes to nobody

2017, Problem 5, Day 2 - Lost - Attempt goes to nobody

2017, Problem 6, Day 2 - Lost - Attempt goes to nobody

Number Theory:

2019, Problem 3, Day 1 - Lost - Attempt goes to nobody

Geometry:

2017, Problem 3, Day 1 - Lost - Attempt goes to nobody

Note by A Former Brilliant Member
1 year, 1 month ago
Sort by:

I'm not good enough for IMO :(

- 1 year, 1 month ago

I'm not good enough for PRMO :(

- 1 year, 1 month ago

I thought I was good enough for both... Reality set in after I attempted PRMO for the 1st time.

- 1 year, 1 month ago

I might not be able to attempt again... although if I do, I think I can get till RMO

- 1 year, 1 month ago

Or maybe that's just wishful thinking

- 1 year, 1 month ago

When did you give PRMO, and what was your score?

- 1 year, 1 month ago

I don't remember the score, but I know it was two marks below the cutoff

- 1 year, 1 month ago

Ok, this year? And from which state?

- 1 year, 1 month ago

Yeah, this year. Maharashtra. What was your state?

- 1 year, 1 month ago

The thing was - I focused too much on geometry, and ignored the number theory and algebra. Later I realised I should have done the opposite. I did manage to solve a few questions, but took too much time for them. My friend though - he wrote the INMO in 2020, same as you did. I didn't ask him his score as our Pre-Boards were going on, and then there was Boards. But he got a good score in RMO - 40 or above.

- 1 year, 1 month ago

Ohh, All the best for next year (or maybe for the next two months (if they allow (due to COVID)))

- 1 year, 1 month ago

Thanks. All the best to you too (if you're giving it). Maybe you'll write IMO next year, or in 2022.

- 1 year, 1 month ago

Thank you Sachetan! I hope that happens!

- 1 year, 1 month ago

How can I submit my answer in the first problem? The answer must be an integer.

- 1 year, 1 month ago

Take a look at the solutions.

- 1 year, 1 month ago

- 1 year, 1 month ago

k=1;n=1 and k=3;n=2

- 1 year, 1 month ago

The answer is the number of pairs - $2$ in your case.

- 1 year, 1 month ago

Yes.

- 1 year, 1 month ago

- 1 year, 1 month ago

What? $\LaTeX$?

- 1 year, 1 month ago

Yes!

- 1 year, 1 month ago

Thanks once again @Yajat Shamji!

- 1 year, 1 month ago

Well, the solution is like the official one, so...

- 1 year, 1 month ago

Also, I have a task for you: can you find IMO solutions from $1959$ to $1999$? (I have got solutions from $2000$ to $2019$).

- 1 year, 1 month ago

Hahaha! I have solutions from 1959 to 2003, but not in English.

- 1 year, 1 month ago

Can you find ones in English? And thank you! Also, I don't need solutions from $2000$ to $2003$ - I have already got them. @Páll Márton

- 1 year, 1 month ago

All the questions are in the link. This may not be ordered, but it may have all the solutions for the IMO questions

- 1 year, 1 month ago

I need PDFs! But, @Páll Márton has reduced the search - I now only need solutions from $1959 - 1969$. Nice try, @Hamza Anushath, though!

- 1 year, 1 month ago

- 1 year, 1 month ago

- 1 year, 1 month ago

It's in English, yes?

- 1 year, 1 month ago

Yes?

- 1 year, 1 month ago

I need PDFs, sorry! But really good effort!

- 1 year, 1 month ago

I have already got this document - I now need IMO solutions from $1959$ to $1969$!

- 1 year, 1 month ago

I don't understand. You need solutions from 1959 to 1969. And in this pdf you can see the solutions from 1959 to 1969 and from 1969 to 2009.

- 1 year, 1 month ago

- 1 year, 1 month ago

Thanks for shortening the search! :D

- 1 year, 1 month ago

It is very difficult for me to understand these tasks in English. I think everyone can find these tasks on the internet in the language that is best for them, rather than having to share them here.
- 1 year, 1 month ago

You are right, @Páll Márton. But I want to post as much IMO as I can. Besides, most of my ideas are rubbish anyways...

- 1 year, 1 month ago

Me too. I can create logic problems based on ambiguous words, but not in English.

- 1 year, 1 month ago

Take a look at my profile and look through the questions before I started the IMO series. Is there not one problem where I have not got negative criticism?$^{\infty}$

- 1 year, 1 month ago

Also, I can only solve $28.19\%$ of all BRILLIANT problems. And, when I give a solution, either I am forced to delete it or bear negative criticism... Clearly, the platform of BRILLIANT has decided I am a bad mathematician who tinkers too much... And, clearly, I have been branded as an insane mathematical outcast by the BRILLIANT community...

- 1 year, 1 month ago

@Yajat Shamji, I regard you (and others might as well) as a great person in mathematics. But only one question: How did you get your wrong/right percentage? Did you calculate it (great work if you did, I never thought of that...) or is it shown somewhere @Yajat Shamji?

- 1 year, 1 month ago

$\frac{\text{Problems solved}}{\text{Problems attempted}} \times 100$

Problems solved and attempted are in Stats.

- 1 year, 1 month ago

Look, most of the great scientists - and mathematicians - have been branded outcasts at some point or other. So at least you're in good company. Don't take it that seriously. Second, criticism is a very useful thing: if valuable, you can profit from it; if not, you can simply ignore it. You feel like a bad mathematician now because you are comparing yourself to genii like Ramanujan, Gauss, etc. But, dude, this is an infinite world, with infinite variety. Where would the fun be if all of us could do whatever we wanted to do in a millisecond, with no work required? How would it be called learning otherwise? I suggest you take some time off Math. Explore other interests, then when you feel ready, come back. There are lots of things you can do to learn (not involving Brilliant). Hone your skills to the utmost possible, and people will start respecting you. And if some still don't - well, at least they would have become fewer, right? People disrespect greats - Messi, Newton, Paul Erdos, etc. - all the time. Do they take it seriously? Take criticism which is valid sportingly, and use it to improve your skills. That way, you can profit immensely. Just a note - Einstein did math very slowly. He didn't memorize anything. Yet he said it helped him because he could understand things better that way, their whys and not just their hows. Not saying you should copy that blindly - you should never do that - but try to get the how and why, and think on that. Keep moving even if you fail, for failure is like friction - it helps us walk, or run. Do what is right (not in the sense of morality), not necessarily what most people do, or what you want to do now - because the latter changes continuously, the former always remains the same. Hope you got what I'm trying to say.

- 1 year, 1 month ago

No, I don't get it. Also, you're becoming peachy...

- 1 year, 1 month ago

Preachy. Yeah, I meant to delete this comment. I can be kind of insensitive at times

- 1 year, 1 month ago

No, peachy...

- 1 year, 1 month ago

OK, I'm unfamiliar with its use in that way.

- 1 year, 1 month ago

I'm trying to say, don't delete it: Peachy (North American) means fine / very satisfactory...

- 1 year, 1 month ago

that's all right then

- 1 year, 1 month ago

I mean, it is annoying that you always tag us again and again.
And for getting solutions, there are a lot of answers on the AOPS (Art of Problem Solving) website.

- 1 year, 1 month ago

What? All I want to do is to show people this. And why not tag? I understand your frustration, but, according to me, this is the best way to get people...

- 1 year, 1 month ago

Seriously, @Vinayak Srivastava, @Adhiraj Dutta? I called you because I know you can do it. Believe in yourself. Besides, all the questions look hard but they only require up to A-Level knowledge...

- 1 year, 1 month ago

I haven't even learnt the basics!

- 1 year, 1 month ago

What, you can't attempt even the first question I have posted?...

- 1 year, 1 month ago

I got it correct (guessed it though)!

- 1 year, 1 month ago

Great! And answer to my previous comment...

- 1 year, 1 month ago

@Hamza Anushath, look above...

- 1 year, 1 month ago

@Hamza Anushath, look above in the IMO Hall of Fame...

- 1 year, 1 month ago

(°U° ) :D Thnx a lot @Yajat Shamji

- 1 year, 1 month ago

Well, I changed the date and after seeing what you did, I thought it would be fair to give you the recognition. Also, check out the 2018 problems, quick!

- 1 year, 1 month ago

On it sir!

- 1 year, 1 month ago

No need to call me sir, @Hamza Anushath. Call me Yaj (my new nickname for friends...).

- 1 year, 1 month ago

Ooh... lovely nicknames for the lovely couple... @NSCS 747 @Frisk Dreemurr evil chuckling in the distance evil chuckling growing to an evil laugh MUAHAHAHAHAAHAHAHA oh shit, oh god, oh fu- deep inhale oh fuck

- 9 months ago

I said it, I said it, I said it "day after day, we stray further from god"

- 9 months ago

Not really... I've said that multiple times, but Dad doesn't seem to stray away from me :P fuck that how are you not angry she leaked the fucking nuke

- 9 months ago

See, now you are expressing your desires like how you want to... do the frick with... him... lmfao lol

- 9 months ago

frisk fucking delete the abyss trick in your big brain time note you fucking idiot the public is not meant to know and I thought of all people you would know this was meant to be a fucking secret and now lam le knows and if he has actually made a fucking million line abyss then brilliant will be kicking all of our asses

- 9 months ago

It is deleted now, all removed

- 9 months ago

Did you actually check the note now, not a trace to be seen. Give a sec, I have another idea

- 9 months ago

Nobody has solved $2018$, Problem $4$, Day $2$ yet...

- 1 year, 1 month ago

Refresh the page! Maybe I'll write a solution, but I finished school today, and I must go to school.

- 1 year, 1 month ago

I was doing school as well online. I go to physical school on Friday...

- 1 year, 1 month ago

I must do everything online too. In our school we had to photograph the copybooks.

- 1 year, 1 month ago

The size of the pictures

- 1 year, 1 month ago

No way! Check out this: But, before, it was over $5$GB!

- 1 year, 1 month ago

Solved, no solution...

- 1 year, 1 month ago

Does anybody want me to post any Geometry questions or Number Theory questions from the $2016$ paper? If so, ask now. @Hamza Anushath, @Zakir Husain, @Mahdi Raza, @Alak Bhattacharya

- 1 year, 1 month ago

@Yajat Shamji - One of your problems is wrong, see 2017, Problem 1, Day 1 and see the reports

- 1 year, 1 month ago
2021-08-04 16:57:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 37, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7015478610992432, "perplexity": 3362.7894586046677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154878.27/warc/CC-MAIN-20210804142918-20210804172918-00383.warc.gz"}
https://cantera.org/documentation/docs-2.0/doxygen/html/IdealSolnGasVPSS_8cpp.html
Cantera 2.0

IdealSolnGasVPSS.cpp File Reference

Definition file for a derived class of ThermoPhase that assumes either an ideal gas or ideal solution approximation and handles variable pressure standard state methods for calculating thermodynamic properties (see Thermodynamic Properties and class IdealSolnGasVPSS).

#include "cantera/thermo/IdealSolnGasVPSS.h"
#include "cantera/thermo/VPSSMgr.h"
#include "cantera/thermo/PDSS.h"
#include "cantera/thermo/mix_defs.h"
#include "cantera/thermo/ThermoFactory.h"
#include "cantera/base/stringUtils.h"

Include dependency graph for IdealSolnGasVPSS.cpp.

## Namespaces

namespace Cantera
Provides class Nucleus.

## Detailed Description

Definition file for a derived class of ThermoPhase that assumes either an ideal gas or ideal solution approximation and handles variable pressure standard state methods for calculating thermodynamic properties (see Thermodynamic Properties and class IdealSolnGasVPSS).

Definition in file IdealSolnGasVPSS.cpp.
2023-04-01 13:58:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20747382938861847, "perplexity": 11953.054246538111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950030.57/warc/CC-MAIN-20230401125552-20230401155552-00617.warc.gz"}
https://stats.stackexchange.com/questions/429730/wmape-wape-for-the-evaluation-of-time-series-with-positive-and-negative-values
# WMAPE / WAPE for the evaluation of time series with positive and negative values

I have a time series y, with both positive and negative values, that I want to predict. For the prediction I normalize the values to a range between 0 and 1. If I feed the normalized actuals and forecasts into WAPE / WMAPE, I get an error of ~5%. However, if I denormalize the actuals and forecasts back to the original span with negative and positive values and then put them into WAPE / WMAPE, I get an error of ~15%. Which of the error measurements is correct?
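One relevant observation (my addition, not an answer from the thread): WAPE is not invariant under the affine shift that min-max normalization applies, because the denominator Σ|actual| changes when the series is shifted, so the two numbers genuinely measure different things. A small sketch with made-up values showing the effect:

```python
def wape(actual, forecast):
    # Weighted absolute percentage error: sum of |errors| / sum of |actuals|.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(abs(a) for a in actual)

actual   = [-4.0, -1.0, 2.0, 5.0]   # hypothetical series with both signs
forecast = [-3.5, -1.5, 2.5, 4.5]

lo, hi = min(actual), max(actual)
norm = lambda xs: [(x - lo) / (hi - lo) for x in xs]

print("WAPE on the original scale:", round(wape(actual, forecast), 3))            # 0.167
print("WAPE after min-max scaling:", round(wape(norm(actual), norm(forecast)), 3)) # 0.111
```

The normalized figure depends on the arbitrary constants `lo` and `hi`, so arguably only the value computed on the original scale describes the accuracy of the forecasts for the series one actually cares about.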
2020-08-14 18:01:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7991884350776672, "perplexity": 701.9498238404766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739347.81/warc/CC-MAIN-20200814160701-20200814190701-00511.warc.gz"}
http://hal.in2p3.fr/in2p3-01181351
# Measurement of the branching ratio $\Gamma(\Lambda_b^0 \rightarrow \psi(2S)\Lambda^0)/\Gamma(\Lambda_b^0 \rightarrow J/\psi\Lambda^0)$ with the ATLAS detector

Abstract : An observation of the $\Lambda_b^0 \rightarrow \psi(2S) \Lambda^0$ decay and a comparison of its branching fraction with that of the $\Lambda_b^0 \rightarrow J/\psi \Lambda^0$ decay have been made with the ATLAS detector in proton-proton collisions at $\sqrt{s}=8\,$TeV at the LHC using an integrated luminosity of $20.6\,$fb$^{-1}$. The $J/\psi$ and $\psi(2S)$ mesons are reconstructed in their decays to a muon pair, while the $\Lambda^0\rightarrow p\pi^-$ decay is exploited for the $\Lambda^0$ baryon reconstruction. The $\Lambda_b^0$ baryons are reconstructed with transverse momentum $p_{\rm T}>10\,$GeV and pseudorapidity $|\eta|<2.1$. The measured branching ratio of the $\Lambda_b^0 \rightarrow \psi(2S) \Lambda^0$ and $\Lambda_b^0 \rightarrow J/\psi \Lambda^0$ decays is $\Gamma(\Lambda_b^0 \rightarrow \psi(2S)\Lambda^0)/\Gamma(\Lambda_b^0 \rightarrow J/\psi\Lambda^0) = 0.501\pm 0.033 ({\rm stat})\pm 0.019({\rm syst})$, lower than the expectation from the covariant quark model.

Document type : Journal articles

http://hal.in2p3.fr/in2p3-01181351
Contributor : Sabine Starita
Submitted on : Thursday, July 30, 2015 - 9:12:29 AM
Last modification on : Tuesday, November 5, 2019 - 4:32:43 PM

### Citation

G. Aad, M.K. Ayoub, A. Bassalat, C. Becot, S. Binet, et al.. Measurement of the branching ratio $\Gamma(\Lambda_b^0 \rightarrow \psi(2S)\Lambda^0)/\Gamma(\Lambda_b^0 \rightarrow J/\psi\Lambda^0)$ with the ATLAS detector. Physics Letters B, Elsevier, 2015, 751, pp.63-80. ⟨10.1016/j.physletb.2015.10.009⟩. ⟨in2p3-01181351⟩
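A quick arithmetic aside (my addition, not part of the HAL record): if the quoted statistical and systematic uncertainties are treated as independent, the usual quadrature combination gives a total uncertainty of about 0.038 on the ratio:

```python
import math

ratio = 0.501
stat, syst = 0.033, 0.019

total = math.hypot(stat, syst)  # sqrt(stat^2 + syst^2), assuming independence
print(f"ratio = {ratio} +/- {total:.3f}")  # 0.501 +/- 0.038
```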
2019-11-19 04:46:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9460978507995605, "perplexity": 2948.0394068843502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670006.89/warc/CC-MAIN-20191119042928-20191119070928-00540.warc.gz"}
https://www.ncatlab.org/nlab/show/Cotor
# Contents ## Idea The derived functor of a cotensor product functor of comodules is often called “Cotor”, in analogy with the notation “Tor” for the derived functor of a tensor product functor. ## References Last revised on May 12, 2016 at 07:21:53. See the history of this page for a list of all contributions to it.
2021-12-07 17:36:18
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9808782935142517, "perplexity": 916.3659714098007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00398.warc.gz"}
https://www.mersenneforum.org/showthread.php?s=cf3746e3acd169c2d5442b11dff2c113&p=536426
mersenneforum.org

2020-01-30, 00:32 #111
Xyzzy
"Mike" Aug 2002
5×23×67 Posts

2020-02-01, 21:55 #112
chalsall
If I May
"Chris Halsall" Sep 2002
2×4,643 Posts

Quote:
Originally Posted by chalsall
I'm going to run an experiment, and use most of it for a swap partition. I sometimes want to do Blender rendering jobs which won't fit in my main workstation's RAM, so I have to spin up a "cloud" instance.

So, my world finally stabilized a little bit, and I was just now able to install the 256 GB SSD. 100 GB for a Fedora 31 root partition, and 120 GB of swap. Left 36 GB un-partitioned for additional fail-over capacity.

I haven't tried running a big Blender job which causes swapping yet, but that will be very interesting. I have to say I've never worked on an SSD-based machine before. Wow! Snappy! The latency in the polished rust is nice to get away from.

2020-02-01, 23:09 #113
Runtime Error
Sep 2017 USA
2^5·5 Posts

Quote:
Originally Posted by VBCurtis
[...] when the FFT data fits into the CPU cache. That's why on some machines (and for small-enough FFT sizes), one can get nice timings without crushing memory bandwidth. This applies generally either to tests with FFT sizes far below the Prime95 interest level (say, exponents below 10M on other projects), or on Xeons with large L3 caches.

Thanks for the great explanation! My next naive question is of course, "Why don't CPUs come with larger caches?" Eons ago, I bought a wicked awesome custom computer from a local company and remember upgrading the cache. I'm not aware that is a build option anymore. Do motherboards still support cache expansion? Is there a non-monetary benefit to having smaller caches? I imagine that if you had a (say) whole gigabyte L4 cache it wouldn't be very efficient to quickly access the bits you need immediately next, but wouldn't it be faster than going to RAM?

P.S. Sorry for taking this thread away from "big memory", and thanks for the ELI5 explanations, perhaps I should send in tuition payments. You folks rock.

2020-02-01, 23:38 #114
M344587487
"Composite as Heck" Oct 2017
655 Posts

Quote:
Originally Posted by Runtime Error
Thanks for the great explanation! My next naive question is of course, "Why don't CPUs come with larger caches?" Eons ago, I bought a wicked awesome custom computer from a local company and remember upgrading the cache. I'm not aware that is a build option anymore. Do motherboards still support cache expansion? Is there a non-monetary benefit to having smaller caches? I imagine that if you had a (say) whole gigabyte L4 cache it wouldn't be very efficient to quickly access the bits you need immediately next, but wouldn't it be faster than going to RAM? P.S. Sorry for taking this thread away from "big memory", and thanks for the ELI5 explanations, perhaps I should send in tuition payments. You folks rock.

Motherboards do not support cache expansion. For L3 cache (and probably the rest) there is a latency penalty associated with larger caches, but the benefits generally outweigh the negatives.

2020-02-01, 23:41 #115
mackerel
Feb 2016 UK
389 Posts

Not an expert in the area, but I understand that bigger cache = slower access. That's why there are usually 3 levels of cache on x86 CPUs: small but ultra-fast next to the cores, medium size and medium speed in the 2nd tier, and relatively big but slower in the 3rd tier. Then you reach RAM. There is some overhead to keeping track of what data is in the cache.
I think L2 is generally tied to the core, but L3 can be shared between cores to some extent.

Broadwell consumer desktop CPUs were an oddball, with 128MB of L4 cache. For its time, that was great, as it was practically unlimited by RAM bandwidth for prime number finding. I didn't lose performance even running a single stick of slow RAM, since it worked out of the cache. However its cache speed isn't amazing today, so if it were to be revisited, it would have to be much faster.

Removable cache isn't really a thing any more, unless you count Intel Optane, but that acts more like an extra tier between RAM and bulk storage, so it is not of direct help here.

2020-02-02, 01:27 #116
xx005fs
"Eric" Jan 2018 USA
2^4×13 Posts

Quote:
Originally Posted by Runtime Error
Thanks for the great explanation! My next naive question is of course, "Why don't CPUs come with larger caches?" Eons ago, I bought a wicked awesome custom computer from a local company and remember upgrading the cache. I'm not aware that is a build option anymore. Do motherboards still support cache expansion? Is there a non-monetary benefit to having smaller caches? I imagine that if you had a (say) whole gigabyte L4 cache it wouldn't be very efficient to quickly access the bits you need immediately next, but wouldn't it be faster than going to RAM? P.S. Sorry for taking this thread away from "big memory", and thanks for the ELI5 explanations, perhaps I should send in tuition payments. You folks rock.

If you have ever seen a die shot of the Ryzen 3000 series, you realize that the cache blocks take a significant amount of die space for the tiny amount of storage provided. In the future, as cores get more complex and process technology improves, more cache can definitely be crammed onto the die, or even an L4 cache structure like Broadwell's that's on package (HBM memory on video cards follows a similar idea to Broadwell's L4 cache). As the cache gets bigger, bandwidth increases, but on the other hand latency also increases, though not significantly so. The whole point of having a cache integrated within a CPU is that its latency is significantly lower than memory latency, and the closer it is to the CPU, the lower the latency. Motherboard expansions would just defeat the whole purpose of having CPU cache, and it would behave much like RAM (for example, L3 cache latency can be in the single-digit nanoseconds, while memory latencies are generally above 60 ns).

2020-02-02, 05:44 #117
LaurV
Romulan Interpreter
Jun 2011 Thailand
2^2×3×739 Posts

Paraphrasing what the previous posters said, CPUs do come with larger caches, but larger cache = slower access, and larger cache + faster access = buckets of money, because each row of cell blocks you add may double or triple the silicon, and decrease the fabrication yield.

Re upgrading the cache: you are confusing different layers of cache. The fastest one was always inside the CPU (except for the very early models of CPUs, which had no cache). The slowest one, less expensive, may be outside the CPU and could still be upgradable on some mobos, but current RAM memories are fast enough and provide a wide-enough bus to make external cache obsolete. Many systems could have multiple layers of cache, with pipelines for both instructions and data, of which the fastest one (in small amount) is always internal (in the CPU) and the slowest one (in larger amount) is inside your RAM stick. What makes the RAM "slow" is the multiplexing of the pins, and the refresh cycle (dynamic RAM).
Memories can be static or dynamic. In their quest to make larger memories with smaller dimensions and a cheaper price, manufacturers decreased the RAM cells to such small, micronic dimensions that a cell is no longer able to hold its information for a long time, so the memory cell needs a periodic "refresh". This means "somebody" must read the content of the memory every few milliseconds, and write it back. If you found a zero, write a zero. If you found a one, write a one. If you forget to do that, after a few more milliseconds the content of the cell is lost (the static charge stored there discharges through the parasitic circuits of the cell) and the memory will contain unreliable, random data. That is called "dynamic" memory, as opposed to "static" memory, where the cells are big enough to retain the electric charge as long as power is applied to them, and no refresh cycle is needed.

Also, to reduce cost, manufacturers multiplex the access lines for data and addresses. Imagine the memory like a neighborhood map, with horizontal and vertical streets, and houses at the corners, arranged in square fashion. You can tell the postman "give this letter to the house at the intersection of horizontal street 17 with vertical street 23". He will know exactly where to go. Alternatively, imagine the postman is a bit stupid and he can't remember two numbers, but only one. You will have to tell him "go to horizontal street 17 and when you are there, call me back". Then, when he is there, he'll call you and you can tell him "go on that street till you intersect vertical street 23 and leave the letter there". This is called a "multiplexed address". It takes more time to reach the destination, and you need more communication to get there.

In silicon terms, what makes an integrated circuit expensive nowadays is not the amount of memory, but the package, and the number of pins. For example, one Cortex ARM microcontroller (in short, MCU) with 64 pins and 128K memory costs $2; if you need to double the memory to 256k, the new MCU will cost $2.20, or if you need only half of the memory, 64k, you could pay $1.80. That is because memory is just one grain of sand/silicon more or less, inside the package. But, for example, if you only need a few inputs/outputs, and move to a package with 48 pins, then the price is $0.80, and if you need more I/Os, like when you have to drive a lot of stuff, or LCDs, then you will have to pay $4 or $5 for the 80-pin or 100-pin packages, which are much larger and have a lot more metal (the pins) and wires bonded inside (wires made of gold, or gold bumps for non-bond dies), in spite of the fact that exactly the same chip/die is used in all 4 packages (in the packages with low pin count, some MCU pins are not used, and not bonded internally).

Back to memories: manufacturers reduce costs by making integrated circuits (ICs) with fewer pins and smaller packages, but the drawback is that you have less bandwidth to communicate with them; your postman can only remember one number. In memory terms, you need to send the address of the cell you want to access in two (or more) chunks: send the first half of it now, and the second half a bit later, because you do not have enough communication lines (channels) to transmit it all at once. This also applies when you read the data back.

So, static memories are much faster (no refresh needed, and most of them need no multiplexing for addresses or data, though there are static RAMs which are multiplexed too). But they are bulky and expensive.
Cache memories are something derived from static memories, interposed between your dynamic RAM and your CPU. Every time you read something from the dynamic RAM, the information is stored in the cache too. The next time you need the same data, it is already in the cache and is read from there, without accessing the (slow) dynamic RAM. That is all the trick. Well, mainly. You can have one or more layers of cache: the fastest, most expensive, and smallest in size toward the CPU; the slower and cheaper, coming in larger amounts, toward the RAM.

There are also systems with completely static RAM and a bus wide enough to make multiplexing unnecessary. These systems, you can consider, have "only cache memory", as they have zero-latency reads and writes to the static cells. But they can get bloody expensive. Imagine that it is not only the 512 pins to read the data non-muxed, but you also need 512 pins on the CPU/GPU side, and 512 copper tracks on the PCB/mobo/chipset, whatever... A lot of metal, a lot of money, not to mention how much EMI (electromagnetic interference) all these parallel lines generate, and the investment you need to make to protect and shield against such things... That is why new/fast GPUs are so bloody expensive...

Last fiddled with by LaurV on 2020-02-02 at 07:11

2020-02-02, 07:03 #118
VBCurtis
"Curtis"
Feb 2005 Riverside, CA
2^3×3^2×61 Posts

Quote:
Originally Posted by Runtime Error
Thanks for the great explanation! .... P.S. Sorry for taking this thread away from "big memory", and thanks for the ELI5 explanations, perhaps I should send in tuition payments. You folks rock.

You're quite welcome, and thank you for the kind words. This place is full of inquisitive idiots, some of whom enjoy sharing our findings with the freshly arrived ones. Welcome!

2020-02-02, 09:37 #119
M344587487
"Composite as Heck" Oct 2017
655 Posts

Quote:
Originally Posted by xx005fs
... In the future, as cores get more complex and process technology improves, more cache can definitely be crammed onto the die, or even an L4 cache structure like Broadwell's that's on package (HBM memory on video cards follows a similar idea to Broadwell's L4 cache). ....

I can see HBM potentially becoming an L4 of sorts, particularly as we try to break the bandwidth limit for iGPUs. Currently an iGPU shares DDR4 memory bandwidth with the cores, which heavily caps how performant iGPUs can be. A stack of HBM would solve that problem and has the potential to be used as a victim cache (or something) by the CPU cores. I'm probably giving Intel/AMD too much credit; such a thing might exist one day in a mobile form factor that does away with DDR and a discrete card altogether, but it's unlikely to exist in a desktop form factor.

2020-02-02, 11:35 #120
Xyzzy
"Mike" Aug 2002
5·23·67 Posts

Quote:
Originally Posted by LaurV
In their quest to make larger memories with smaller dimensions and a cheaper price, manufacturers decreased the RAM cells to such small, micronic dimensions that a cell is no longer able to hold its information for a long time, so the memory cell needs a periodic "refresh". This means "somebody" must read the content of the memory every few milliseconds, and write it back. If you found a zero, write a zero. If you found a one, write a one. If you forget to do that, after a few more milliseconds the content of the cell is lost (the static charge stored there discharges through the parasitic circuits of the cell) and the memory will contain unreliable, random data.
That is called "dynamic" memory, as opposed to "static" memory, where the cells are big enough to retain the electric charge as long as power is applied to them, and no refresh cycle is needed.

We use a memory testing program that does this sub-test:

Quote:
Bit fade test, 2 patterns
The bit fade test initializes all of memory with a pattern and then sleeps for 5 minutes (or a custom user-specified time interval). Then memory is examined to see if any memory bits have changed. All-ones and all-zeros patterns are used.

Do you think the memory can hold its contents that long? We've never experienced an error with this particular sub-test.

2020-02-02, 11:45 #121
mackerel
Feb 2016 UK
389 Posts

Quote:
Originally Posted by M344587487
I can see HBM potentially becoming an L4 of sorts
2020-10-25 11:18:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3784313201904297, "perplexity": 2484.1361205196363}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888931.67/warc/CC-MAIN-20201025100059-20201025130059-00524.warc.gz"}
http://www.gradesaver.com/a-dolls-house/q-and-a/does-the-wonderful-thing-represent-an-unrealistic-fantasy-56233
# Does the wonderful thing represent an unrealistic fantasy? Explore the argument and conclusion of the play.
2017-02-20 00:06:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8992936015129089, "perplexity": 2337.3236988292865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170286.6/warc/CC-MAIN-20170219104610-00047-ip-10-171-10-108.ec2.internal.warc.gz"}
https://quantiki.org/journal-article/interference-single-photons-emitted-byentangled-atoms-free-space-arxiv171202105v1
# Interference of single photons emitted by entangled atoms in free space. (arXiv:1712.02105v1 [quant-ph])

The generation and manipulation of entanglement between isolated particles has precipitated rapid progress in quantum information processing. Entanglement is also known to play an essential role in the optical properties of atomic ensembles, but fundamental effects in the controlled emission and absorption from small, well-defined numbers of entangled emitters in free space have remained unobserved. Here we present the control of the spontaneous emission rate of a single photon from a pair of distant, entangled atoms into a free-space optical mode. Changing the length of the optical path connecting the atoms modulates the emission rate with a visibility $V = 0.31 \pm 0.10$ determined by the degree of entanglement shared between the atoms, corresponding directly to the concurrence $\mathcal{C_{\rho}}= 0.27 \pm 0.03$ of the prepared state. This scheme, together with population measurements, provides a fully optical determination of the amount of entanglement. Furthermore, the large sensitivity of the interference phase evolution points to applications of the presented scheme in high-precision gradient sensing.
2018-01-18 04:15:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.603750467300415, "perplexity": 2088.6707992878733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887065.16/warc/CC-MAIN-20180118032119-20180118052119-00216.warc.gz"}
http://mathhelpforum.com/advanced-algebra/192327-find-all-inner-products-such-x-tx-0-a-print.html
# Find all "Inner Products" such that <x, Tx> = 0.

• Nov 20th 2011, 10:57 AM
TaylorM0192
Find all "Inner Products" such that <x, Tx> = 0.
Hello,

I came across this problem while preparing for one of my midterms, and it irks me that I still can't find a solution.

Given a linear operator T: R^2 -> R^2, rotation by pi/2 (i.e. T(x, y) = (-y, x)), find all inner products < , > such that for all vectors x in R^2, <x, Tx> = 0.

It is obvious that the standard inner (dot) product is one such inner product. A few others could certainly be imagined (e.g. scalar multiples of the dot product). But we are, of course, asked to find all possible cases.

I know that all inner products on finite-dimensional inner product spaces are represented by a Hermitian matrix G (G = G*) which satisfies the additional condition [x]*G[x] > 0 for all nonzero vectors x in V, with respect to some basis. Conversely, when we have such a G satisfying the above, it always represents an inner product on V with respect to a certain basis B. The inner product is of course given as <x, y> = [y]*[G][x], where [y]* is the conjugate transpose of the coordinate vector of y represented in the basis B, likewise for [x] without the conjugation/transpose. Since we are working over R, the conjugation can be removed, so we have <x, y> = [y]G[x].

The condition imposed on the inner product is <x, Tx> = [Tx]G[x] = 0. Thus, it seems to me that to solve the problem we must find all possible G which satisfy this equation. But the problem I face is that even if I were able to use this equation to find such G, how do I account for the representation of x and Tx in the basis with respect to which G represents an inner product? Needless to say, my attempts down this path of reasoning haven't led me to a solution.

If anyone could help extend this process to get the solution, or propose a different approach, I would appreciate it!
• Nov 20th 2011, 02:02 PM
Jose27
Re: Find all "Inner Products" such that <x, Tx> = 0.
$\langle x,Ay\rangle= \langle A^*x,y \rangle = \langle x, y \rangle _1$ characterizes all inner products, when $A$ varies over positive (symmetric) operators on $V$. Your hypothesis says $\langle Ax, Tx \rangle =0$ for all $x$, which means that $Ax$ must be orthogonal to the orthogonal complement of $x$, and since you're in two dimensions this is just $Ax=\lambda_x x$. By linearity of $A$ it's not difficult to see that $\lambda_x=\lambda_y$ for all $x,y\in \mathbb{R}^2$. Since $A$ must be positive, this means $\lambda >0$, and so $A=\lambda I$ and all inner products are positive multiples of the usual one.
• Nov 20th 2011, 02:21 PM
Deveno
Re: Find all "Inner Products" such that <x, Tx> = 0.
since we are dealing with R^2, G is a symmetric matrix. moreover, G must be positive-definite, that is, if (x,y) is not (0,0), and

G =
[a b]
[b c],

$\begin{bmatrix}x&y\end{bmatrix} \begin{bmatrix}a&b\\b&c\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix} > 0$

so $ax^2 + 2bxy + cy^2 > 0$

since (x,0) and (0,y) are not (0,0) for x,y ≠ 0, we must have a,c > 0. moreover, by "completing the squares" we see that:

$ax^2 + 2bxy + cy^2 = a\left(x+\frac{b}{a}y\right)^2 + \left(\frac{ac - b^2}{a}\right)y^2$,

so we must also have, in addition to a,c > 0, that $|b| < \sqrt{ac}$.

now, in addition, we require that <x,Tx> = 0, that is: $x^TGTx = 0$

so that: $(c - a)xy + b(x^2 - y^2) = 0$ for all x,y in R.

if we choose x = 1, y = 0, we see that b = 0.
if we choose x = y, we see that a = c, so the only possible candidates are matrices of the form aI, for a > 0, that is to say: POSITIVE scalar multiples of the usual inner product.

• Nov 20th 2011, 07:55 PM TaylorM0192
Re: Find all "Inner Products" such that <x, Tx> = 0.
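To double-check the conclusion numerically, here is a small sympy sketch (my code, not part of the thread) that imposes $[Tx]^T G[x] = 0$ identically on a symmetric $G$ and recovers $b=0$, $a=c$:

```python
import sympy as sp

a, b, c, x, y = sp.symbols('a b c x y', real=True)
G = sp.Matrix([[a, b], [b, c]])    # candidate Gram matrix of the inner product
T = sp.Matrix([[0, -1], [1, 0]])   # rotation by pi/2
v = sp.Matrix([x, y])

# <v, Tv> = (Tv)^T G v must vanish for all x, y
expr = sp.expand(((T * v).T * G * v)[0])
coeffs = sp.Poly(expr, x, y).coeffs()           # [b, c - a, -b]
print(sp.solve(coeffs, [a, b, c], dict=True))   # [{b: 0, c: a}]  =>  G = a*I
```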
2016-09-27 14:12:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892071008682251, "perplexity": 444.08139802921227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661087.58/warc/CC-MAIN-20160924173741-00019-ip-10-143-35-109.ec2.internal.warc.gz"}
https://hal.inria.fr/inria-00071426
A max-plus finite element method for solving finite horizon deterministic optimal control problems

Abstract: We introduce a max-plus analogue of the Petrov-Galerkin finite element method, to solve finite horizon deterministic optimal control problems. The method relies on a max-plus variational formulation, and exploits the properties of projectors on max-plus semimodules. We obtain a nonlinear discretized semigroup, corresponding to a zero-sum two players game. We give an error estimate of order $\sqrt{\Delta t}+\Delta x(\Delta t)^{-1}$, for a subclass of problems in dimension 1. We compare our method with a max-plus based discretization method previously introduced by Fleming and McEneaney.

Document type: Reports
https://hal.inria.fr/inria-00071426
Submitted on: Tuesday, May 23, 2006 - 5:29:21 PM
Last modification on: Friday, February 4, 2022 - 3:09:58 AM
Long-term archiving on: Sunday, April 4, 2010 - 10:12:48 PM
Identifiers: HAL Id: inria-00071426, version 1
Citation: Marianne Akian, Stéphane Gaubert, Asma Lakhoua. A max-plus finite element method for solving finite horizon deterministic optimal control problems. [Research Report] RR-5163, INRIA. 2004. ⟨inria-00071426⟩
2022-05-24 07:02:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17818887531757355, "perplexity": 3638.143962884682}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662564830.55/warc/CC-MAIN-20220524045003-20220524075003-00357.warc.gz"}
http://quant.stackexchange.com/questions/441/free-data-on-swap-options/446
# Free data on swap options

I am trying to analyze valuation methods for swaptions. Does anyone know of free example data for these OTC-traded securities?

- Thanks for the replies. I feared as much. – Owe Jessen Feb 11 '11 at 17:10
- This is true for swap data generally; it's hard to get (hence the OTC part of it). – Shane Feb 11 '11 at 17:35
- What I found was charted data on bloomberg.com, but it is a mystery to me what this chart really displays, e.g.: link USD SWAPTION NORM 3M2Y - but with no further explanations. I suppose it's some normalized option on 3m into 2y swaps, but without any information on the construction of the option part. Swap futures data are widely available, but I can't see a way to make the analogy work. – Owe Jessen Feb 11 '11 at 17:51
- The series USSN0C2 is the atm vol (bps) for the normally distributed interest rate model. The Black or lognormal interest rate model vol is in USSV0C2. You can mine the rest of the atm grid by varying xxY in USS[VN]xxY. – Erik Olson Jun 20 '11 at 20:47

I agree with Shane; I seriously doubt you're going to find publicly available swaption data for free. You might get some sample data with a textbook, or from a published journal article. If you only need one example, you can find one in the documentation for the BermudanSwaption function in the RQuantLib R package.

- Thanks for the RQuantLib plug. – Dirk Eddelbuettel Feb 11 '11 at 17:32

I will go out on a limb and say that this doesn't exist, unless you have a good relationship and can get some from your broker.

Just for future reference, if you are a student or academic, you can request market data on http://www.quantnet.com/forum/. Many of our members are Wall Street practitioners and, as a policy, they will provide such data to help with your research (hence students/academics only). I have been the conduit for many such transactions in the past. You will need to be precise about the type of data you need (series, ticker name, timeline, etc.). These helpers are not going to waste their time if you have no clue about what data you need.

- Thanks for the pointer – Owe Jessen Feb 11 '11 at 19:23

I think barchart just released a free market data api, but I doubt it has what you're looking for... freemarketdataapi.barchartondemand.com

- You should put such an answer as a comment, please convert it – lehalle Jun 3 at 22:48
- Is that data on swaptions? – Bob Jansen Jun 4 at 8:07
- No, I don't think so... You should go to a local university and find out if you can use their Bloomberg Terminal and you can pull the data from there in an excel spreadsheet and clean it later. – Jack Anderson Jun 4 at 15:40

You can try begging for them, if you're an academician.

- Well, consulting economist... it's not really academia, but close to that, and far removed from actual trading. I wouldn't use it in the market. – Owe Jessen Feb 11 '11 at 17:12
- If you're not academia, then I think it's almost hopeless to get them for free. – quant_dev Feb 11 '11 at 21:40
2015-11-28 11:21:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35896947979927063, "perplexity": 1738.9552199844545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398452385.31/warc/CC-MAIN-20151124205412-00242-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/19500-elimination-method.html
1. ## Elimination Method

I keep getting high fractions, I'm no good with fractions.

http://img295.imageshack.us/img295/1999/p2el3.jpg

I get: x = 43/73

Then plugging 43/73 in for x I get 43/73 + 9y = 21

Multiply everything by 73

43 + 657y = 1533
657y = 1490
y = I don't know, just seems like such an odd number to be in a book..

2. Originally Posted by fluffy_penguin
I keep getting high fractions, I'm no good with fractions.
http://img295.imageshack.us/img295/1999/p2el3.jpg
I get: x = 43/73
Then plugging 43/73 in for x I get 43/73 + 9y = 21
Multiply everything by 73
43 + 657y = 1533
657y = 1490
y = I don't know, just seems like such an odd number to be in a book..

umm. no, there are integer solutions to x and y. show us how you got the x so that we can see where you made your mistake

3. 8x - y = 22
x + 9y = 21

(9) 8x - 9y = 22
72x - 9y =

Oh I see it, I didn't multiply the 22 by 9 before I eliminated the y's. I got x = 3 now.

4. Originally Posted by fluffy_penguin
8x - y = 22
x + 9y = 21
(9) 8x - 9y = 22
72x - 9y =
Oh I see it, I didn't multiply the 22 by 9 before I eliminated the y's. I got x = 3 now.

yes, and y is?

5. Y = 2

6. Originally Posted by fluffy_penguin
Y = 2

correct, good job
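For a quick sanity check outside the thread, the system can be solved symbolically (a sketch of mine, not part of the original exchange):

```python
import sympy as sp

x, y = sp.symbols('x y')
print(sp.solve([8*x - y - 22, x + 9*y - 21], [x, y]))  # {x: 3, y: 2}

# Elimination done correctly: multiply the ENTIRE first equation by 9
# (right-hand side included) before adding:
#    72x - 9y = 198
#  +   x + 9y =  21
#  -----------------
#    73x      = 219   =>  x = 3, then y = (21 - 3)/9 = 2
```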
2017-01-23 15:57:25
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8051521182060242, "perplexity": 2876.1400560432703}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560282932.75/warc/CC-MAIN-20170116095122-00440-ip-10-171-10-70.ec2.internal.warc.gz"}
https://physics.stackexchange.com/questions/452567/how-to-bound-the-dimension-of-infinite-dimensional-hilbert-space
# How to bound the dimension of infinite dimensional Hilbert space?

Let's say I have a density matrix $$\rho$$ encoded into a physical system, and let's say I can have access to as many copies of $$\rho$$ as I want. I perform homodyne detection on the copies, in phase and out of phase, and the measurement results always lie in a circle, namely $$P^2+X^2\leq r$$, where $$P$$ and $$X$$ are real values and correspond to measurement results out of phase and in phase respectively. Based on these measurement results, what can I say about the dimension of the Hilbert space corresponding to $$\rho$$? Is it possible to bound the dimension?

Basically, the average value of $$P^2+X^2$$ is the average value of the energy, hence of $$4n+2$$ (with the suitable normalization). Therefore, what you describe is known as an energy test in the literature. Basically, if your state is essentially restricted to a superposition of the Fock states of $$0$$ to $$d-1$$ photons, we can say that it is restricted to a subspace of dimension $$d$$. The details of the energy test vary, and there is still ongoing research work to find the most efficient one. One recent example is in PRL 118 200501 / arXiv:1701.03393, by Anthony Leverrier, which does not assume a product state and uses heterodyne detection. Basically, if your state has non-negligible support on Fock states of $$d$$ photons or more, it will have a non-negligible probability to have a value of $$P^2+X^2$$ higher than $$r$$ and to be detected by the energy test.

• About the de Finetti approach: as in all verification problems, we can never guarantee that we weren't especially unlucky. All we can do is guarantee that a "bad" state has a low probability to pass our test. In our case, it would be something like: a state with a support higher than $η$ outside a given subspace of dimension $d$ has a probability lower than $ε=f(r, η, N)$ to pass the test. In a de Finetti test, this is ensured by symmetrizing the state (or performing a symmetric test) such that the optimal way to cheat is to send a symmetric state anyway. But we can never have $ε<1/N$ – Frédéric Grosshans Jan 10 at 11:50
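As a toy illustration of why high-photon-number support trips the test, here is a numerical sketch (my construction, not from the answer). It uses the fact that for heterodyne detection on the Fock state $|n\rangle$, the outcome $|\alpha|^2$ follows a Gamma$(n+1)$ law, so the tail probability beyond a radius $r$ grows with $n$:

```python
import numpy as np
from scipy.stats import gamma

r = 9.0  # test radius for P^2 + X^2 (arbitrary units for this toy model)
for n in (0, 2, 5, 10, 20):
    # Heterodyne outcomes on |n>: |alpha|^2 ~ Gamma(shape=n+1, scale=1)
    tail = gamma.sf(r, a=n + 1)
    print(f"n = {n:2d}: P(|alpha|^2 > {r}) = {tail:.4f}")
# States with significant support on high Fock states fail the energy test
# with rapidly increasing probability, which is what lets the test bound
# the effective dimension d.
```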
2019-03-23 21:03:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7973679900169373, "perplexity": 170.88094559263845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203021.14/warc/CC-MAIN-20190323201804-20190323223804-00435.warc.gz"}
https://math.stackexchange.com/questions/1247063/what-is-the-probability-that-out-of-a-deck-of-16-cards-that-you-will-be-dealt-2
# What is the probability that out of a deck of 16 cards that you will be dealt 2 cards with the same number?

Suppose you are playing with a set of 16 cards, which consists of 4 cards of each color (red, green, blue, and yellow), with each colored card having a different number on it (1, 2, 3, or 4). In other words, there is one red 1, one green 1, one blue 1, one yellow 1, one red 2, etc. Note: If you have played Uno before, these are just like numbered Uno cards. Suppose you are dealt two cards.

a. What is the probability the cards are a pair (two cards with the same number)? Briefly explain your reasoning.

b. What is the probability the two cards add up to more than 6? Briefly explain your reasoning.

• For part a) it doesn't matter what your first card is. After you get it, what is the probability the next card will match it? – turkeyhundt Apr 22 '15 at 18:11
• Please "briefly explain your attempted reasoning". – Sasha Apr 22 '15 at 18:12
• That's a hypergeometric distribution, I guess; it's $4\cdot\frac{C_4^2}{C_{16}^2}$ – Mohamad Misto Apr 22 '15 at 19:00

There are several approaches to this problem. One of the simplest to explain is that you can make a distribution chart to describe the scenario:

$$\begin{array}{c|c|c|c|c|c|c|} & \color{blue}{1} & \color{red}{1} & \color{green}{1} & \color{yellow}{1}&\color{blue}{2} & \color{red}{2}&\cdots\\ \hline \color{blue}{1} & 0 & x & x & \cdots\\ \hline \color{red}{1} & x & 0 & x & \cdots\\ \hline \color{green}{1} & x & x & 0 & x & \cdots\\ \hline \color{yellow}{1}&\vdots & \ddots & \ddots & \ddots\\ \hline \color{blue}{2} \\ \hline \color{red}{2}\\ \hline \vdots\\ \end{array}$$

(note: these probabilities are set up so that order matters. The final answer in this problem will not depend on whether you use a method where order matters or doesn't matter, so use what is comfortable.)

Noting that it is impossible to draw the same card twice in a row (hence the main diagonal being zeroes) and that all other entries are equiprobable and should add up to 1, you calculate $Pr(\text{first card is}~ a\cap \text{second card is}~ b) = \frac{1}{16\cdot 15}$. You may then note that there are $48$ squares with nonzero probability that correspond to having the same number in both the first draw and the second draw. Use the addition principle to complete the argument for part (a). For part (b), add up the probabilities of all squares whose corresponding outcomes have a total sum bigger than 6.

That approach is rather tedious and would require either drawing a very large chart, or only drawing a portion of it (like I did) and "reading" parts of the chart that you haven't written down yet. Instead, let us consider this via the multiplication and addition principles.

$Pr(\text{first two numbers match}) = Pr(\text{both numbers are 1}\cup \text{both numbers are 2}\cup\text{both numbers are 3}\cup\text{both numbers are 4})\\ =Pr(\text{both numbers are 1}) + Pr(\text{both numbers are 2}) + Pr(\text{both numbers are 3}) + Pr(\text{both numbers are 4})$

Here, I was able to split up the unions ($\cup$) by the addition principle: If $A\cap B=\emptyset$ then $Pr(A\cup B) = Pr(A)+Pr(B)$. More generally, for any $A$ and $B$ you have $Pr(A\cup B) = Pr(A)+Pr(B)-Pr(A\cap B)$.

Now, we wish to solve for $Pr(\text{both numbers are 1})$. This is the same as $Pr(\text{first is a 1}\cap \text{second is a 1})$. To do this, we use the multiplication rule: $Pr(A\cap B) = Pr(A)\cdot Pr(B|A)$. So, we look at what $Pr(\text{first is a 1})$ is and what $Pr(\text{second is a 1}|\text{first is a 1})$ is.
The probability the first card is a 1 is simply $\frac{4}{16}$ (since there are four 1's out of sixteen total cards), and the probability of the second being a 1 given that the first is a 1 as well is $\frac{3}{15}$ (since there will be three remaining 1's out of fifteen remaining cards total). Thus, we see $Pr(\text{both are 1}) = \frac{4}{16}\cdot\frac{3}{15}$. Through a similar argument we find that the rest of the probabilities are also $\frac{4}{16}\cdot\frac{3}{15}$. So,

$Pr(\text{both numbers the same}) = 4\cdot Pr(\text{both are 1}) = 4\cdot \frac{4}{16}\cdot \frac{3}{15} = \frac{3}{15}$

Once you become more comfortable with calculating probabilities, you may make simplifications which save a great deal more time. As mentioned as a hint in the comments, it doesn't actually matter what the first number drawn is. As such, $Pr(\text{both are same number}) = Pr(\text{second number is same as first}) = \frac{3}{15}$, since whatever number the first card was, there will be three remaining of that number out of 15 cards to take from.

For part (b), use the tools described here and note that $Pr(\text{sum of cards is more than 6}) = Pr(\text{sum of cards is 7}\cup \text{sum of cards is 8})$. Note further that you can break $Pr(\text{sum of cards is 7})$ up as $Pr(\text{first card is 3 and second is 4}\cup\text{first card is 4 and second is 3})$.

That's a hypergeometric distribution, I guess. The probability is $4\cdot\frac{C_4^2}{C_{16}^2}$. The explanation is that a pair consists of 2 cards of the same number, and each number comes in 4 colors, so there are $C_4^2$ ways of choosing two cards of a given number from its 4 differently colored copies, out of $C_{16}^2$ ways of choosing 2 cards at random from the deck of 16. Since there are 4 different numbers, we multiply by 4, because a pair can be formed from any of them. I hope that makes it clear.
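For a quick sanity check of both parts, here is a small brute-force enumeration (my sketch, not part of the thread); it confirms $P(\text{pair})=\frac{1}{5}$ and $P(\text{sum}>6)=\frac{44}{240}=\frac{11}{60}$:

```python
from fractions import Fraction
from itertools import permutations

# Deck: 4 colors x numbers 1..4
deck = [(color, number) for color in "RGBY" for number in (1, 2, 3, 4)]
draws = list(permutations(deck, 2))  # ordered two-card draws, 16*15 of them

pair = sum(1 for a, b in draws if a[1] == b[1])
big = sum(1 for a, b in draws if a[1] + b[1] > 6)

print(Fraction(pair, len(draws)))  # 1/5
print(Fraction(big, len(draws)))   # 11/60
```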
2019-12-14 10:59:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7691640257835388, "perplexity": 188.1350528448469}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540586560.45/warc/CC-MAIN-20191214094407-20191214122407-00336.warc.gz"}
https://zbmath.org/?q=an%3A1154.11339
## Chowla's conjecture. (English) Zbl 1154.11339

From the author's introduction: Let $$K=\mathbb Q(\sqrt{d})$$, where $$\mathbb Q$$ is the rational field and $$d$$ is a square-free positive integer, and let $$h(d)$$ be the class number of this field. In [S. Chowla and J. Friedlander, Glasg. Math. J. 17, 47–52 (1976; Zbl 0323.12006)], S. Chowla conjectured that $$h(4p^2+1)>1$$ if $$p>13$$ is an integer, which is proved to be true in this paper. The work here has its origins in the author's paper [Acta Arith. 106, No. 1, 85–104 (2003; Zbl 1154.11338)], in which he established a conjecture of H. Yokoi that $$h(p^2+4)>1$$ for $$p>17$$. In fact, essentially the same proof works with appropriate modifications. Note that Siegel's theorem tells us that the class number is greater than 1 once $$p$$ is sufficiently large, in both cases; however, Siegel's theorem does not indicate what "sufficiently large" means. Here the author determines that using a quite different method. His main result is as follows.

Theorem. If $$d$$ is square-free, $$h(d)=1$$ and $$d=4p^2+1$$ with some positive integer $$p$$, then $$p$$ is a square for at least one of the following moduli: $$q=5,7,41,61,1861$$ (that is, $$(d/q)=0$$ or 1 for at least one of the listed values of $$q$$).

Combining this with Fact B (which implies that if $$h(d)=1$$, then $$d$$ is a quadratic nonresidue modulo any prime $$r$$ with $$2<r<p$$) he obtains:

Corollary. If $$d$$ is square-free, and $$d=4p^2+1$$ with some integer $$p>1861$$, then $$h(d)>1$$.

As for the small solutions, in the same way as in the author's cited paper, he can easily prove (see Section 2) that $$h(4p^2+1)>1$$ if $$13<p\leq 1861$$. Hence Chowla's conjecture follows. He searches these final few cases to show that $$h(4p^2+1)=1$$ only for $$p=1,2,3,5,7,13$$. The main lines of the proof are the same as in the author's cited paper, but some modifications are needed; the most significant modifications can be found in the statement and proof of Lemma 1. The present proof also requires computer work.

### MSC:

11R11 Quadratic extensions
11R29 Class numbers, class groups, discriminants
11R42 Zeta functions and $$L$$-functions of number fields

### Citations:

Zbl 0323.12006; Zbl 1154.11338
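As a small illustration of the theorem's first formulation (my sketch, not part of the review), one can check with sympy that each $$p$$ with $$h(4p^2+1)=1$$ is indeed a square modulo at least one of the listed primes:

```python
from sympy import legendre_symbol

moduli = [5, 7, 41, 61, 1861]

def square_mod_some_q(p):
    # Legendre symbol (p/q) in {0, 1} means p is a square (or divisible) mod q
    return any(legendre_symbol(p, q) >= 0 for q in moduli)

# All p with class number h(4p^2 + 1) = 1 satisfy the criterion:
print([square_mod_some_q(p) for p in (1, 2, 3, 5, 7, 13)])
# -> [True, True, True, True, True, True]
```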
2022-09-26 15:45:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8842909336090088, "perplexity": 254.65831688804064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00542.warc.gz"}
http://spot.pcc.edu/math/orcca/knowl/example-order-of-operations.html
###### Example 1.4.6

Use the order of operations to simplify the following expressions.

1. $$10+2\cdot 3\text{.}$$ With this expression, we have the operations of addition and multiplication. The order of operations says the multiplication has higher priority, so execute that first: \begin{align*} 10+2\cdot 3\amp =10+\nextoperation{2\cdot 3}\\ \amp=10+\highlight{6}\\ \amp=\highlight{16} \end{align*}

2. $$4+10\div 2 - 1\text{.}$$ With this expression, we have addition, division, and subtraction. According to the order of operations, the first thing we need to do is divide. After that, we'll apply the addition and subtraction, working left to right: \begin{align*} 4+10\div2-1\amp=4+\nextoperation{10\div2}-1\\ \amp=\nextoperation{4+\highlight{5}}-1\\ \amp=\highlight{9}-1\\ \amp=\highlight{8} \end{align*}

3. $$7-10+4\text{.}$$ This example only has subtraction and addition. While the acronym PEMDAS may mislead you to do addition before subtraction, remember that these operations have the same priority, and so we work left to right when executing them: \begin{align*} 7-10+4\amp=\nextoperation{7-10}+4\\ \amp=\highlight{-3}+4\\ \amp=\highlight{1} \end{align*}

4. $$20\div 4\cdot 5\text{.}$$ This expression has only division and multiplication. Again, remember that although PEMDAS shows "MD," the operations of multiplication and division have the same priority, so we'll apply them left to right: \begin{align*} 20\div 4\cdot 5\amp=\nextoperation{20\div 4} \cdot 5\\ \amp=\highlight{5}\cdot5\\ \amp=\highlight{25} \end{align*}

5. $$(6+7)^2\text{.}$$ With this expression, we have addition inside a set of parentheses, and an exponent of $$2$$ outside of that. We must compute the operation inside the parentheses first, and after that we'll apply the exponent: \begin{align*} (6+7)^2\amp= (\nextoperation{6+7})^2\\ \amp= \highlight{13}^2 \\ \amp= \highlight{169} \end{align*}

6. $$4(2)^3\text{.}$$ This expression has multiplication and an exponent. There are parentheses too, but no operation inside them. Parentheses used in this manner make it clear that the $$4$$ and $$2$$ are separate numbers, not to be confused with $$42\text{.}$$ In other words, $$4(2)^3$$ and $$42^3$$ mean very different things. Exponentiation has the higher priority, so we'll apply the exponent first, and then we'll multiply: \begin{align*} 4(2)^3 \amp= 4\nextoperation{(2)^3}\\ \amp= 4(\highlight{8})\\ \amp= \highlight{32} \end{align*}
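These same precedence rules are what programming languages implement; a quick sketch in Python (my addition, not part of the original lesson) mirrors each example:

```python
# Python applies the same precedence rules, left to right within a tier
print(10 + 2 * 3)      # 16   (multiplication before addition)
print(4 + 10 / 2 - 1)  # 8.0  (division first, then add/subtract left to right)
print(7 - 10 + 4)      # 1    (same-tier operations run left to right)
print(20 / 4 * 5)      # 25.0
print((6 + 7) ** 2)    # 169  (parentheses first, then the exponent)
print(4 * 2 ** 3)      # 32   (exponent before multiplication)
```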
2018-11-12 20:13:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9886375069618225, "perplexity": 1737.999993234806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741087.23/warc/CC-MAIN-20181112193627-20181112215627-00525.warc.gz"}
https://nervanasystems.github.io/distiller/install.html
# Distiller Installation

These instructions will help get Distiller up and running on your local machine. You may also want to refer to these resources:

Notes:
- Distiller has only been tested on Ubuntu 16.04 LTS, and with Python 3.5.
- If you are not using a GPU, you might need to make small adjustments to the code.

## Clone Distiller

Clone the Distiller code repository from github:

$ git clone https://github.com/NervanaSystems/distiller.git

The rest of the documentation that follows assumes that you have cloned your repository to a directory called distiller.

## Create a Python virtual environment

We recommend using a Python virtual environment, but that, of course, is up to you. There's nothing special about using Distiller in a virtual environment, but we provide some instructions, for completeness.

Before creating the virtual environment, make sure you are located in directory distiller. After creating the environment, you should see a directory called distiller/env.

### Using virtualenv

If you don't have virtualenv installed, you can find the installation instructions here. To create the environment, execute:

$ python3 -m virtualenv env

This creates a subdirectory named env where the python virtual environment is stored, and configures the current shell to use it as the default python environment.

### Using venv

If you prefer to use venv, then begin by installing it:

$ sudo apt-get install python3-venv

Then create the environment:

$ python3 -m venv env

As with virtualenv, this creates a directory called distiller/env.

### Activate the environment

The environment activation and deactivation commands for venv and virtualenv are the same.

NOTE: Make sure to activate the environment before proceeding with the installation of the dependency packages:

$ source env/bin/activate

## Install the package

Finally, install the Distiller package and its dependencies using pip3:

$ cd distiller
$ pip3 install -e .

This installs Distiller in "development mode", meaning any changes made in the code are reflected in the environment without re-running the install command (so no need to re-install after pulling changes from the Git repository).

PyTorch is included in the requirements.txt file, and will currently download PyTorch version 1.0.1 for CUDA 9.0. This is the setup we've used for testing Distiller.
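After installation, a quick Python check (my suggestion, not part of the official docs) confirms the pinned PyTorch dependency is in place:

```python
import torch

print(torch.__version__)          # the note above expects 1.0.1
print(torch.cuda.is_available())  # False is expected on CPU-only machines
```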
2020-09-18 13:36:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3614601790904999, "perplexity": 7117.6064776116855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187899.11/warc/CC-MAIN-20200918124116-20200918154116-00142.warc.gz"}
https://www.projecteuclid.org/euclid.nmj/1221656780
## Nagoya Mathematical Journal

### Some numerical criteria for the Nash problem on arcs for surfaces

Marcel Morales

#### Abstract

Let $(X, O)$ be a germ of a normal surface singularity, $\pi : \tilde X \to X$ be the minimal resolution of singularities and let $A = (a_{i, j})$ be the $n \times n$ symmetrical intersection matrix of the exceptional set of $\tilde X$. In an old preprint Nash proves that the set of arcs on a surface singularity is a scheme $\mathcal{H}$, and defines a map $\mathcal{N}$ from the set of irreducible components of $\mathcal{H}$ to the set of exceptional components of the minimal resolution of singularities of $(X, O)$. He proved that this map is injective and asked if it is surjective. In this paper we consider the canonical decomposition $\mathcal{H} = \bigcup_{i=1}^{n} \bar{\mathcal{N}_{i}}$:

• For any couple $(E_{i}, E_{j})$ of distinct exceptional components, we define the Numerical Nash condition $(NN_{(i, j)})$. We have that $(NN_{(i, j)})$ implies $\bar{\mathcal{N}_{i}} \not\subset \bar{\mathcal{N}_{j}}$. In this paper we prove that $(NN_{(i, j)})$ is always true for at least half of the couples $(i, j)$.
• The condition $(NN_{(i, j)})$ being true for all couples $(i, j)$ with $i \not= j$ characterizes a certain class of negative definite matrices, which we call Nash matrices. If $A$ is a Nash matrix then the Nash map $\mathcal{N}$ is bijective. In particular our results depend only on $A$ and not on the topological type of the exceptional set.
• We recover and improve considerably almost all results known on this topic, and our proofs are new and elementary.
• We give infinitely many other classes of singularities where the Nash Conjecture is true.

The proofs are based on my old work [8] and on Plénat [10].

#### Article information

Source: Nagoya Math. J., Volume 191 (2008), 1-19.
Dates: First available in Project Euclid: 17 September 2008
https://projecteuclid.org/euclid.nmj/1221656780
Mathematical Reviews number (MathSciNet): MR2451219
Zentralblatt MATH identifier: 1178.14004

#### Citation

Morales, Marcel. Some numerical criteria for the Nash problem on arcs for surfaces. Nagoya Math. J. 191 (2008), 1--19. https://projecteuclid.org/euclid.nmj/1221656780

#### References

• J. Denef and F. Loeser, Germs of arcs on singular varieties and motivic integration, Inv. Math., 135 (1999), 201--232.
• J. Fernandez-Sanchez, Equivalence of the Nash conjecture for primitive and sandwiched singularities, Proc. Amer. Math. Soc., 133 (2005), 677--679.
• H. Grauert, Uber modifikationen und exceptionnelle analytische Mengen, Math. Annalen, 146 (1962), 331--368.
• G. Gonzalez-Sprinberg and M. Lejeune-Jalabert, Families of smooth curves on surface singularities and wedges, Annales Polonici Mathematici, LXVII.2 (1997), 179--190.
• S. Ishii and J. Kollar, The Nash problem on arc families of singularities, Duke Math. J., 120 (2003), no. 3, 601--620.
• M. Lejeune-Jalabert, Courbes tracées sur un germe d'hypersurface, Amer. J. of Math., 112 (1990), 525--568.
• M. Lejeune-Jalabert and A. Reguera, Arcs and wedges on sandwiched surface singularities, Amer. J. of Math., 121 (1999), 1191--1213.
• M. Morales, Clôture intégrale d'idéaux et anneaux gradués Cohen-Macaulay, Géométrie algébrique et applications, La Rabida 1984 (J-M. Aroca et al., eds.), Hermann, pp. 15--172.
• J. F. Nash Jr., Arcs structure of singularities, Duke Math. J., 81 (1995), no. 1, 31--38.
• C. Plénat, A propos du problème des arcs de Nash, Ann. Inst. Fourier, 55 (2005), no. 3, 805--823.
• C. Plénat, Résolution du problème des arcs de Nash pour les points doubles rationnels $D_n$, Thèse Univ. Paul Sabatier, Toulouse, 2004.
• C. Plénat and P. Popescu-Pampu, A class of non-rational surface singularities for which the Nash map is bijective, Bulletin Soc. Math. France, to be published.
• A. Reguera, Families of arcs on rational surface singularities, Manuscripta Math., 88 (1995), 321--333.
• A. Reguera, A curve selection lemma in spaces of arcs and the image of the Nash map, Compos. Math., 142 (2006), no. 1, 119--130.
2019-10-21 19:06:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707164525985718, "perplexity": 1224.7843037522239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987781397.63/warc/CC-MAIN-20191021171509-20191021195009-00451.warc.gz"}
https://epiga.episciences.org/7221
## Cédric Bonnafé ; Alessandra Sarti - Complex reflection groups and K3 surfaces I

epiga:6573 - Épijournal de Géométrie Algébrique, February 25, 2021, Volume 5 - https://doi.org/10.46298/epiga.2021.volume5.6573

Complex reflection groups and K3 surfaces I
Authors: Cédric Bonnafé ; Alessandra Sarti

We construct here many families of K3 surfaces that one can obtain as quotients of algebraic surfaces by some subgroups of the rank four complex reflection groups. We find in total 15 families with at worst $ADE$ singularities. In particular we classify all the K3 surfaces that can be obtained as quotients by the derived subgroup of the previous complex reflection groups. We prove our results by using the geometry of the weighted projective spaces where these surfaces are embedded and the theory of Springer and Lehrer-Springer on properties of complex reflection groups. This construction generalizes a previous construction by W. Barth and the second author.

Volume: Volume 5
Published on: February 25, 2021
Accepted on: February 25, 2021
Submitted on: June 17, 2020
Keywords: Mathematics - Algebraic Geometry
2022-01-23 02:46:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4821242094039917, "perplexity": 814.1784022049818}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00092.warc.gz"}
https://dss.iq.harvard.edu/blog/archive/all/201608
# Extracting content from .pdf files

One common question I get as a data science consultant involves extracting content from .pdf files. In the best-case scenario the content can be extracted to consistently formatted text files and parsed from there into a usable form. In the worst case the file will need to be run through an optical character recognition (OCR) program to extract the text.

## Overview of available tools

For years pdftotext from the Poppler utilities has been a standard tool for the best-case scenario.
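As a minimal example of that best-case workflow (my sketch; the file name report.pdf is just a placeholder), pdftotext can be driven from Python:

```python
import subprocess

# -layout preserves the physical layout of each page, which helps when
# parsing columns or tables out of the resulting text file.
subprocess.run(["pdftotext", "-layout", "report.pdf", "report.txt"], check=True)

with open("report.txt", encoding="utf-8") as f:
    text = f.read()
print(text[:500])
```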
2017-06-29 03:40:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18580123782157898, "perplexity": 1541.6007252563306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323864.76/warc/CC-MAIN-20170629033356-20170629053356-00534.warc.gz"}
https://asmedigitalcollection.asme.org/computationalnonlinear/article/17/12/121004/1146634/Boundary-Transformation-Vectors-A-Geometric-Method
## Abstract

Chaotic signals have long held promise as a means of excitation in structural health monitoring applications, but methods to process the structural response and infer damage are limited in number and effectiveness. Here, an alternative geometric methodology is presented that is based on measuring the boundary deformation of a system attractor as parameters change. This technique involves sampling the boundaries of two system attractors: one with nominal parameters and one with varied parameters, and then computing boundary transformation vectors (BTVs) between them. These vectors encode information about how the system has changed. This method allows damage level as well as type/location to be simultaneously quantified in simulated structures, and represents a major step toward making chaotic excitation a more practical choice for structural health monitoring.

## 1 Introduction

Being able to monitor the condition of a structure is important in applications ranging from civil infrastructure to rotating machinery. Such structural health monitoring may be concerned with detecting the existence, type, location, and severity of damage [1,2]. Identifying damage in dynamic systems is often accomplished using active, vibration-based methods. Any damage to the system is theoretically reflected in changes to the vibration response, so exciting a system and monitoring its response allows quantification of damage, subject to changes in response being measurable and correlated to a specific damage type/location as necessary.

In linear systems, there are many ways of implementing vibration-based structural health monitoring, principal among these being methods for analyzing changes in natural frequencies [3] and mode shapes [4,5]. Other, more exotic ways of monitoring changes involve examining flexibility [6], using time series analysis [7], or using wavelet analysis [8]. Helpful reviews and details regarding modern computational algorithms for optimization and modeling are provided by Das et al. [9] and Gomes et al. [10].

However, there are also many systems which are inherently nonlinear, or which become nonlinear once they have incurred damage. As an example, a beam which develops a breathing crack would, in many cases, no longer be able to be analyzed as a linear system. For these nonlinear systems, methods of linear analysis perform poorly and alternate analyses are currently limited [11], necessitating the development of additional methods. Two linked issues critical to this endeavor are the choice of structural excitation and the method of processing the dynamic response.

Currently, the most effective methods of nonlinear structural health monitoring excite (or interrogate) a structure using a chaotic signal and then measure and analyze the chaotic structural response via the machinery used for predicting chaotic time series to infer the presence and severity of damage. One of the earliest works in this area by Todd et al. [12] determined rules for how the Lyapunov exponents of the chaotic excitation should relate to the eigenvalues of the structure, and used a nonlinear time-prediction scheme called average local attractor variance ratio to detect damage. Researchers connected to Todd developed many other ways of detecting system changes involving fractal dimension [13], state space prediction [14], and nonlinear cross prediction error [15]. These methods [16] were applied to mass-spring systems, beam systems, loosening bolts, and composite structures.
A subset of these methods was also later revisited with hyperchaotic, instead of chaotic, excitation [17,18]. Other groups developed similar methods at about the same time, prominent among these being phase space warping [19–22] and sensitivity vector fields [23,24]. In general, this class of methods is effective and represents one of the few areas in which the search for practical applications of chaos theory succeeded. (Early in the study of chaos, investigators were anticipating many such applications [25] but few panned out.)

The advantages of using chaotic interrogation, including broadband excitation, potential tuning of excitation to structural application, and sensitive dependence on initial conditions, are well established. What is less clear is whether methods used to analyze nonlinear dynamic response can be improved or extended. Nearly all of the methods cited above adopt nonlinear time series prediction as an underlying algorithm in conjunction with a particular metric to assess system damage. While this type of analysis has proven effective at establishing the existence and quantifying the severity of damage, it has proven ineffective in determining the location or type of damage. Nonlinear time series prediction is poorly adapted to this task because a change in any system parameter influences the direction of exponential growth of the chaotic trajectories, making it difficult to discriminate between different parameter changes. As a result, only one parameter change can be reliably quantified at a time.

Moreover, nonlinear time series prediction requires making many modeling choices, and it is usually unclear which will be best for a given application. The type of local model is one such choice (linear versus nonlinear neighborhoods, weighting schemes, etc.), and each model type typically also requires tuning additional parameters related to neighborhood size, time horizon, and other factors. These are often set using a trial-and-error process, which is why many of the studies cited above only make predictions one time-step ahead. Both the complexity and limitations of nonlinear time series prediction make it clear that complementary methods may be beneficial in order to extract the maximum useful information from a nonlinear structural response.

For this reason, several methods attempting to directly measure deformation of the system attractor have recently been developed. Instead of making trajectory predictions, these methods rely on measures of density or distribution to characterize the attractor and any changes it may undergo. The earliest attempt at this type of analysis appears to be by Liu and Chelidze [26], who evaluated system changes using the first moment of Poincare section data. Carroll and Byers [27,28] developed another method that compares the probability density of attractor data binned in histograms to infer system changes. Samadani et al. [29] also used density, but in a different way: estimating the density associated with a particular trajectory in a process termed phase space topology. Because these methods are specifically designed for inferring system changes, they generally require less fine-tuning than their predictive counterparts. However, these methods are still limited to quantifying only one parameter change at a time. In this paper, a new method is presented that uses chaotic excitation in combination with geometric, shape-based analysis of the boundary of a chaotic attractor to detect and quantify damage.
Changes in the boundary of an attractor produce rich information about how parameters of the system are changing. The specific way this information is encoded is via boundary transformation vectors (BTVs). BTVs capture how parts of the boundary of an attractor must morph to align with the boundary of another, deformed attractor. Because this method works directly with attractor shape, it is better able to capture information about different types or locations of damage. BTVs will change in particular ways for one type/location of damage, while changing in other ways for a second, different type/location of damage. Additionally, damage level can be inferred using regions of BTVs that scale roughly proportionally to the damage. This combination represents a major benefit compared to past methods of damage detection using chaotic interrogation, whether in the nonlinear time series prediction or geometric deformation classes.

The remainder of the paper presents work showing how BTVs in conjunction with chaotic interrogation can be used to infer system damage. The initial sections summarize key ideas for performing chaotic interrogation and provide an algorithm for computing BTVs. Additional sections exhibit examples of BTVs being used to infer damage in simulated mass-spring-damper and cantilever beam systems. A summary along with directions for future investigation close the paper.

## 2 Methods

### 2.1 Chaotic Interrogation.

Chaotic interrogation is the use of a chaotic signal to excite a structure. The structure serves as a filter for the signal, altering it based on the properties of the system and any system variations, such as structural damage. By measuring the response of the structure, a second chaotic signal that encodes structural changes can be captured and used to infer damage.

For the dynamics of the chaotic interrogation and the modes of the structure to interact, allowing changes in any structural parameters to be reflected in the system response, the Lyapunov spectrum of the structure and the excitation need to overlap. The Lyapunov exponents of a system describe the exponential rates at which system perturbations increase or decrease in different state space directions as time advances: a positive exponent indicates expansion; a negative exponent indicates contraction. The Lyapunov exponents of a structure are the real parts of the structure's eigenvalues. For instance, a linear structure described in standard state space form

$\dot{x}=Ax+Bu$ (1)

where $x$ is a vector of the states of the system, A is the state matrix, B is the input matrix, and $u$ contains the system excitation, will have Lyapunov exponents given by the real parts of the eigenvalues of A. For typical structures, the Lyapunov exponents are negative, forming the spectrum

$0\geq\lambda_1^S>\lambda_2^S>\cdots>\lambda_N^S$ (2)

Here S is used to indicate these Lyapunov exponents belong to the structure.

Any excitation will have its own Lyapunov spectrum. For an excitation to be chaotic, one Lyapunov exponent must be positive, and the remainder of the exponents will be either zero or negative. Thus, the Lyapunov spectrum for a chaotic excitation is

$\lambda_1^E>0\geq\lambda_2^E\geq\cdots\geq\lambda_M^E$ (3)

Here E is used to indicate these Lyapunov exponents belong to the excitation.
For the Lyapunov spectra to overlap, so that structural changes are reflected in the dynamic response, at minimum the most negative Lyapunov exponent of the excitation must be less than the largest Lyapunov exponent of the structure

$\lambda_M^E<\lambda_1^S$ (4)

This necessary overlap of the structural and excitation Lyapunov spectra in forming the combined spectrum of the system is illustrated in Fig. 1. If $\lambda_M^E$ becomes increasingly negative and less than structural exponent $\lambda_i^S$, then the modal participation of the structure increases as the ith mode becomes involved in the dynamics. An excitation time-scaling parameter δ can be used to control the spread of the excitation Lyapunov exponents and hence the spectral overlap.

In addition to this necessary requirement, it is also desirable, for the purposes of analysis, to have the attractor of the excited structure exist in a low-dimensional space. The dynamics of the excited structure will exist in a space whose dimension is described by the Kaplan–Yorke conjecture

$D=j+\frac{\sum_{i=1}^{j}\lambda_i}{|\lambda_{j+1}|}$ (5)

where D is the fractal dimension of the dynamics and j is the largest number of exponents of the combined spectra that can be added before the sum becomes negative. Here, the combined spectrum is given by

$\lambda_i=(\lambda_j^S,\ j=1,2,\ldots,N;\ \lambda_k^E,\ k=1,2,\ldots,M)$ (6)

$\text{such that } \lambda_1>\lambda_2>\cdots>\lambda_{N+M}$ (7)

Thus, if a chaotic signal is used to excite a structure, for the lowest-dimensional response, it is necessary that

$|\lambda_1^E|<|\lambda_1^S|$ (8)

because this ensures that the dimension of the resulting attractor has the form

$D=2+\frac{\lambda_1^E}{|\lambda_1^S|}$ (9)

Obviously, if $\lambda_1^E$ becomes larger than $|\lambda_1^S|$, then the dimension of the attractor will increase, which will result in a more complicated attractor that may be more difficult to analyze.

One approach to examining attractors is via Poincare sections. These are 2D slices of the attractor that can be constructed by forming a plane in the state space and recording data points when the trajectories of the system cross the plane. Another, sometimes more convenient, way of creating Poincare sections is to record successive maxima of a measured signal and then populate the 2D plane using data points

$\mathrm{point}(i)=[x_{\text{local max }i+1},\ x_{\text{local max }i}]$ (10)

which is equivalent to creating a plane where $\dot{x}_i=0$. In this work, Poincare sections are constructed using the successive maxima formulation for all analyses to provide a standard way of examining the attractor.
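Both quantities introduced above are easy to compute in practice. Here is a short sketch (my code, not the authors'): the Kaplan–Yorke estimate of Eqs. (5) and (9), and a Poincare section built from successive maxima per Eq. (10):

```python
import numpy as np

def kaplan_yorke(spectrum):
    """Kaplan-Yorke dimension estimate; assumes at least one partial sum >= 0."""
    lams = np.sort(np.asarray(spectrum, dtype=float))[::-1]
    sums = np.cumsum(lams)
    j = np.nonzero(sums >= 0)[0].max() + 1   # largest j with sum still nonnegative
    return j + sums[j - 1] / abs(lams[j])

def poincare_from_maxima(x):
    """Pairs [x_max(i+1), x_max(i)] of successive local maxima of a sampled signal."""
    x = np.asarray(x, dtype=float)
    idx = np.nonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    peaks = x[idx]
    return np.column_stack([peaks[1:], peaks[:-1]])

# Combined spectrum: excitation exponents (0.0140, 0) over a structural -0.128
print(kaplan_yorke([0.0140, 0.0, -0.128]))   # 2 + 0.0140/0.128, approx 2.11
```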
### 2.2 Boundary Transformation Vectors.

Boundary transformation vectors describe the deformation that the boundary of one attractor would need to undergo in order to match the boundary of a second attractor. To encode this information, the shapes of two attractor boundaries are compared using a geometrical, shape-based transformation developed by Belongie et al. [30] called the shape context, which has been adapted to attractor boundary matching [31]. By comparing cataloged BTVs for known system damage to BTVs for unknown damage, unknown damage can be estimated. An overview of this idea is presented in Fig. 2.

The detailed process for comparing two attractors represented by Poincare sections, one section constructed using a signal associated with the undamaged structure (the reference section) and one section constructed using a signal associated with the damaged structure (the deformed section), is as follows:

(1) Select regions of the attractor to compare: In many instances, Poincare sections constructed from experimental data may consist of disconnected segments. Some of these segments may be sensitive to damage (exhibiting significant deformation) while others may be relatively insensitive. In such situations, selecting particular segments or regions to analyze can be advantageous; selected regions are termed point clouds of interest (POI).

(2) Find and smooth the boundary: The boundary of a POI can be captured using MATLAB's boundary.m function, which is based on work by Akkiraju et al. [32]. To use this function, a value of shrink factor s must be chosen. The shrink factor controls the compactness of the boundary and can be chosen based on the density and arrangement of Poincare section points. Because the tangent to each point on the boundary is also important to the algorithm, a quadratic polynomial is fit to groups of points of size ngroup centered on each boundary point, to obtain both a smoothed point and a tangent at that point. This process has been shown to be relatively insensitive to the size of ngroup [31]. Here ngroup = 5.

(3) Sample the boundary: Points along the smoothed boundary are sampled, being selected sequentially a minimum distance of 2R apart, where R is the radius of a "hard core." This ensures a relatively uniform sampling of the boundary and also preserves ordering, which is helpful for boundary matching.

(4) Calculate shape contexts: Sampled boundaries will be matched using the idea of shape context, which is calculated for each sampled point. The shape context is defined by

$h_i(k)=\#\{q\neq p_i:(q-p_i)\in \mathrm{bin}(k)\}$ (11)

where $h_i$ is a histogram of the relative locations of the q points making up the shape in comparison to a reference point $p_i$. This histogram uses 12 angular bins and 5 log-radial bins in a log-polar space to encode location information.

(5) Perform correspondence matching: The cost of matching points belonging to similar shapes can be calculated using the formula

$C_{ij}=(1-\beta_S)C_S(p_i,q_j)+\beta_S C_T(p_i,q_j)$ (12)

where $C_S$ is a cost based on matching shape context histograms, $C_T$ is a cost based on matching boundary tangents, and $\beta_S=0.25$ is a weighting factor determining the relative contributions of the two costs. The shape context cost is given by

$C_S(p_i,q_j)=\frac{1}{2}\sum_{k=1}^{K}\frac{\left[h_i(k)-h_j(k)\right]^2}{h_i(k)+h_j(k)}$ (13)

where $h_i(k)$ and $h_j(k)$ are the K-bin normalized histograms for points $p_i$ and $q_j$, which belong to the reference and deformed sections, respectively. The boundary tangent cost is given by

$C_T(p_i,q_j)=\frac{1}{2}\left\lVert\begin{pmatrix}\cos\theta_i\\ \sin\theta_i\end{pmatrix}-\begin{pmatrix}\cos\theta_j\\ \sin\theta_j\end{pmatrix}\right\rVert$ (14)

where $\theta_i$ and $\theta_j$ are the angles between the horizontal and the tangent vectors associated with points $p_i$ and $q_j$. Given a set of costs $C_{ij}$ between all pairs of points $p_i$ on one section and $q_j$ on the other, one seeks to minimize the total matching cost

$H(\pi)=\sum_i C_S(p_i,q_{\pi(i)})$ (15)

subject to the constraint that π is a permutation. The cost minimization problem to assign matching points is solved using MATLAB's function matchpairs.m, based on work by Duff and Koster [33]. If the cost matrix is not square, this function accepts a parameter ϵ specifying the cost of nonassignment of any row or column. For a normalized cost matrix in which the maximum value of any element is 1, ϵ can be chosen in the range 0.1–0.5 to produce many quality matches.
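For readers who want to prototype steps 4 and 5, here is a stripped-down sketch (my code; it omits the tangent cost $C_T$ and the hard-core sampling, and it substitutes scipy's linear_sum_assignment for MATLAB's matchpairs):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def shape_context(points, i, n_theta=12, n_r=5):
    """Log-polar histogram of point locations relative to points[i] (Eq. (11))."""
    d = np.delete(points, i, axis=0) - points[i]
    r = np.log(np.linalg.norm(d, axis=1) + 1e-12)
    theta = np.arctan2(d[:, 1], d[:, 0])
    h, _, _ = np.histogram2d(theta, r, bins=[n_theta, n_r])
    return h.ravel() / h.sum()

def match_boundaries(P, Q):
    """Chi-squared shape-context cost (Eq. (13)) plus optimal assignment (Eq. (15))."""
    HP = np.array([shape_context(P, i) for i in range(len(P))])
    HQ = np.array([shape_context(Q, j) for j in range(len(Q))])
    num = (HP[:, None, :] - HQ[None, :, :]) ** 2
    den = HP[:, None, :] + HQ[None, :, :] + 1e-12
    C = 0.5 * (num / den).sum(axis=2)
    rows, cols = linear_sum_assignment(C)   # minimizes the total matching cost
    return rows, cols, C[rows, cols].sum()
```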
## 3 Applications

In this section, two different simulated structures are analyzed using BTVs to illustrate their applicability to structural health monitoring.

### 3.1 A Mass-Spring-Damper System.

Mass-spring-damper systems are a common testbed for damage detection methods. Here, BTV analysis is demonstrated using the eight degree-of-freedom system shown in Fig. 3 and described by Eq. (1), with A and B matrices given by

$A = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix};\quad B = \begin{bmatrix} 0 \\ M^{-1} \end{bmatrix}$ (19)

where M, C, and K are the structure's 8 × 8 banded mass, damping, and stiffness matrices, respectively, and the forcing F is applied to the final mass $m_8$. Here, we adopt individual mass, damping, and stiffness values of $m_i = 0.01$, $c_i = 0.075$, and $k_i = 2.0$ for $i = 1 \ldots 8$ to be consistent with past work [12,17] using chaotic interrogation. Damage was simulated by decreasing the stiffness of various system elements $k_i$ in increments of 0.05 or increasing the damping of various system elements $c_i$ in increments of 0.025. The Lyapunov spectrum of the undamaged system is given by

$\lambda_i^S = -0.128,\ -1.12,\ -2.98,\ -5.45,\ -8.19,\ -10.8,\ -13.0,\ -14.5$ (20)

Fig. 3

Any number of chaotic systems could be used to excite this system, but here the forced Brusselator system is chosen:

$\dot{u}_1 = \delta\,\big(u_1^2 u_2 - (b+1)u_1 + a + A \sin u_3\big)$ (21)

$\dot{u}_2 = \delta\,\big(-u_1^2 u_2 + b u_1\big)$ (22)

$\dot{u}_3 = \omega$ (23)

where $\delta$ is a parameter used to scale time and tune the excitation's Lyapunov exponents for the desired overlap with the system, and the other parameters take the most common values for the Brusselator, $[a\ b\ A\ \omega] = [0.4\ 1.2\ 0.05\ 0.8]$. The value of $u_1$ was used as the forcing. Prior to tuning, the forced Brusselator's Lyapunov spectrum is given by

$\lambda_i^E = 0.0140,\ 0,\ -0.262$ (24)

This spectrum is desirable because the positive Lyapunov exponent $\lambda_1^E = 0.0140$ is small, allowing the scale $\delta$ to be adjusted a significant amount before Eq. (8) is violated.
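To make the interrogation concrete, the sketch below integrates the forced Brusselator of Eqs. (21)–(23) together with the structure of Eq. (19) as one augmented state-space model (SciPy assumed). The chain is taken here to be fixed at one end and free at the other, an assumption that reproduces the spectrum of Eq. (20); the exact topology follows Fig. 3, which is not reproduced in this extraction.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, m, c, k = 8, 0.01, 0.075, 2.0      # per-element values from the text

def chain(v):
    """Banded fixed-free chain matrix for a per-element value v."""
    T = 2 * v * np.eye(n) - v * np.eye(n, k=1) - v * np.eye(n, k=-1)
    T[-1, -1] = v                      # the last mass has only one neighbor
    return T

M, K, C = m * np.eye(n), chain(k), chain(c)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
# real parts of A's eigenvalues (one per conjugate pair) match Eq. (20)

a, b, Af, w, delta = 0.4, 1.2, 0.05, 0.8, 4.0   # Brusselator values; delta tunes overlap

def rhs(t, s):
    u1, u2, u3 = s[:3]                 # excitation states, Eqs. (21)-(23)
    du = [delta * (u1**2 * u2 - (b + 1) * u1 + a + Af * np.sin(u3)),
          delta * (-u1**2 * u2 + b * u1),
          w]
    F = np.zeros(n); F[-1] = u1        # u1 forces the final mass m8
    dx = A @ s[3:] + np.concatenate([np.zeros(n), np.linalg.solve(M, F)])
    return np.concatenate([du, dx])

s0 = np.concatenate([[0.5, 3.0, 0.0], np.zeros(2 * n)])
sol = solve_ivp(rhs, (0.0, 500.0), s0, max_step=0.05)
x8 = sol.y[3 + n - 1]                  # displacement of mass 8 (discard transient)
# poincare_from_maxima(x8) from the earlier sketch then yields sections like Fig. 4
```

Sweeping delta in this script is how the spectral overlap described above would be explored.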
Plots of Poincare sections for the undamaged system excited at different values of the scale $\delta$ are shown in Fig. 4. For $0.488 < \delta < 4.29$ one system mode participates, and for $4.29 < \delta < 11.4$ two system modes participate, but for $\delta > 9.12$ Eq. (8) is violated, leading to an integer increase in the dimension of the attractor.

Fig. 4

### 3.2 A Cantilever Beam.

Another easily simulated system for testing damage detection methods is a cantilever beam. Here, an Euler–Bernoulli beam with one end fixed and one end free is discretized using 20 finite elements, as shown in Fig. 5. To simulate this system, Eq. (1) can again be used, but here the system is cast in modal form so as to have control over the damping ratio associated with each mode, and thus the A and B matrices are given by

$A = \begin{bmatrix} 0 & I \\ -[\omega_i^2] & -2[\zeta_i\omega_i] \end{bmatrix};\quad B = \begin{bmatrix} 0 \\ M^{-1}V^{T} \end{bmatrix}$ (25)

where $[\omega_i^2]$ and $2[\zeta_i\omega_i]$ are 40 × 40 diagonal matrices built from the beam's squared natural frequencies and from twice the magnitudes of its Lyapunov exponents ($2\zeta_i\omega_i$), respectively. The term $M^{-1}V^{T}$ in the B matrix represents the inverse of the mass matrix of the system multiplied by the eigenvectors that transform the system into modal coordinates. Here, a density of $\rho = 1185\ \mathrm{kg/m^3}$ and a modulus of $E = 3.2\ \mathrm{GPa}$ were adopted to be consistent with acrylic material, and dimensions of length $L = 0.4128\ \mathrm{m}$, width $w = 0.1016\ \mathrm{m}$, and thickness $t = 0.0056\ \mathrm{m}$ were chosen to be reasonable for a tabletop experiment. The modal damping ratio was set to $\zeta_i = 0.01$ for all modes. Damage was simulated by reducing both the mass and stiffness of a given element by 5%, consistent with a loss of material at a given location along the beam. The Lyapunov spectrum of the beam without damage is given by

$\lambda_i^S = -0.5481,\ -3.4351,\ -9.6185,\ -18.8493,\ -31.1625, \ldots$ (26)

Fig. 5

The forced Brusselator described by Eqs. (21)–(23) was also used to excite this structure, for reasons similar to those described for the mass-spring-damper system. The value of $3u_1$ was used as the forcing and applied at the free end of the beam. In this case, for $2.09 < \delta < 13.1$ one system mode participates, and for $13.1 < \delta < 36.7$ two system modes participate, but for $\delta > 39.2$ Eq. (8) is violated, leading to an integer increase in the dimension of the attractor. Plots of Poincare sections for the undamaged beam excited at different values of the scale parameter $\delta$ are shown in Fig. 6.

Fig. 6
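The finite element discretization is not reproduced here, but the spectrum of Eq. (26) can be checked against the classical Euler–Bernoulli cantilever frequencies, since for light modal damping $\lambda_i^S \approx -\zeta_i\omega_i$. A short standalone sketch (NumPy; the well-known cantilever roots $\beta_n L$ are hardcoded):

```python
import numpy as np

E, rho = 3.2e9, 1185.0                 # acrylic modulus [Pa] and density [kg/m^3]
L, w, t = 0.4128, 0.1016, 0.0056       # beam dimensions [m]
zeta = 0.01

I = w * t**3 / 12.0                    # second moment of area of the cross section
mu = rho * w * t                       # mass per unit length

# roots beta_n * L of the cantilever characteristic equation cos(bL)cosh(bL) = -1
betaL = np.array([1.8751, 4.6941, 7.8548, 10.9955, 14.1372])
omega = betaL**2 * np.sqrt(E * I / (mu * L**4))   # natural frequencies [rad/s]

print(-zeta * omega)   # ~ [-0.548, -3.435, -9.618, -18.85, -31.16], cf. Eq. (26)

# modal state-space A of Eq. (25), restricted to the modes retained above
Om2, Z = np.diag(omega**2), np.diag(2 * zeta * omega)
A = np.block([[np.zeros_like(Om2), np.eye(len(omega))], [-Om2, -Z]])
# the real parts of A's eigenvalues reproduce the printed values above
```

The agreement with Eq. (26) confirms that the quoted material and geometric properties are self-consistent.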
## 4 Results

For the mass-spring-damper system, results were generated using two scale factors: δ = 4, corresponding to one structural mode participating, and δ = 8, corresponding to two structural modes participating. For scale δ = 4, the Poincare section is broken into discrete segments, so a POI must be selected for capturing the boundary; here the upper left portion of the section has been selected, as indicated in Fig. 4. For scale δ = 8, the Poincare section is continuous, so the entire section can be considered as the POI. Key parameters for generating BTVs were set to $s = 0.8$, $2R = 0.02$, and $\epsilon = 0.2$ for all boundaries related to this system.

Example BTVs are shown in Fig. 7 for scale δ = 4 for a change in stiffness $k_1$ from 2.0 to 1.9, with the Poincare section generated from measurements of the position $x_8$ associated with mass $m_8$. Here, the blue points belong to the reference Poincare section associated with the undamaged structure, the red points belong to the deformed Poincare section associated with the damaged structure, and the green lines are BTVs approximating the attractor deformation. Points that remain unmatched following the final iteration of correspondence matching are boxed. The overall number of points involved in each POI is also indicated on the figure.

Fig. 7

Figure 8 shows the magnitude and direction of BTVs generated for a range of variation in stiffness $k_1$ from 1.95 to 1.70, compared to the undamaged system. The independent axis is a parameterized location along the attractor boundary running from 0 to 1 (one plausible construction of this parameterization is sketched at the end of this section). Note that the BTVs generally show changes in magnitude proportional to the level of damage the system has incurred; here, regions 0.3–0.4 and 0.6–0.7 would allow an estimate of damage to be made simply by examining the magnitude of the BTVs.

Fig. 8

Similar figures can be generated for changes in system damping. Figure 9 shows example BTVs for scale δ = 4 for a change in the damping $c_1$ from 0.075 to 0.150, with the Poincare section again generated from measurements of the position $x_8$. The difference between this figure and Fig. 7 is that the directions of the BTVs have changed, because the damage was incurred in a different system parameter.

Fig. 9

In Fig. 10, the magnitude and direction of BTVs generated for a range of variation in damping $c_1$ from 0.100 to 0.200 are plotted. Again, for large regions of the plot, changes in the magnitude of the BTVs are proportional to the damage level. Note also that the angle of these vectors is different from when the stiffness was changed, as shown in Fig. 8, allowing the type of parameter change to be differentiated using vector direction.

Fig. 10

Boundary transformation vectors can also be generated using Poincare sections with additional structural modes participating. Figure 11 shows a plot equivalent to Fig. 9 but now with scale δ = 8 (associated with two structural modes participating).

Fig. 11

Because higher values of δ spread out the excitation Lyapunov spectrum and allow more overlap with the structural spectrum, richer patterns can occur. Figure 12 shows the magnitude and direction of BTVs associated with various changes in $k_1$ and $c_1$ at scale δ = 8. Note that at this scale, the directional signature for a change in stiffness versus damping is very distinct. Also note that while the regions of boundary relative location where BTV magnitude is proportional to damage are smaller, they are still present for both types of parameter change.

Fig. 12

Determining the location of structural damage is also possible. Figure 13 shows the magnitude and direction of BTVs associated with changes in stiffness at two locations: $k_1$ and $k_7$. Here, changes in the damage location give rise to distinct patterns in the directions of the BTVs. Taken together, Figs. 12 and 13 show that the BTVs are a rich descriptor of how the system attractor is deformed, allowing changes in both damage level and type/location to be inferred.

Fig. 13
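The magnitude and direction plots just described require assigning each BTV a boundary relative location between 0 and 1. The paper does not spell out its parameterization; normalized cumulative arc length along the ordered, sampled reference boundary is one natural choice, sketched below (NumPy; names illustrative).

```python
import numpy as np

def boundary_relative_location(boundary_pts):
    """Normalized cumulative arc length (0 to 1) along an ordered boundary."""
    seg = np.linalg.norm(np.diff(boundary_pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    return s / s[-1]

def btv_magnitude_direction(ref_pts, matched_def_pts):
    """Magnitude and angle of each BTV (reference point to its match)."""
    v = matched_def_pts - ref_pts
    return np.linalg.norm(v, axis=1), np.degrees(np.arctan2(v[:, 1], v[:, 0]))

# plotting the returned magnitude and angle against
# boundary_relative_location(ref_pts) reproduces the format of
# Figs. 8, 10, 12, 13, 15, and 16
```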
These results are not limited to the mass-spring-damper system. For the cantilever beam system, results were generated using two scale factors: δ = 10, corresponding to one structural mode participating, and δ = 30, corresponding to two structural modes participating. For both scale δ = 10 and scale δ = 30, the Poincare section is broken into discrete segments, so a POI must be selected for drawing the boundary; the upper left portion of the section at each scale has been selected, as indicated in Fig. 6. Key parameters for generating BTVs were set to $s = 0.8$, $2R = 0.0001$, and $\epsilon = 0.2$ for all boundaries related to this structure.

In Fig. 14, example BTVs for the cantilever beam are shown for scale δ = 10 for a loss of 20% of the mass and stiffness at element 5 (80% of the undamaged values). Here, the Poincare section is generated from a recording of the displacement at the free end of the beam, $z_{20}$ (element 20).

Fig. 14

Figure 15 shows the magnitude and direction of BTVs generated for incremental losses of material at element 5 of the cantilever beam. Here again, changes in the magnitude of the BTVs are proportional to the damage incurred for some regions of the plot. The gap in the data is due to the boundary around the POI not being completely tight, and therefore including a section without points connecting the two ends of the upside-down v-shape shown in Fig. 14.

Fig. 15

Figure 16 shows that if the scale is increased to δ = 30, so that two structural modes participate, then both the amount and the location of damage can be quantified in this system as well. In this figure, simulated material loss occurs at either element 5 or element 15 of the beam, leading to changes in BTV magnitude proportional to the level of damage but distinct BTV directions depending on where the damage occurs.

Fig. 16

## 5 Discussion

The results shown in the previous section demonstrate that BTVs generated via chaotic interrogation are a useful tool for assessing damage in structural health monitoring applications. Although the BTVs are not exact, they are systematic, reproducible, and offer damage estimates that are easy to understand and interpret. Moreover, because deformation of the attractor boundary is a rich descriptor of changes in the structural dynamics, BTVs can provide information about both damage type/location and amount, something not previously possible for methods based on chaotic interrogation.

Examining Fig. 8, we provide a brief example of how the magnitude of the BTVs can be used to assess the damage level according to the process outlined in Fig. 2. If data regarding parameter values and Poincare sections were available for the nominal system with $k_1 = 2.00$ and at least two of the variations, say $k_1 = 1.95$ and $k_1 = 1.85$, then Poincare sections and BTVs could be generated and used to create the magnitude and direction plots for those cases. These plots would be examined carefully to determine which boundary relative locations have BTV magnitudes that scale proportionally to system damage. Here, choosing boundary relative location 0.7 would make sense, although large regions of the plot would likely also be acceptable, and several locations might be sampled to reduce uncertainty in any estimate. For a new, unknown change in the structure represented by either $k_1 = 1.90$ (interpolation) or $k_1 = 1.80$ (extrapolation), the proportionality at these locations could be used to estimate the value of $k_1$ directly from the BTV magnitudes. At boundary relative location 0.7, the estimates based on the BTVs are $k_1 = 1.90$ (interpolation) and $k_1 = 1.80$ (extrapolation) to three digits of precision. These estimates match the actual levels of damage in the system. Similar damage estimates could be made using data from Figs. 10, 12, 13, 15, or 16. All of these magnitude and direction plots have at least some relative boundary locations where BTV magnitude changes are roughly proportional to damage level. In fact, using BTV magnitudes at proportional locations for different excitation scales, such as at scale δ = 4 (Fig. 8) and again at scale δ = 8 (Fig. 12 or 13), might be another way to sample and increase the reliability of any estimate, in addition to choosing several relative boundary locations.
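The estimation step just described amounts to fitting the proportional relation between cataloged |BTV| values and the damage parameter at one boundary location, then inverting it for a new measurement. A minimal sketch with placeholder magnitudes (the paper's actual values are not tabulated in the text):

```python
import numpy as np

# cataloged |BTV| at one proportional boundary location (e.g., 0.7 in Fig. 8)
# for known damage cases; the magnitudes below are hypothetical placeholders
k_known = np.array([2.00, 1.95, 1.85])
mag_known = np.array([0.000, 0.012, 0.036])

# least-squares line |BTV| = p[0]*k1 + p[1] through the catalog
p = np.polyfit(k_known, mag_known, 1)

def estimate_k1(mag_measured):
    """Invert the proportionality to estimate k1 from a measured |BTV|."""
    return (mag_measured - p[1]) / p[0]

# interpolation and extrapolation, as in the k1 = 1.90 / 1.80 example
print(estimate_k1(0.024), estimate_k1(0.048))   # -> 1.90, 1.80 for these placeholders
```

Sampling several boundary locations (or several excitation scales) and averaging the resulting estimates is the uncertainty-reduction strategy suggested in the text.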
As previously mentioned, one of the main advantages of using BTVs is to gain information about the type/location of damage in addition to inferring the damage level. This is possible even for minimal overlap of the structural and excitation Lyapunov spectra, as evidenced by the difference in the typical direction of the BTVs for changes in $k_1$ (Fig. 8) or $c_1$ (Fig. 10) in the mass-spring-damper system at scale δ = 4. However, when the excitation scale increases the spectral overlap, indications of type/location are often more pronounced. This is demonstrated in Fig. 12, where BTV direction differences for changes in $k_1$ and $c_1$ are very distinct for the mass-spring-damper system; in Fig. 13, where differences between $k_1$ and $k_7$ changes are also very clear for the mass-spring-damper system; and in Fig. 16, where material losses at elements 5 and 15 are clearly differentiated for the cantilever beam. These results again suggest that interrogating a structure at multiple scales may be a valuable strategy, not only for quantifying damage but also for identifying its type/location. Obviously, there are limits to how much the scale factor can be increased before the Poincare section loses structure and becomes less descriptive, but the results here show that for typical systems there may be a significant range to explore.

There are other limitations to using BTVs for damage detection. For one, it is possible that the damage incurred by a structure may become large enough that the shape of the Poincare section fundamentally changes. This is the case for the cantilever beam system if the stiffness/mass at element 5 drops below about 0.75 of the undamaged values. This suggests that caution is warranted in any case where an extrapolation is made based on a previous library of known BTVs, as an unknown boundary shape change could lie just beyond any previously investigated cases.

Another potential issue not addressed in this paper is noise. Noise will impact the results in any real system and may limit the accuracy of BTV-based estimates of damage. However, previous work examining BTVs generated from chaotic-system signals with additive noise [31] suggests that using simple, wavelet-based denoising will allow signals with signal-to-noise ratios of 100 or greater to be used reliably.
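The paper cites wavelet-based denoising without giving a recipe; a common soft-thresholding sketch is shown below, assuming the PyWavelets package. The 'db4' wavelet, the decomposition level, and the universal threshold are illustrative choices, not taken from Ref. [31].

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet='db4', level=6):
    """Soft-threshold wavelet denoising with the universal threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # noise scale estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]
```

Applied to a measured response before the successive-maxima Poincare construction, this kind of preprocessing is what makes the SNR-of-100 figure quoted above plausible.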
## 6 Conclusions and Future Work

This article demonstrates that boundary transformation vector analysis allows both the damage level and the damage type/location to be determined in simulated structural health monitoring applications involving linear systems. The ability to estimate both the amount and the location of damage using BTVs represents a significant advantage over past methods making use of chaotic interrogation. Because BTVs are rich descriptors of how the boundary of an attractor deforms, they can encapsulate multiple aspects of how the system dynamics change when a structure experiences damage. Work is currently underway to quantify damage in experimental linear systems using BTVs. However, since there are many ways to conduct structural health monitoring in linear systems, the ultimate goal is to develop BTVs into a robust method of estimating damage in nonlinear systems. To this end, future work will focus on simulations of nonlinear systems.

Additionally, the boundary transformation vector method should be expanded and refined. In this paper, Poincare sections were constructed using successive maxima of a structural signal because this is convenient when considering experimental data. However, Poincare sections constructed this way are likely not optimal in terms of sensitivity to damage; reconstructing the attractor using time-delay embedding and determining a method for choosing optimal sections might be one way to further improve the method. Furthermore, in this work a forced Brusselator system was selected as the excitation because it had a desirable Lyapunov spectrum; however, other choices of chaotic excitation might perform even better. Methods to design or choose chaotic excitations tailored to a specific structure would go a long way toward improving this and other damage detection methods based on chaotic interrogation.

## References

1. Doebling, S. W., Farrar, C. R., Prime, M. B., and Shevitz, D. W., 1996, "Damage Identification and Health Monitoring of Structural and Mechanical Systems From Changes in Their Vibration Characteristics: A Literature Review," Los Alamos National Laboratory, Los Alamos, NM, Technical Report No. LA-13070-MS.
2. Doebling, S. W., Farrar, C. R., and Prime, M. B., 1998, "A Summary Review of Vibration-Based Damage Identification Methods," Shock Vib. Dig., 30(2), pp. 91–105. DOI: 10.1177/058310249803000201.
3. Salawu, O. S., 1997, "Detection of Structural Damage Through Changes in Frequency: A Review," Eng. Struct., 19(9), pp. 718–723. DOI: 10.1016/S0141-0296(96)00149-6.
4. Pandey, A. K., Biswas, M., and Samman, M. M., 1991, "Damage Detection From Changes in Curvature Mode Shapes," J. Sound Vib., 145(2), pp. 321–332. DOI: 10.1016/0022-460X(91)90595-B.
5. Maia, N. M. M., Silva, J. M. M., Almas, E. A. M., and Sampaio, R. P. C., 2003, "Damage Detection in Structures: From Mode Shape to Frequency Response Function Methods," Mech. Syst. Signal Process., 17(3), pp. 489–498. DOI: 10.1006/mssp.2002.1506.
6. Pandey, A. K., and Biswas, M., 1994, "Damage Detection in Structures Using Changes in Flexibility," J. Sound Vib., 169(1), pp. 3–17. DOI: 10.1006/jsvi.1994.1002.
7. Sohn, H., and Farrar, C. R., 2001, "Damage Diagnosis Using Time Series Analysis of Vibration Signals," Smart Mater. Struct., 10(3), pp. 446–451. DOI: 10.1088/0964-1726/10/3/304.
8. Taha, M. M. R., Noureldin, A., Lucero, J. L., and Baca, T. J., 2006, "Wavelet Transform for Structural Health Monitoring: A Compendium of Uses and Features," Struct. Health Monit., 5(3), pp. 267–295. DOI: 10.1177/1475921706067741.
9. Das, S., Saha, P., and Patro, S. K., 2016, "Vibration-Based Damage Techniques Used for Health Monitoring of Structures: A Review," J. Civ. Struct. Health Monit., 6(3), pp. 477–507. DOI: 10.1007/s13349-016-0168-5.
10. Gomes, G. F., Mendez, Y. A. D., Alexandrino, P. L., da Cunha, S. S., and Ancelotti, A. C., 2019, "A Review of Vibration Based Inverse Methods for Damage Detection and Identification in Mechanical Structures Using Optimization Algorithms and ANN," Arch. Comput. Methods Eng., 26(4), pp. 883–897. DOI: 10.1007/s11831-018-9273-4.
11. Worden, K., Farrar, C. R., Haywood, J., and Todd, M., 2008, "A Review of Nonlinear Dynamics Applications to Structural Health Monitoring," Struct. Control Health Monit., 15(4), pp. 540–567. DOI: 10.1002/stc.215.
12. Todd, M. D., Nichols, J. M., Pecora, L. M., and Virgin, L. N., 2001, "Vibration-Based Damage Assessment Utilizing State Space Geometry Changes: Local Attractor Variance Ratio," Smart Mater. Struct., 10(5), pp. 1000–1008. DOI: 10.1088/0964-1726/10/5/316.
13. Nichols, J. M., Virgin, L. N., Todd, M. D., and Nichols, J. D., 2003, "On the Use of Attractor Dimension as a Feature in Structural Health Monitoring," Mech. Syst. Signal Process., 17(6), pp. 1305–1320. DOI: 10.1006/mssp.2002.1521.
14. Nichols, J. M., Todd, M. D., and Wait, J. R., 2003, "Using State Space Predictive Modeling With Chaotic Interrogation in Detecting Joint Preload Loss in a Frame Structure Experiment," Smart Mater. Struct., 12(4), pp. 580–601. DOI: 10.1088/0964-1726/12/4/310.
15. Todd, M. D., Erickson, K., Chang, L., Lee, K., and Nichols, J. M., 2004, "Using Chaotic Interrogation and Attractor Nonlinear Cross-Prediction Error to Detect Fastener Preload Loss in an Aluminum Frame," Chaos, 14(2), pp. 387–399. DOI: 10.1063/1.1688091.
16. Olson, C. C., Overbey, L. A., and Todd, M. D., 2005, "Sensitivity and Computational Comparison of State-Space Methods for Structural Health Monitoring," Proc. SPIE, 5768, pp. 241–252. DOI: 10.1117/12.598894.
17. Torkamani, S., Butcher, E. A., Todd, M. D., and Park, G., 2011, "Detection of System Changes Due to Damage Using a Tuned Hyperchaotic Probe," Smart Mater. Struct., 20(2), p. 025006. DOI: 10.1088/0964-1726/20/2/025006.
18. Torkamani, S., Butcher, E. A., Todd, M. D., and Park, G., 2012, "Hyperchaotic Probe for Damage Identification Using Nonlinear Prediction Error," Mech. Syst. Signal Process., 29, pp. 457–473. DOI: 10.1016/j.ymssp.2011.12.019.
19. Chelidze, D., Cusumano, J. P., and Chatterjee, A., 2002, "A Dynamical Systems Approach to Damage Evolution Tracking, Part 1: Description and Experimental Application," ASME J. Vib. Acoust., 124(2), pp. 250–257. DOI: 10.1115/1.1456908.
20. Cusumano, J. P., Chelidze, D., and Chatterjee, A., 2002, "A Dynamical Systems Approach to Damage Evolution Tracking, Part 2: Model-Based Validation and Physical Interpretation," ASME J. Vib. Acoust., 124(2), pp. 258–264. DOI: 10.1115/1.1456907.
21. Chelidze, D., and Cusumano, J. P., 2004, "A Dynamical Systems Approach to Failure Prognosis," ASME J. Vib. Acoust., 126(1), pp. 2–8. DOI: 10.1115/1.1640638.
22. Chelidze, D., and Cusumano, J. P., 2006, "Phase Space Warping: Non-Linear Time Series Analysis for Slowly Drifting Systems," Philos. Trans. R. Soc., A, 364(1846), pp. 2495–2513. DOI: 10.1098/rsta.2006.1837.
23. Epureanu, B. I., and Hashmi, A., 2006, "Parameter Reconstruction Based on Sensitivity Vector Fields," ASME J. Vib. Acoust., 128(6), pp. 732–740. DOI: 10.1115/1.2346692.
24. Sloboda, A. R., and Epureanu, B. I., 2013, "Sensitivity Vector Fields in Time-Delay Coordinate Embeddings: Theory and Experiment," Phys. Rev. E, 87(2), p. 022930. DOI: 10.1103/PhysRevE.87.022903.
25. Abarbanel, H. D. I., 1996, Analysis of Observed Chaotic Data, Springer-Verlag, New York.
26. Liu, M., and Chelidze, D., 2006, "Identifying Damage Using Local Flow Variation Method," Smart Mater. Struct., 15(6), pp. 1830–1836. DOI: 10.1088/0964-1726/15/6/037.
27. Carroll, T. L., 2015, "Attractor Comparisons Based on Density," Chaos, 25(1), p. 013111. DOI: 10.1063/1.4906342.
28. Carroll, T. L., and Byers, J. M., 2016, "Grid-Based Partitioning for Comparing Attractors," Phys. Rev. E, 93(4), p. 042206. DOI: 10.1103/PhysRevE.93.042206.
29. M., Kwuimy, C. A. K., and Nataraj, C., 2016, "Characterization of the Nonlinear Response of Defective Multi-DOF Oscillators Using the Method of Phase Space Topology (PST)," Nonlinear Dyn., 86(3), pp. 2023–2034. DOI: 10.1007/s11071-016-3012-x.
30. Belongie, S., Malik, J., and Puzicha, J., 2002, "Shape Matching and Object Recognition Using Shape Contexts," IEEE Trans. Pattern Anal. Mach. Intell., 24(4), pp. 509–522. DOI: 10.1109/34.993558.
31. Sloboda, A. R., 2021, "Boundary Transformation Representation of Attractor Shape Deformation," Chaos, 31(8), p. 083133. DOI: 10.1063/5.0061029.
32. Akkiraju, N., Edelsbrunner, H., Facello, M., Fu, P., Mucke, E., and Varela, C., 1995, "Alpha Shapes: Definition and Software," Proceedings of the 1st International Computational Geometry Software Workshop, Minneapolis, MN, Jan., pp. 63–66.
33. Duff, I. S., and Koster, J., 2001, "On Algorithms for Permuting Large Entries to the Diagonal of a Sparse Matrix," SIAM J. Matrix Anal. Appl., 22(4), pp. 973–996. DOI: 10.1137/S0895479899358443.
https://electronics.stackexchange.com/questions/138698/if-voltage-can-exist-while-current-is-0
# If voltage can exist while current is 0...?

In the following diagram voltage can exist without current since it's an open circuit:

(schematic created using CircuitLab)

Can a voltage source in an open circuit with 0 current affect a circuit like so:

(schematic created using CircuitLab)

So that the current would be equal to what is output from V-1, yet the voltage is increased to 15 V due to V-2? Because in this circuit voltage can increase even if it's an open circuit:

(schematic created using CircuitLab)

• The negative terminal of V2 is basically your reference point for measuring 15 volts. Since that reference point is not connected to anything else in the circuit, it won't affect its operation (ideal case). Nov 15 '14 at 17:16

The clue to voltage is in its proper name: Potential Difference. Voltage isn't a real thing; it's just the difference between two things.

Imagine, if you will, a tank of water. Say it's 1 m on each side, and you fill it to a depth of 10 cm. You lift the tank up off the ground to a height of 1 m, so the top of the water is 1.1 m above the ground. Now you make a hole in the bottom of the tank. The water flows out of the hole. Until you had made that hole (created a circuit), the difference in heights still existed.

Now imagine the same scenario, but you instead fill the tank right to the top. The top of the water is now 2 m above the ground. You have a greater potential difference between the top of the water and the ground. Make the same hole, and the water comes out with more force due to the increased weight of the water in the tank: the greater potential difference causes a greater current to flow.

• Thank you for that great illustration. But my concern is: in the second circuit, is 15 V true? Since voltage could exist even in an open circuit? Nov 15 '14 at 17:39
• @key No: in the 2nd circuit the 5 V supply is totally irrelevant to any current flow through the load from the 10 V source. If I stood on suitable insulating blocks with a multimeter in my hand across a 9 V battery and then someone wired me to 1 million volts (i.e., with a Van de Graaff generator), the meter would still read 9 volts. Nov 15 '14 at 17:52
• Ah, thanks. I assumed that I could "add" voltage without current. Yet it seems pointless. Thank you! Nov 15 '14 at 18:01

Your second diagram is deceiving because pointing to a single point and saying "N volts" is basically meaningless by itself. For voltage to make any sense, you need to specify two points. A voltmeter (for example) has two inputs, and you connect one of those inputs to each of those two points.

Now, it's true that in many circuit diagrams you'll see things like +5V or 3V3, or things like that. When you see this, however, it means there's some implied reference point, usually marked as the ground. So, a marking like "5V" really means "this point will be at 5 volts above ground". Here, the point you've marked as "V=15" depends on what you choose as the reference point (the ground, if you will). For the moment, let's leave the resistor out of the circuit, since it's (mostly) irrelevant to the question at hand. Instead, let's consider something like this:

(schematic created using CircuitLab)

If we measure between A and B, we'll see 10 V. If we measure between B and C, we'll see 5 volts. If we measure between A and C, we'll see 15 volts. The voltage between B and C makes no difference to the voltage between A and B (and vice versa).
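To make the arithmetic of that last example explicit, here is a tiny Python sketch (the node potentials are taken from the A-B-C example above, measured relative to node C):

```python
# potentials of the three nodes, relative to node C
potential = {"A": 15.0, "B": 5.0, "C": 0.0}   # volts

def voltage(p, q):
    """Potential difference V_pq = V(p) - V(q); only differences are physical."""
    return potential[p] - potential[q]

print(voltage("A", "B"))  # 10.0 -> across the 10 V source
print(voltage("B", "C"))  # 5.0  -> across the 5 V source
print(voltage("A", "C"))  # 15.0 -> the two sources in series
```

Shifting every entry in the dictionary by the same constant (choosing a different ground) changes none of the printed differences, which is the whole point.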
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-11-additional-topics-11-6-pie-bar-and-line-graphs-problem-set-11-6-page-503/22
## Elementary Algebra

$0.2\%$ increase.

The bank's interest rate for the month of May is 8%, while the interest rate for the month of June is 8.2%. This means that the change in the bank's interest rate between May and June is $8.2\% - 8.0\% = 0.2\%$.
http://math.stackexchange.com/tags/spectral-theory/new
# Tag Info

## New answers tagged spectral-theory

**0.** The discussion at that point in the book was focused on a normal complex matrix $A$ with spectrum (eigenvalues in this case) $\sigma(A)=\{\lambda_{1},\cdots,\lambda_{k}\}$. Normal matrices are unitarily equivalent to diagonal matrices. Equivalently, if $\{ \lambda_{j}\}_{j=1}^{k}$ are the distinct eigenvalues of $A$, then there is an orthonormal basis of ...

**0.** In order to show that your operator is compact, show that it maps a bounded sequence $\{ f_{n} \}_{n=1}^{\infty}\subset C[0,1]$ to an equicontinuous sequence of functions. So, let $\{ f_{n} \}_{n=1}^{\infty}$ satisfy $\|f_{n}\|_{C[0,1]}\le M$ for all $n$ and some fixed $M$; then, for every $\epsilon > 0$, show that there is a $\delta > 0$ such that ...

**1.** I understand your confusion. There's an error in the critical equation of the proof that, once corrected, makes everything obvious. I'll rewrite it for you: $$\left\|A\left(\frac{y_{q}}{\lambda_{q}}\right)-A\left(\frac{y_{p}}{\lambda_{p}}\right)\right\| = \ldots$$

**1.** If $A x_k = \lambda _k x_k$, $x_k \ne 0$, $k \in \{1,...,n \}$, and $\lambda _i \ne \lambda _j$ for $i \ne j$, then a vector of the form $$x = \sum _{k=1}^n a_k x_k, \ \ \ a_k \in \mathbb{R}$$ can be an eigenvector of $A$ only if $a_k=0$ for all $k \in \{1,...,n \}$ but one. Indeed, if $A x = \lambda x$, then the linear independence of $x_1,...,x_n$ ...

**1.** Your operator is already diagonalized, and it is clear that $Te_{n}=y_{n}e_{n}$ for $n \ge 1$. So the $\{ y_{n}\}$ are eigenvalues. The eigenvectors of a selfadjoint operator are orthogonal for different eigenvalues. Nothing except the $0$ vector is orthogonal to $e_{n}$ for all $n$. So there can't be any other eigenvalues. Now you know that the spectrum of $T$ ...

**2.** If $\Im\lambda \ne 0$, and $x \in X$, then $$\Im\lambda \|x\|^{2} = \Im((A-\lambda I)x,x),\\ |\Im\lambda|\|x\|^{2} \le |((A-\lambda I)x,x)|\le \|(A-\lambda I)x\|\|x\|,\\ |\Im\lambda|\|x\| \le \|(A-\lambda I)x\|.$$ So $A-\lambda I$ is injective for all $\lambda\notin\mathbb{R}$. The above inequality can be used to show that ...

**1.** The representation of the elements of $H^*$ is not needed. Let $\lambda$ be in the point spectrum of $A$. Then there is $0\neq x\in H=(H,(\cdot,\cdot))$ such that $Ax=\lambda x$ and, by the self-adjointness of $A$, $\lambda(x,x)=(\lambda x,x)=(Ax,x)=(x,Ax)=(x,\lambda x)=\overline{\lambda}(x,x)$. Hence $\lambda=\overline{\lambda}$, thus ...

**3.** This operator is sometimes called the unilateral shift. Suppose $Ax=\lambda x$ where $x=(x_1,\cdots,x_n,\cdots)$. If $(0,x_1,x_2,\cdots)=(\lambda x_1,\lambda x_2,\cdots)$, you can convince yourself that $\lambda$ must be zero. Also notice that obviously $\|A\|=1$. Now let $0<|\lambda|<1$ and consider $A-\lambda I$. Then if $(1,0,0,\cdots)=(A-\lambda\ \ldots$

**2.** Yes, because if $x$ is invertible and $\|y - x \| \le \|x^{-1}\|^{-1}$, $$y = x (1 - x^{-1} (x-y))$$ and $1 - x^{-1} (x - y)$ is invertible with $$(1 - x^{-1}(x-y))^{-1} = \sum_{j=0}^\infty (x^{-1} (x-y))^j$$ Moreover $$\|y^{-1} - x^{-1}\| \le \sum_{j=1}^\infty \|x^{-1}\|^{j+1} \|x - y\|^j = \dfrac{\|x^{-1}\|^2 \|x - y\|}{1- \|x-y\| \|x^{-1}\|}$$

**-1.** Due to the pointwise structure it formally holds: $$\sigma(F)=\bigcup_{x\in X}\sigma(F(x))$$ The difficulties arise as soon as additional requirements are tied upon the functions: 0) The formal inverse exists. 1a) The formal inverse is not necessarily bounded. 1b) The formal inverse is continuous due to the Neumann series. 2) The formal inverse is again ...

**1.** It seems fine to me: the only point to be careful of is to note that since you've assumed the normed field $K$ to be algebraically closed, its value group must be dense (it cannot be discrete, since if it were, there would be a uniformizer and it couldn't have $n$th roots). So for any $\epsilon > 0$, we can find some $a \in K^\times$ such that $\rho(A)\ \ldots$

**1.** I think very likely the question you might wish to be asking includes more structure than the question you literally asked... based on your example of a Laplacian. That is, your Hilbert space $H$ is really a Sobolev space $H^1$ on some compact Riemannian manifold. Then, yes, the Laplacian maps $H^1$ to $H^{-1}$ continuously, and $H^{-1}$ is the Hilbert space ...

**1.** For simplicity, assume $f(x,y)=f(y,x)$ is a real function. Because $f$ is in $L^{2}([0,1]\times[0,1])$, the integral operator $K$ given by $$Kg = \int_{0}^{1}f(x,y)g(y)\,dy$$ is a selfadjoint Hilbert-Schmidt integral operator on $L^{2}[0,1]$. So there is an orthonormal basis $\{\varphi_{n}\}_{n=1}^{\infty}$ consisting of real eigenfunctions ...

**0.** Okay, I think I have it. The strategy is to generalize the proof of "no eigenvalues implies product ergodic" which is usually given in standard texts when introducing the notion of weak mixing. The following is a proof I have written just before in a document: Let $E \subset L^2(X,\mu)$ be the set of eigenfunctions for $U_T$ that have constant absolute value ...

**2.** The functional calculus works as stated. However, the spectral mapping result that you want to prove is not true in general. For example, consider the operator $M$ of multiplication by $x$ on $L^{2}[0,1]$. The spectrum of $M$ is $[0,1]$. If you let $g(x)=x$ for $x \in [0,1/2)\cup(1/2,1]$ and $g(1/2)=50$, then $g(M)$ does not have $50$ in its spectrum. On the ...

**6.** Here's my solution: the function $\min \{ x, y \}$ can be written as follows: $$\min\{x, y \}= \begin{cases} y, & \mbox{if } 0 \le y \le x \\ x, & \mbox{if } x \le y \le 1 \end{cases}$$ so we found the form for $T$: $$Tf(x) = \int_0^x yf(y) \, dy + x \int_x^1 f(y) \, dy.$$ Now let $Tf = \lambda f$, $\lambda \ne 0$. So we ...

**1.** One possible description of this set is the following. Let $\Im(C)$ denote the image of the matrix $C$. Thus, $S(A)=\{B:\ \Im(AB)\subset\Im(B)\}$. Before proving it, notice that we can provide many examples of matrices in $S(A)$ with this description. For example, any matrix $B$ such that $\Im(A)\subset\Im(B)$ belongs to $S(A)$, because ...

**2.** If you have a bounded operator $A$, then the holomorphic functional calculus is always an option, and it is based on Cauchy's integral representation: $$f(A) = \frac{1}{2\pi i} \oint_{C} f(\lambda)\,\frac{1}{\lambda I-A}\,d\lambda = \frac{1}{2\pi i} \oint_{C} f(\lambda)(\lambda I-A)^{-1}\,d\lambda.$$ The contour $C$ is any simple ...

**2.** Physically, Hamiltonian operators in Quantum Mechanics should be semibounded, meaning that $(Ax,x) \ge M(x,x)$ for all $x\in\mathcal{D}(A)$ and for some fixed $M$. This has to do with energy considerations. Second order ODEs and PDEs, in order to be symmetric, are quadratic in nature, and usually end up being semibounded; again, this is related to ...

**0.** Suppose $\overline{G}$ is the complement of the graph $G$. Then $A(G)+A(\overline{G}) = J-I$ (where $J$ is the matrix with all entries equal to one). If $n=|V(G)|$, this implies that $\lambda(G)+\lambda(\overline{G}) \ge n-1$. We get equality here if $G$ is a regular self-complementary graph. In particular, if we take $G$ to be the Paley graph on ...

**0.** I'm no expert on this, so I might say stupid things, but let's have a go. Let me assume that $A$ is unital (with unit $\mathbb{1}$). The answer to your first question is positive, since $$\sigma(F)=\bigcup_{\omega\in\Omega} \sigma(F(\omega)).$$ Indeed, the inverse $h$ (if it exists) of the function $\lambda I-F$ satisfies $h(\lambda I-F)=(\lambda I-F)h\equiv\ \ldots$

**0.** I decided to make a separate answer for the Numerical Range question. They're different. Yes, the closure of the numerical range is the same as the closed convex hull of the spectrum. Numerical Range: Suppose $\lambda_{1},\lambda_{2} \in \sigma(A)$ with $\lambda_{1}\ne \lambda_{2}$. Using the spectral theorem, you can find sequences ...

**3.** Every bounded linear operator on a complex Banach space has non-empty spectrum $\sigma(B)$. A simple way to argue this is to assume the contrary and conclude that $(B-\lambda I)^{-1}$ is an entire function of $\lambda$ which vanishes at $\infty$; then Liouville's theorem gives the contradiction that $(B-\lambda I)^{-1}\equiv 0$ for all ...

**0.** Note that the spectrum of any operator can't be empty. I think what you mean is that the operator might not have an eigenvalue; an example is given here.

**0.** The fact that the range of $P$ is invariant under $A$ can be written as $PaP=aP$ for all $a\in A$. Now fix $a\in A$; since $a^*\in A$, $Pa^*P=a^*P$. Take adjoints and you get $PaP=Pa$. So we have shown that $Pa=aP$ for all $a\in A$.

**2.** Let's first check that $P^\perp$ has invariant range. Let $a \in A$ and let $\eta \in [A\xi_1]^\perp = (A\xi_1)^\perp$. Then, for any $b \in A$, $$\langle a\eta, b\xi_1 \rangle = \langle \eta, a^\ast b \xi_1 \rangle = 0,$$ where $a^\ast b \in A$ and hence $a^\ast b \xi_1 \in A\xi_1$, precisely because $A$ is a self-adjoint algebra of operators in $B(H)$ ...

**1.** You can find a hint at the Neumann series Wikipedia article. Furthermore, the question is maybe a duplicate; I think you can find the answer at math.se.

**3.** This is trivial from the definition of spectrum. If the spectral radius is less than one, then in particular $1$ is not in the spectrum, which means $I-A$ is invertible. Cameron Buie has answered a more interesting question.

**1.** $\|AB-BA\|=\|AB-A_{\alpha}B+A_{\alpha}B-BA\| \leq \|AB-A_{\alpha}B\|+\|A_{\alpha}B-BA\|$. But $A_{\alpha}B=BA_{\alpha}$, and $\|AB-A_{\alpha}B\| \leq \|B\|\,\|A-A_{\alpha}\|$. So $S'$ is closed in the norm topology as well. A "high-level" explanation can be given: $S'$ is a von Neumann algebra, every von Neumann algebra is a $C^*$-algebra, and so $S'$ ...

**2.** Let $C$ be a nonnegative matrix and $M(t)=M+tC$ (a primitive matrix). Then $\rho(M(t))$ is the maximal eigenvalue of $M(t)$. Let $\mathrm{spectrum}(M(t))=(\lambda_i(t))$ with $\lambda_1(t)>|\lambda_2(t)|\geq |\lambda_3(t)|\geq\cdots$. Since $\lim_{t\downarrow 0}M(t)=M(0)$, $\lim_{t\downarrow 0}\rho(M(t))=\rho(M)$. Moreover, when $t$ decreases, $\rho(M(t))$ decreases too. EDIT: (with ...

**1.** Injectivity of the Gelfand transform is equivalent to the assertion that characters separate points. This can be verified by using that, in the commutative case, characters are precisely the pure states and so they have to separate points since the states do.

**2.** Hint as rhetorical question: if it's isometric, and $\hat{x} = 0$, what does that tell you about $\lVert x\rVert$?

**1.** Any matrix is similar to its Jordan canonical form. The matrices implementing the "similarity" need not be unitary.

**2.** It seems that you are looking for intertwining operators.
http://elespiadigital.org/libs/qd9zstny/6y18e.php?tag=vertically-opposite-angles-form-a-linear-pair-true-or-false-4d2c0c
Linear Pair of Angles: a pair of adjacent angles formed by intersecting lines is called a linear pair of angles. The two angles of a linear pair are supplementary; they add up to 180°. In the figure formed by two intersecting lines, ∠1 & ∠2, ∠2 & ∠4, and ∠3 & ∠4 are linear pairs.

If two lines intersect each other, then the vertically opposite angles are equal. The Vertical Angles Theorem states that vertical angles, the angles that are opposite each other and formed by two intersecting straight lines, are congruent. Intersecting lines do not, in general, form supplementary vertical angles; instead, they create congruent vertical angles.

Adjacent angles share a side. A vertical angle and its adjacent angle are supplementary to each other: if two lines intersect and make an angle, say X = 45°, then the angle adjacent to X measures 180° − 45° = 135°. If the angle next to a vertical angle is given, it can be subtracted from 180° to get the measure of the vertical angle, because a vertical angle and its adjacent angle are supplementary.

Two angles whose measures sum to 90° are called complementary angles. For example, if x = 45°, then its complement is 90° − 45° = 45°. Two adjacent complementary angles form a right angle.

When two lines do not meet at any point in a plane, they are called parallel lines. Alternate exterior angles lie outside the parallel lines, on alternate sides of the transversal.

True or false:
1. Vertically opposite angles are always supplementary. False: a pair of vertically opposite angles are always equal; they are supplementary only when each is 90°.
2. The supplement of an obtuse angle is always acute. True.
3. Adjacent supplementary angles form a linear pair. True.
4. If a transversal intersects two lines and the corresponding angles are equal, then the two lines are parallel. True.
5. Two angles making a linear pair are always adjacent angles. True.
6. A linear pair may have two acute angles. False: a linear pair has either two right angles or one acute and one obtuse angle, because the angles forming a linear pair sum to 180°.
7. 30° is one-half of its complement. True: the complement of 30° is 60°, and 30° is one-half of 60°.

Exercises (use the fact that the sum of the measures of angles that form a linear pair is 180°):
- If m∠1 = 40°, then m∠2 = 140°.
- 18. ∠1 and ∠2 are a linear pair, and m∠1 = 51°. Find m∠2. (m∠2 = 180° − 51° = 129°.)
- If ray EF bisects ∠DEG with m∠DEF = 3x + 9 and m∠FEG = 5x + 1, then 3x + 9 = 5x + 1, so x = 4.
CBSE Previous Year Question Papers Class 10, CBSE Previous Year Question Papers Class 12, NCERT Solutions Class 11 Business Studies, NCERT Solutions Class 12 Business Studies, NCERT Solutions Class 12 Accountancy Part 1, NCERT Solutions Class 12 Accountancy Part 2, NCERT Solutions For Class 6 Social Science, NCERT Solutions for Class 7 Social Science, NCERT Solutions for Class 8 Social Science, NCERT Solutions For Class 9 Social Science, NCERT Solutions For Class 9 Maths Chapter 1, NCERT Solutions For Class 9 Maths Chapter 2, NCERT Solutions For Class 9 Maths Chapter 3, NCERT Solutions For Class 9 Maths Chapter 4, NCERT Solutions For Class 9 Maths Chapter 5, NCERT Solutions For Class 9 Maths Chapter 6, NCERT Solutions For Class 9 Maths Chapter 7, NCERT Solutions For Class 9 Maths Chapter 8, NCERT Solutions For Class 9 Maths Chapter 9, NCERT Solutions For Class 9 Maths Chapter 10, NCERT Solutions For Class 9 Maths Chapter 11, NCERT Solutions For Class 9 Maths Chapter 12, NCERT Solutions For Class 9 Maths Chapter 13, NCERT Solutions For Class 9 Maths Chapter 14, NCERT Solutions For Class 9 Maths Chapter 15, NCERT Solutions for Class 9 Science Chapter 1, NCERT Solutions for Class 9 Science Chapter 2, NCERT Solutions for Class 9 Science Chapter 3, NCERT Solutions for Class 9 Science Chapter 4, NCERT Solutions for Class 9 Science Chapter 5, NCERT Solutions for Class 9 Science Chapter 6, NCERT Solutions for Class 9 Science Chapter 7, NCERT Solutions for Class 9 Science Chapter 8, NCERT Solutions for Class 9 Science Chapter 9, NCERT Solutions for Class 9 Science Chapter 10, NCERT Solutions for Class 9 Science Chapter 12, NCERT Solutions for Class 9 Science Chapter 11, NCERT Solutions for Class 9 Science Chapter 13, NCERT Solutions for Class 9 Science Chapter 14, NCERT Solutions for Class 9 Science Chapter 15, NCERT Solutions for Class 10 Social Science, NCERT Solutions for Class 10 Maths Chapter 1, NCERT Solutions for Class 10 Maths Chapter 2, NCERT Solutions for Class 10 Maths Chapter 3, NCERT Solutions for Class 10 Maths Chapter 4, NCERT Solutions for Class 10 Maths Chapter 5, NCERT Solutions for Class 10 Maths Chapter 6, NCERT Solutions for Class 10 Maths Chapter 7, NCERT Solutions for Class 10 Maths Chapter 8, NCERT Solutions for Class 10 Maths Chapter 9, NCERT Solutions for Class 10 Maths Chapter 10, NCERT Solutions for Class 10 Maths Chapter 11, NCERT Solutions for Class 10 Maths Chapter 12, NCERT Solutions for Class 10 Maths Chapter 13, NCERT Solutions for Class 10 Maths Chapter 14, NCERT Solutions for Class 10 Maths Chapter 15, NCERT Solutions for Class 10 Science Chapter 1, NCERT Solutions for Class 10 Science Chapter 2, NCERT Solutions for Class 10 Science Chapter 3, NCERT Solutions for Class 10 Science Chapter 4, NCERT Solutions for Class 10 Science Chapter 5, NCERT Solutions for Class 10 Science Chapter 6, NCERT Solutions for Class 10 Science Chapter 7, NCERT Solutions for Class 10 Science Chapter 8, NCERT Solutions for Class 10 Science Chapter 9, NCERT Solutions for Class 10 Science Chapter 10, NCERT Solutions for Class 10 Science Chapter 11, NCERT Solutions for Class 10 Science Chapter 12, NCERT Solutions for Class 10 Science Chapter 13, NCERT Solutions for Class 10 Science Chapter 14, NCERT Solutions for Class 10 Science Chapter 15, NCERT Solutions for Class 10 Science Chapter 16, Intersecting Lines And Non-intersecting Lines, CBSE Important Questions For Class 7 Maths, CBSE Previous Year Question Papers Class 12 Maths, CBSE Previous Year Question Papers Class 10 Maths, ICSE 
Question: Vertically opposite angles form a linear pair. True or false?

Solution: False. When two lines intersect, the vertically opposite angles are equal in measure, whereas the two angles of a linear pair are supplementary (their measures sum to 180°). A linear pair consists of either two right angles or one acute and one obtuse angle, so a pair of equal vertical angles can form a linear pair only in the special case where all four angles at the intersection are right angles.

Theorem (vertical angle theorem): If two lines intersect each other, then the vertically opposite angles are equal.

Proof: Consider two lines AB and CD which intersect each other at O. Ray OC stands on line AB, so ∠AOC + ∠BOC = 180° (linear pair). Ray OB stands on line CD, so ∠BOC + ∠BOD = 180° (linear pair). Subtracting the second equation from the first gives ∠AOC = ∠BOD, and similarly ∠COB = ∠AOD.

Example: If two lines intersect and make an angle, say X = 45°, then the angle adjacent to X measures 180° - 45° = 135°, while the angle vertically opposite X is again 45°.

Definitions used above:
- Vertical (vertically opposite) angles: the two pairs of non-adjacent angles formed when two lines cross. "Vertical" refers to the shared vertex where the lines cross, not to an up/down direction.
- Linear pair: a pair of adjacent angles whose non-common sides form a straight line; the sum of their measures is 180°.
- Complementary angles: two angles with measures that sum to 90°. For example, the complement of a 45° angle is 90° - 45° = 45°.
- Parallel lines: lines in a plane that have no common intersection point, such as the line segments PQ and RS in the figure.
- Alternate angles: angles on opposite sides of a transversal cutting two lines; when the lines are parallel, alternate angles are equal, and corresponding angles (angles in corresponding positions on the same side of the transversal) are also equal.

More true/false practice:
- If two angles form a linear pair, then they are supplementary. (True)
- The supplement of an obtuse angle is always acute. (True)
- Two adjacent angles always form a linear pair. (False)
- A pair of vertical angles may also form a linear pair. (False, except when all four angles are right angles.)
- The angle between the directions North and West is 90°. (True)
- If a month has 28 days, then it is February. (False; every month has at least 28 days.)

Exercises (use the fact that the sum of the measures of angles that form a linear pair is 180°):
- ∠1 and ∠2 are a linear pair and m∠1 = 51°. Find m∠2. (Answer: 129°)
- ∠3 and ∠4 are a linear pair and m∠4 = 124°. Find m∠3. (Answer: 56°)
- m∠JQL and m∠LQK form a linear pair, so m∠JQL + m∠LQK = 180°.
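The subtraction step of the proof, restated in display form (a plain restatement of the argument above, with no new assumptions):

```latex
\begin{align*}
\angle AOC + \angle BOC &= 180^\circ && \text{(linear pair on line } AB\text{)}\\
\angle BOC + \angle BOD &= 180^\circ && \text{(linear pair on line } CD\text{)}\\
\therefore\ \angle AOC &= \angle BOD && \text{(subtracting the second from the first)}
\end{align*}
```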
2021-06-12 11:38:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5833005309104919, "perplexity": 1538.6098662625484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487582767.0/warc/CC-MAIN-20210612103920-20210612133920-00486.warc.gz"}
https://www.ias.ac.in/listing/bibliography/seca/G._A._Shah
• G A Shah

Articles written in Proceedings – Section A

• Size dependent resonances in the classical electromagnetic scattering

Some size-dependent resonances in the extinction efficiency of small metallic particles have been considered on the basis of exact calculations using the Mie theory of scattering. The results may be helpful in explaining some structures in the observed interstellar extinction curve and certain unidentified interstellar diffuse absorption bands.
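Extinction-efficiency curves of this kind are straightforward to reproduce numerically today. A sketch using the third-party miepython package; the refractive index below is an arbitrary illustrative value, not one taken from the paper:

```python
import numpy as np
import miepython  # pip install miepython

m = 1.5 - 1.0j                  # complex refractive index (illustrative only)
x = np.linspace(0.1, 10, 500)   # size parameter 2*pi*r/wavelength

# Mie efficiencies: extinction, scattering, backscatter, asymmetry factor
qext, qsca, qback, g = miepython.mie(m, x)

# report where the extinction efficiency peaks (a size-dependent resonance)
print(x[np.argmax(qext)], qext.max())
```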
2021-09-17 20:31:58
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8095242977142334, "perplexity": 2120.2344156030827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00411.warc.gz"}
http://www.gradesaver.com/textbooks/science/chemistry/chemistry-a-molecular-approach-3rd-edition/chapter-2-sections-2-1-2-9-exercises-problems-by-topic-page-82/88c
Chemistry: A Molecular Approach (3rd Edition), published by Prentice Hall

Chapter 2 - Sections 2.1-2.9 - Exercises - Problems by Topic: 88c

Answer: $3.982\times10^{22}\ atoms\ Pt$

Work Step by Step:

1. Use the molar mass of platinum and Avogadro's number as conversion factors in order to determine the number of atoms of platinum.

2. $12.899\ g\ Pt \times\frac{1\ mol\ Pt}{195.08\ g\ Pt}\times\frac{6.022\times10^{23}}{1\ mol\ Pt}= 3.982\times10^{22}\ atoms\ Pt$
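The same conversion as a quick Python check (values taken from the problem statement; Avogadro's number rounded as in the text):

```python
mass_g = 12.899        # grams of Pt given in the problem
molar_mass = 195.08    # g/mol for platinum
avogadro = 6.022e23    # atoms per mole

atoms = mass_g / molar_mass * avogadro
print(f"{atoms:.3e} atoms Pt")  # prints 3.982e+22
```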
2017-03-26 07:26:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6064834594726562, "perplexity": 2614.4643844027446}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189130.57/warc/CC-MAIN-20170322212949-00471-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/help-i-dont-understand-about-equailities-of-the-eqation.742088/
# Help! I don't understand the equalities of the equation.

1. Mar 7, 2014

### john.lee

1. The problem statement, all variables and given/known data

y' = -5xy, y = ?

3. The attempt at a solution

I solved it as follows:

dy/dx = -5xy
dy/y = -5x dx
∫(1/y) dy = ∫ -5x dx
ln(|y|) = -2.5x^2 + C

so, y = ±e^(-2.5x^2 + C) = ±K·e^(-2.5x^2)

But the answer is y = K·e^(-2.5x^2). How can I understand this? Should I just ignore the absolute value?

2. Mar 7, 2014

### LCKurtz

In your solution $K = \pm e^C$, which can be anything but zero. In the answer $K$ is unrestricted. So the only difference is that the answer includes $y=0$ and your solution doesn't. (It was missed when you divided by $y$.) The answers are the same once you include $y=0$ in yours.

3. Mar 7, 2014

### sa1988

Yep. The only reason the absolute value is used is to mark the fact that it's impossible to take ln() of a negative quantity; the thing in the brackets must be positive for the expression to make sense. The only time I can think of that you really need ± is when you have a value with an even power, such as x^2, x^4, x^-6, etc., because then x can be positive or negative and will still yield the same result when you raise it to that power. Hope that helps a little!
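A symbolic check of the general solution with SymPy; note that the integration constant C1 plays the role of K and may take any value, including 0, which recovers the y = 0 solution mentioned above:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# solve dy/dx = -5*x*y
sol = sp.dsolve(sp.Eq(y(x).diff(x), -5*x*y(x)), y(x))
print(sol)  # Eq(y(x), C1*exp(-5*x**2/2))
```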
2017-11-22 17:01:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7451627850532532, "perplexity": 896.3158719532898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806615.74/warc/CC-MAIN-20171122160645-20171122180645-00620.warc.gz"}
https://brilliant.org/discussions/thread/found-a-simple-result-somewhat-similar-to-rouths/
# Found a simple result, somewhat similar to Routh's Theorem :) !

Anyone interested in geometry is requested to proof-read both my theorem and proof. Please mention in the comments if anything went wrong in these papers, and suggest further improvements. Thanks!

Note by Karthik Venkata, 3 years ago

Sort by:

@Xuming Liang Any thoughts on this matter? - 2 years, 12 months ago

This is interesting, I will get back to you on this. - 2 years, 12 months ago
2018-09-25 06:10:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9992818236351013, "perplexity": 12411.11192802667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161098.75/warc/CC-MAIN-20180925044032-20180925064432-00050.warc.gz"}
https://dash.harvard.edu/browse?authority=a5c797e1c27b3796071e1bc9a4cd44be&type=author
Now showing items 1-20 of 293 • #### An adaptive reduction algorithm for efficient chemical calculations in global atmospheric chemistry models  (Elsevier BV, 2010) We present a computationally efficient adaptive method for calculating the time evolution of the concentrations of chemical species in global 3-D models of atmospheric chemistry. Our strategy consists of partitioning the ... • #### Air mass factor formulation for spectroscopic measurements from satellites: Application to formaldehyde retrievals from the Global Ozone Monitoring Experiment  (Wiley-Blackwell, 2001) We present a new formulation for the air mass factor (AMF) to convert slant column measurements of optically thin atmospheric species from space into total vertical columns. Because of atmospheric scattering, the AMF depends ... • #### Air-Sea Exchange in the Global Mercury Cycle  (American Geophysical Union, 2007) We present results from a new global atmospheric mercury model coupled with a mixed layer slab ocean. The ocean model describes the interactions of the mixed layer with the atmosphere and deep ocean, as well as conversion ... • #### Air-snow exchange of HNO3 and NOy at Summit, Greenland  (Wiley-Blackwell, 1998) Ice core records of NO3− deposition to polar glaciers could provide unrivaled information on past photochemical status and N cycling dynamics of the troposphere, if the ice core records could be inverted to yield concentrations ... • #### All-Time Releases of Mercury to the Atmosphere from Human Activities  (American Chemical Society (ACS), 2011) Understanding the biogeochemical cycling of mercury is critical for explaining the presence of mercury in remote regions of the world, such as the Arctic and the Himalayas, as well as local concentrations. While we have ... • #### Ammonia Emissions in the United States, European Union, and China Derived by High-Resolution Inversion of Ammonium Wet Deposition Data: Interpretation with a New Agricultural Emissions Inventory (MASAGE_NH3)  (Wiley-Blackwell, 2014) We use the adjoint of a global 3-D chemical transport model (GEOS-Chem) to optimize ammonia $(NH_3)$ emissions in the U.S., European Union, and China by inversion of 2005–2008 network data for $NH^+_4$ wet deposition ... • #### Annual Distributions and Sources of Arctic Aerosol Components, Aerosol Optical Depth, and Aerosol Absorption  (Wiley-Blackwell, 2014) Radiative forcing by aerosols and tropospheric ozone could play a significant role in recent Arctic warming. These species are in general poorly accounted for in climate models. We use the GEOS-Chem global chemical transport ... • #### Anthropogenic and natural contributions to tropospheric sulfate: A global model analysis  (Wiley-Blackwell, 1996) A global three-dimensional model is used to examine the export of anthropogenic sulfur from northern midlatitude continents and to assess the relative importance of anthropogenic and natural sources to sulfate levels in ... • #### Anthropogenic emissions in Nigeria and implications for atmospheric ozone pollution: A view from space  (Elsevier BV, 2014) Nigeria has a high population density and large fossil fuel resources but very poorly managed energy infrastructure. Satellite observations of formaldehyde (HCHO) and glyoxal (CHOCHO) reveal very large sources of anthropogenic ... 
• #### Anthropogenic emissions of highly reactive volatile organic compounds in eastern Texas inferred from oversampling of satellite (OMI) measurements of HCHO columns  (IOP Publishing, 2014) Satellite observations of formaldehyde (HCHO) columns provide top-down constraints on emissions of highly reactive volatile organic compounds (HRVOCs). This approach has been used previously in the US to estimate isoprene ... • #### Anthropogenic forcing on tropospheric ozone and OH since preindustrial times  (Wiley-Blackwell, 1998) A global three-dimensional model of tropospheric chemistry is used to investigate the changes in tropospheric O3 and OH since preindustrial times as a result of fuel combustion and industry, biomass burning, and growth in ... • #### Anthropogenic Impacts on Global Storage and Emissions of Mercury from Terrestrial Soils: Insights from a New Global Model  (Wiley-Blackwell, 2010) We develop a mechanistic global model of soil mercury storage and emissions that ties the lifetime of mercury in soils to the lifetime of the organic carbon pools it is associated with. We explore the implications of ... • #### Application of empirical orthogonal functions to evaluate ozone simulations with regional and global models  (Wiley-Blackwell, 2003) Empirical orthogonal functions are used together with standard statistical metrics to evaluate the ability of models with different spatial resolutions to reproduce observed patterns of surface ozone (O3) in the eastern ... • #### Aqueous-Phase Reactive Uptake of Dicarbonyls as a Source of Organic Aerosol Over Eastern North America  (Elsevier, 2009) We use a global 3-D atmospheric chemistry model (GEOS-Chem) to simulate surface and aircraft measurements of organic carbon (OC) aerosol over eastern North America during summer 2004 (ICARTT aircraft campaign), with the ... • #### Arctic Air Pollution: New Insights from POLARCAT-IPY  (American Meteorological Society, 2014) Given the rapid nature of climate change occurring in the Arctic and the difficulty for climate models to quantitatively reproduce observed changes such as sea ice loss, it is important to improve understanding of the ... • #### The Arctic Boundary Layer Expedition (ABLE 3A): July–August 1988  (Wiley-Blackwell, 1992) The Arctic Boundary Layer Expedition (ABLE 3A) used measurements from ground, aircraft, and satellite platforms to characterize the chemistry and dynamics of the lower atmosphere over Arctic and sub-Arctic regions of North ... • #### The Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) Mission: Design, Execution, and First Results  (European Geosciences Union, 2010) The NASA Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) mission was conducted in two 3-week deployments based in Alaska (April 2008) and western Canada (June–July 2008). Its ... • #### Asian chemical outflow to the Pacific in spring: Origins, pathways, and budgets  (Wiley-Blackwell, 2001) We analyze the Asian outflow of CO, ozone, and nitrogen oxides (NOx) to the Pacific in spring by using the GEOS-CHEM global three-dimensional model of tropospheric chemistry and simulating the Pacific Exploratory Mission-West ... 
• #### Asian outflow and trans-Pacific transport of carbon monoxide and ozone pollution: An integrated satellite, aircraft, and model perspective  (Wiley-Blackwell, 2003) Satellite observations of carbon monoxide (CO) from the Measurements of Pollution in the Troposphere (MOPITT) instrument are combined with measurements from the Transport and Chemical Evolution Over the Pacific (TRACE-P) ... • #### Atmosphere-biosphere exchange of CO2 and O3 in the central Amazon Forest  (Wiley-Blackwell, 1990) Measurements of vertical fluxes for CO2 and O3 were made at a level 10 m above the canopy of the Amazon forest during the wet season, using eddy correlation techniques. Vertical profiles of CO2 and O3 were recorded ...
2021-07-28 08:31:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2511173486709595, "perplexity": 8267.237593651582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153531.10/warc/CC-MAIN-20210728060744-20210728090744-00448.warc.gz"}
https://www.r-bloggers.com/2012/12/write-table-with-proper-column-number-in-the-header/
Did you notice that the file generated by write.table() in R is missing a tab (\t) in the top-left corner when row.names = TRUE (the default)? I found the solution here:

write.table(x, file = "filename.xls", sep = "\t", col.names = NA, row.names = TRUE)

Actually, the R documentation states it clearly: "By default there is no column name for a column of row names. If col.names = NA and row.names = TRUE a blank column name is added, which is the convention used for CSV files to be read by spreadsheets."
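A minimal runnable illustration (the data frame and the file name are made up for the example):

```r
df <- data.frame(a = 1:3, b = c("x", "y", "z"))

# col.names = NA adds the blank top-left header cell, so the header
# row lines up with the column of row names in the output file
write.table(df, file = "filename.xls", sep = "\t",
            col.names = NA, row.names = TRUE)
```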
2021-12-06 19:58:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3324734568595886, "perplexity": 2721.5851513656658}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363312.79/warc/CC-MAIN-20211206194128-20211206224128-00581.warc.gz"}
http://abailly.github.io/posts/multi-host-docker-net.html
# Multi-host Docker Networking

Posted on May 30, 2016

A while ago I grew the desire to experiment with implementing multi-host docker networking to deploy the Capital Match system. This system is made of several interlinked containers, and docker-compose does not (did not?) work across several hosts. It seemed to me the official solution based on docker-machine, swarm and a service registry was a bit complicated: our configuration is mostly static, e.g. the number, distribution and relationships between containers are known at deploy time. Hence I looked for a simpler solution, something that would be more networky. I am indebted to hashar for suggesting a GRE-based solution and to the following references for actual technical details:

I did some experiments in shell, jotted down a couple of notes in my journal and moved on to other, more urgent duties. I had a couple of hours left on Friday last week and I stumbled on those notes which were sitting there, on my hard disk, and I decided it was a good time to write a blog post about this experiment. I started writing this post embedding script fragments, but I quickly wanted to check what I wrote actually worked, so I began running those script fragments. But then it made this experiment non-repeatable, which is definitely annoying if you make a mistake, want to restart from scratch, or change some parameters… So I decided this stuff would warrant a minor project of its own where I could provide all the needed code to configure multi-host networking in docker based on GRE tunnels. I have done my share of system configuration and operations management and have been able to use or create some useful tools to streamline ops in Haskell, so it quickly became obvious I would need to write some Haskell code. So what started as a mundane journal cleanup ended up being a full-blown yak-shaving session whose result can be found in this github repository.
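For the curious, the core of the GRE trick looks roughly like the sketch below. This is my own illustration, not the scripts from the repository: the IP addresses are hypothetical, the bridge name assumes Docker's default docker0, and you would still need to give each host a non-overlapping container subnet plus matching routes.

```sh
# On host A (10.0.0.1); peer host B is 10.0.0.2.
# Create a layer-2 GRE (gretap) tunnel towards host B...
ip link add gretap1 type gretap local 10.0.0.1 remote 10.0.0.2
ip link set gretap1 up

# ...and enslave it to the Docker bridge so containers on both hosts
# end up on one shared L2 segment. Mirror these steps on host B.
ip link set gretap1 master docker0
```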
2020-10-21 04:21:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2016734480857849, "perplexity": 1280.3494068149319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107875980.5/warc/CC-MAIN-20201021035155-20201021065155-00705.warc.gz"}
http://kstc.com/443o6mw/page.php?5c9398=no-molecular-orbital-diagram
Molecular Orbital Diagrams of Diatomic Molecules

Introduction: In chemistry, molecular orbital (MO) theory is a method for determining molecular structure in which electrons are not assigned to individual bonds between atoms but are treated as moving under the influence of the nuclei of the whole molecule. A molecular orbital is a mathematical function describing the wave-like behavior of an electron in a molecule; the term "orbital" was introduced by Robert S. Mulliken in 1932 as an abbreviation for "one-electron orbital wave function". Molecular orbitals are formed by linear combination of atomic orbitals, and MO diagrams provide qualitative information about the structure and stability of the electrons in a molecule. Building molecular orbital diagrams is one of the trickier concepts in chemistry, and it rests on the difference between the two major bonding theories, valence bond theory and molecular orbital theory. Homonuclear diatomic molecules are less difficult to treat than heteronuclear diatomic molecules and polyatomic molecules. (A chart of relative atomic-orbital energies for H through Ar accompanied the original page; for example, the hydrogen 1s level sits at -13.6 eV.)

Example: nitric oxide (NO). Oxygen is more electronegative than nitrogen, so its atomic orbitals lie slightly lower in energy and the bonding orbitals are slightly more concentrated on O. Filling the MO energy-level diagram for NO places the odd electron in a π*2p orbital, so NO is paramagnetic with one unpaired electron (a substance is attracted into a magnetic field if it contains unpaired electrons), and its calculated bond order is (8 - 3)/2 = 2.5. Now draw two more MO diagrams, for the nitrosyl cation NO+ and the nitrosyl anion NO−, calculate their bond orders, and predict whether each species is diamagnetic or paramagnetic. Nitric oxide was labeled the molecule of the year, and a Nobel Prize was awarded for research regarding its role in cardiovascular systems, where it participates in the maintenance of adequate tissue perfusion and effective cardiovascular function.

Example: hydrogen fluoride (HF). The electronic configurations of hydrogen and fluorine are 1s¹ and 1s²2s²2p⁵ respectively. Due to the symmetry and energies of the orbitals, in the formation of the HF molecule only the 2p electrons of the fluorine atom combine effectively with the solitary electron of the hydrogen atom. The same approach covers the bonding orbitals in other small molecules (hydrogen, fluorine, nitrogen, carbon monoxide, methane, ammonia, ethylene) and ions such as the cyanide ion CN⁻.

Exercise: Use the MO diagram shown below to draw an energy diagram and predict the bond order of Be2+ and Be2−, determine whether each is diamagnetic or paramagnetic, and give the number and identities of each molecular orbital. (Assume that the $\sigma_{p}$ orbitals are lower in energy than the $\pi$ orbitals.) Do you expect these molecules to exist? Circle one: YES / NO. Explain your choice. What is the bond order in ClF?

Drawing MO diagrams in LaTeX: molecular diagrams are created using the environment MOdiagram of the modiagram package. First, the package is imported with \usepackage{modiagram}; the basic command to draw the atomic levels is \atom. This command has two parameters in the simplest working example: (1) left, the side of the diagram on which the atom is placed, and (2) 1s, 2s, 2p, the energy sub-levels to be drawn. Each molecular orbital can be labelled with its name (sigma, pi, ...), and the available electrons are placed in the appropriate atomic orbitals and molecular orbitals. These can be further customized, as you will learn in the next section; open an example of the modiagram package in ShareLaTeX.
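A minimal sketch of the usage just described. The general shape (\atom with a side and a list of sub-levels) follows the text above; the exact occupation syntax ({energy; occupation} with up/pair specifiers) is my reading of the modiagram manual and should be treated as an assumption:

```latex
\documentclass{article}
\usepackage{modiagram}% provides the MOdiagram environment

\begin{document}
\begin{MOdiagram}
  % left-hand atom with the 1s, 2s and 2p sub-levels;
  % the energy offsets (0, 1, 2) are placeholder values
  \atom{left}{
    1s = {0; pair},
    2s = {1; pair},
    2p = {2; up, up, up}
  }
\end{MOdiagram}
\end{document}
```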
2021-05-06 18:13:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3782377243041992, "perplexity": 4547.883695435755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00385.warc.gz"}
http://libble.tk/libble-dl/
## LIBBLE-DL

### Introduction

LIBBLE-DL is the LIBBLE variant for distributed deep learning, implemented on top of PyTorch. Currently, PyTorch only provides an AllReduce framework for distributed deep learning, whose communication cost is high. Here, we design and develop three distributed deep learning frameworks based on PyTorch: MR-DisPyTorch, RA-DisPyTorch and PS-DisPyTorch. These three frameworks have lower communication cost than the AllReduce framework in PyTorch. MR-DisPyTorch, RA-DisPyTorch and PS-DisPyTorch handle different kinds of application scenarios, and users can choose the suitable framework according to their specific needs in real applications.

### Tutorial

• MR-DisPyTorch

MR-DisPyTorch is a distributed deep learning framework based on the MapReduce programming model [Dean et al., 2008]. MR-DisPyTorch adopts a synchronous update strategy. It is able to handle application scenarios where the network model is small, the number of distributed nodes is small and the computation resources of the nodes are even. The source code is stored inside the dispytorch/mapreduce directory. We define two classes, master and worker, which let users create nodes conveniently; their instances represent the master node and the worker nodes, respectively. Users can start a distributed deep learning task with the MapReduce programming model by following the code below:

```python
import torch
import torch.distributed as dist
import dispytorch

# The default process group must be initialized before dist.get_rank()
# can be called; the MPI backend matches the provided docker image
# (treat the backend choice as an assumption).
dist.init_process_group('mpi')

rank = dist.get_rank()
world_size = dist.get_world_size()

if rank == 0:
    n = dispytorch.mapreduce.master(**args)
else:
    n = dispytorch.mapreduce.worker(**args)
n.train()
```

Parameters of master and worker are listed as follows:

| parameter | description |
| --- | --- |
| rank | rank of current process |
| num_workers | number of worker nodes |
| cuda | whether to compute on GPU |
| save_path | path to save model (default: none) |
| data_loader | training data loader (only on worker nodes) |
| test_loader | test data loader (only on master node) |
| model | defines the model architecture |
| criterion | defines the loss function |
| optim_fn | defines the optimizer (only on master node) |
| adjust epochs | epochs at which the learning rate is set to the initial LR decayed by 10 (only on master node) |
| num_epochs | number of total epochs to run |
| start_epoch | manual epoch number (useful on restarts) |
| bucket_comm | whether to communicate layer by layer |

• RA-DisPyTorch

RA-DisPyTorch is a decentralized distributed deep learning framework based on the Ring AllReduce programming model [Gibiansky et al., 2017]. RA-DisPyTorch adopts a synchronous update strategy. It is able to handle application scenarios where the network model is large, the number of distributed nodes is large and the computation resources of the nodes are even. The source code is stored inside the dispytorch/ring directory. There is only one kind of computation node in the Ring AllReduce programming model; therefore, we define a single node class, each instance of which represents a computation node.
Users can start a distributed deep learning task with the Ring AllReduce programming model by following the code below:

```python
import torch
import torch.distributed as dist
import dispytorch

# initialize the default process group first (MPI backend assumed,
# matching the provided docker image)
dist.init_process_group('mpi')

rank = dist.get_rank()
world_size = dist.get_world_size()

n = dispytorch.ring.node(**args)
n.train()
```

Parameters of node are listed as follows:

| parameter | description |
| --- | --- |
| rank | rank of current process |
| world_size | number of processes in the distributed group |
| cuda | whether to compute on GPU |
| save_path | path to save model (default: none) |
| data_loader | training data loader |
| test_loader | test data loader |
| model | defines the model architecture |
| criterion | defines the loss function |
| optim_fn | defines the optimizer |
| adjust epochs | epochs at which the learning rate is set to the initial LR decayed by 10 |
| num_epochs | number of total epochs to run |
| start_epoch | manual epoch number (useful on restarts) |
| bucket_comm | whether to communicate layer by layer |

• PS-DisPyTorch

PS-DisPyTorch is a distributed deep learning framework based on the Parameter Server programming model [Li et al., 2014]. PS-DisPyTorch supports synchronous, asynchronous and semi-synchronous (stale-synchronous) update strategies. The synchronous strategy is able to handle application scenarios where the deep learning model is of medium size, the number of distributed nodes is medium and the computation resources of the nodes are even. The asynchronous and semi-synchronous strategies handle scenarios where the model size and node count are medium but the computation resources of the nodes are uneven. The source code is stored inside the dispytorch/ps directory. We define three classes, coordinator, server and worker, whose instances represent the coordinator node, server nodes and worker nodes, respectively. Users can start a distributed deep learning task with the Parameter Server programming model by following the code below:

```python
import torch
import torch.distributed as dist
import dispytorch

dist.init_process_group('mpi')  # MPI backend assumed, as above

rank = dist.get_rank()
world_size = dist.get_world_size()

# num_servers is the number of server nodes in the deployment
if rank == 0:
    n = dispytorch.ps.coordinator(**args)
elif 0 < rank <= num_servers:
    n = dispytorch.ps.server(**args)
else:
    n = dispytorch.ps.worker(**args)
n.train()
```

Parameters of coordinator, server and worker are listed as follows:

| parameter | description |
| --- | --- |
| rank | rank of current process |
| servers | rank list of server nodes |
| workers | rank list of worker nodes |
| cuda | whether to compute on GPU |
| save_path | path to save model (default: none) |
| data_loader | training data loader (only on worker nodes) |
| test_loader | test data loader (only on coordinator node) |
| num_batches | number of batches in an epoch |
| model | defines the model architecture |
| criterion | defines the loss function |
| optim_fn | defines the optimizer (only on server nodes) |
| time_window | maximal delay time (only on server nodes) |
| adjust epochs | epochs at which the learning rate is set to the initial LR decayed by 10 (only on server nodes) |
| num_epochs | number of total epochs to run |
| start_epoch | manual epoch number (useful on restarts) |

### Configure Environment

LIBBLE-DL provides a docker image for quick configuration.

• Prerequisites

The list of prerequisites is described below.

• NVIDIA drivers 375.66
• CUDA 8.0
• cuDNN 7.0
• nvidia-docker 1.0

• How to use

Download the tar archive pytorchmpi_cudnn.tar and load the image from it:

```sh
docker load -i pytorchmpi_cudnn.tar
```

Run utils/bootmpipytorch.sh to create containers and set up passwordless SSH login among them:

```sh
./bootmpipytorch.sh arg1 arg2
```

The first argument indicates the number of containers you want to create. The second argument indicates the path to bind-mount as a volume.
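For instance, to create four containers that share a hypothetical host directory /data (both values are made up for illustration):

```sh
./bootmpipytorch.sh 4 /data
```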
After running the .sh file, we will have N containers named from $node_0$ to $node_{N-1}$. We can enter any one of these containers to do further operations with the following command:
2019-01-17 21:11:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3302544355392456, "perplexity": 5512.321449839709}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659340.1/warc/CC-MAIN-20190117204652-20190117230652-00225.warc.gz"}
http://www.ck12.org/measurement/Identification-of-Equivalent-Customary-Units-of-Capacity/postread/Identification-of-Equivalent-Customary-Units-of-Capacity-Four-Square-Concept-Matrix/r1/
Identification of Equivalent Customary Units of Capacity (Activities) | Measurement | CK-12 Foundation

# Identification of Equivalent Customary Units of Capacity: Four Square Concept Matrix

Teacher Contributed

Summarize the main idea of a reading, create visual aids, and come up with new questions using a Four Square Concept Matrix.
2015-02-02 01:57:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8335162997245789, "perplexity": 11937.005015806722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122220909.62/warc/CC-MAIN-20150124175700-00193-ip-10-180-212-252.ec2.internal.warc.gz"}
http://accessanesthesiology.mhmedical.com/content.aspx?bookid=1613&sectionid=102164457
Chapter 64

### OVERVIEW OF OCULAR ANATOMY, PHYSIOLOGY, AND BIOCHEMISTRY

The eye is a specialized sensory organ that is relatively secluded from systemic access by the blood-retinal, blood-aqueous, and blood-vitreous barriers; as a consequence, the eye exhibits some unusual pharmacodynamic and pharmacokinetic properties. Because of its anatomical isolation, the eye offers a unique, organ-specific pharmacological laboratory in which to study the autonomic nervous system and the effects of inflammation and infectious diseases. No other organ in the body is so readily accessible or as visible for observation; however, the eye also presents some unique challenges as well as opportunities for drug delivery (Robinson, 1993).

#### Extraocular Structures

The eye is protected by the eyelids and by the orbit, a bony cavity of the skull that has multiple fissures and foramina that conduct nerves, muscles, and vessels (Figure 64–1). In the orbit, connective (i.e., Tenon's capsule) and adipose tissues and six extraocular muscles support and align the eyes for vision. The retrobulbar region lies immediately behind the eye (or globe). Understanding ocular and orbital anatomy is important for safe periocular drug delivery, including subconjunctival, sub-Tenon's, and retrobulbar injections.

###### Figure 64–1. Anatomy of the globe in relationship to the orbit and eyelids. Various routes of administration of anesthesia are demonstrated by the blue needle pathways.

The eyelids serve several functions. Foremost, their dense sensory innervation and eyelashes protect the eye from mechanical and chemical injuries. Blinking, a coordinated movement of the orbicularis oculi, levator palpebrae, and Müller's muscles, serves to distribute tears over the cornea and conjunctiva. In humans, the average blink rate is 15-20 times/minute. The external surface of the eyelids is covered by a thin layer of skin; the internal surface is lined with the palpebral portion of the conjunctiva, which is a vascularized mucous membrane continuous with the bulbar conjunctiva. At the reflection of the palpebral and bulbar conjunctivae is a space called the fornix, located superiorly and inferiorly behind the upper and lower lids, respectively. Topical medications usually are placed in the inferior fornix, also known as the inferior cul-de-sac.

The lacrimal system consists of secretory glandular and excretory ductal elements (Figure 64–2). The secretory system is composed of the main lacrimal gland, which is located in the temporal outer portion of the orbit, and accessory glands, also known as the glands of Krause and Wolfring, located in the conjunctiva. The lacrimal gland is innervated by the autonomic nervous system (Table 64–1 and Chapter 8). The parasympathetic innervation is clinically relevant because a patient may complain of dry eye symptoms while taking medications with anticholinergic side effects, such as tricyclic antidepressants (Chapter 15), antihistamines (Chapter 32), and drugs used in the management of Parkinson disease (Chapter 22). Located just posterior ...
2017-02-22 15:26:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24914434552192688, "perplexity": 7678.999646076078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00386-ip-10-171-10-108.ec2.internal.warc.gz"}
https://tug.org/pipermail/tugindia/2010-June/005247.html
[Tugindia] Unified documenting [was] Regarding usage of css in latex
Mon Jun 28 18:16:30 CEST 2010

On Mon, Jun 28, 2010 at 11:21 AM, S. venkataraman <svenkat at ignou.ac.in> wrote:
> On Sat, Jun 26, 2010 at 7:22 AM, H.S.Rai <hsrai at gmx.net> wrote:
>> On Tue, Jun 22, 2010 at 12:53 PM, S. venkataraman <svenkat at ignou.ac.in>
>> wrote:
>>>
>>> If you want to create both presentation and notes from the same
>>> source, you can use beamer.
>>
>> Is it appropriate to call such process "Unified Documenting"?

I think that sounds like a good way of calling it. Rai, did you mean
creating multiple outputs with different content from the same source,
or did you mean creating different types of output like html, xml, pdf,
epub, etc. from the same source? TeX is ideal for all of these.

>> IIRC, pdfscreen can also be used for this purpose. Would like if
>> someone with more insight of pdfscreen and Beamer, may advise OP

In fact, pdfscreen does this. It defines two environments -- screen and
print. Whatever text appears within \begin{screen} ... \end{screen} will
be output only when you invoke the 'screen' option, while
\begin{print} ... \end{print} will appear only when you invoke the
'print' option. These environments, in combination with the options of
the same name, can be used to keep different content and to create
different outputs for a presentation and a notes document.

[snipped many lines of unwanted text]

> What do you have in mind by unified processing? I have checked out
> the pdfscreen documentation thinking I might have missed something
> in the documentation. It doesn't allow you to produce notes from
> the same source. I thought that you may have pdfslides in mind so I

pdfslide is just a slide maker which is hardwired to pdfTeX; it does not
even have any other backend driver.

> checked it too. Even this doesn't have this facility. Pdfscreen
> and Pdfslides have the advantage that they are easy to pick up as
> opposed to beamer. Pdfscreen is good when you want to create an
> e-book in PDF format.

That is true. pdfscreen was written during the late nineties for the
University of New Zealand (and they allowed me to release it as free
software) for delivery of lecture notes to the students over their
intranet, making it easy for them to read on screen. At the same time,
it had to allow printing of the same content in the usual way on
A4/letter paper without any extra formatting or coding of the document.
pdfscreen does both jobs and it is meant for longish documents. And
later, as there was no presentation package available at that time, I
added a slide feature to it. And if you cleverly use the 'screen' and
'print' options and environments, you can make use of pdfscreen in a
different way, as Rai has intended.

> But, they don't seem to have any facilities for 'unified
> processing' now.

That is not true, in my humble opinion. However, newer packages like
beamer, prosper, etc. have a lot more features and are much more
powerful than pdfscreen. I have not worked on pdfscreen for several
years now, nor do I have any intention to work on it any further, as my
priorities have changed from PDF to XML these days. People shall make
use of newer packages for making presentations.

> So, can you clarify the basis of your statement? Such off the cuff
> postings are of no help. Before you post kindly check your facts.

That was a slightly abrasive comment, Venkataraman, in my opinion.
Since I didn't find any response from Rai, I wrote the above. Hope this
explains the position.
> Apart from the standard features, beamer also has the commands
> \mode<article>, \mode<presentation> and this can have text appear
> only in specific versions.

Here, I have the same opinion as you, Venkataraman.

Actually, you don't need anything clever to keep content for different
outputs in the same document. You only need the good old comment.sty
which offers wonderful commands like \excludecomment and
\includecomment. See the following document:

%--------> begin <---------
\documentclass{article}
\usepackage{lipsum,comment}
\excludecomment{note}
\includecomment{presentation}
\begin{document}
\begin{note}
This is note \lipsum[1]
\end{note}
\begin{presentation}
This is presentation. \lipsum[2]
\end{presentation}
\end{document}
%--------> end <---------

If you compile this document, you will get only the presentation part.
And if you change the \excludecomment{note} to \includecomment{note}
and reverse presentation to \excludecomment, you will get only the note
part. You need to write a small package with two options -- 'note' and
'presentation' -- which should do this job. By
\usepackage[note]{<package name>} you can switch to note, and by
invoking presentation, you can typeset the presentation part alone.
Hope this might help the OP.

Best regards
--
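[Editorial sketch, not part of the thread: a minimal package along the lines suggested above might look as follows. The package name "unidoc" is made up, and the option handling assumes that comment.sty lets a later \includecomment override the earlier \excludecomment defaults.]

%--------> unidoc.sty (sketch) <---------
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{unidoc}[2010/06/28 unified documenting sketch]
\RequirePackage{comment}
% Both environments are excluded by default; each option re-includes one.
\excludecomment{note}
\excludecomment{presentation}
\DeclareOption{note}{\includecomment{note}}
\DeclareOption{presentation}{\includecomment{presentation}}
\ProcessOptions\relax
%--------> end <---------

With that in place, \usepackage[note]{unidoc} should typeset only the
note material, and \usepackage[presentation]{unidoc} only the
presentation material.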
2022-07-02 05:39:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7006644606590271, "perplexity": 4584.232658022614}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00469.warc.gz"}
http://krypted.com/windows-server/locate-the-citrix-datastore/
# krypted.com

#### Tiny Deathstars of Foulness

There are times in a Citrix environment where you might have servers pointing to different data stores. You then might get confused about which box is pointing to which datastore location. To find out, open PowerShell on the Citrix server and run the following command:

`cat "c:\program files\citrix\independent management architecture\mf20.dsn"`

April 24th, 2014 Posted In: Windows Server
2017-05-29 19:06:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8386934399604797, "perplexity": 7878.391281361077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612537.91/warc/CC-MAIN-20170529184559-20170529204559-00389.warc.gz"}
https://en.wikipedia.org/wiki/Robinson_projection
# Robinson projection

Robinson projection of the world

The Robinson projection with Tissot's indicatrix of deformation

The Robinson projection is a map projection of a world map which shows the entire world at once. It was specifically created in an attempt to find a good compromise to the problem of readily showing the whole globe as a flat image.[1]

The Robinson projection was devised by Arthur H. Robinson in 1963 in response to an appeal from the Rand McNally company, which has used the projection in general-purpose world maps since that time. Robinson published details of the projection's construction in 1974. The National Geographic Society (NGS) began using the Robinson projection for general-purpose world maps in 1988, replacing the Van der Grinten projection.[2] In 1998 NGS abandoned the Robinson projection for that use in favor of the Winkel tripel projection, as the latter "reduces the distortion of land masses as they near the poles".[3][4]

## Strengths and weaknesses

The Robinson projection is neither equal-area nor conformal, abandoning both for a compromise. The creator felt that this produced a better overall view than could be achieved by adhering to either. The meridians curve gently, avoiding extremes, but thereby stretch the poles into long lines instead of leaving them as points.[1] Hence, distortion close to the poles is severe, but quickly declines to moderate levels moving away from them. The straight parallels imply severe angular distortion at the high latitudes toward the outer edges of the map – a fault inherent in any pseudocylindrical projection. However, at the time it was developed, the projection effectively met Rand McNally's goal to produce appealing depictions of the entire world.[5][6]

I decided to go about it backwards. … I started with a kind of artistic approach. I visualized the best-looking shapes and sizes. I worked with the variables until it got to the point where, if I changed one of them, it didn't get any better. Then I figured out the mathematical formula to produce that effect. Most mapmakers start with the mathematics.

— 1988 New York Times article[1]

## Formulation

The projection is defined by the table:[7][8][9]

| Latitude | X | Y |
| --- | --- | --- |
| 0° | 1.0000 | 0.0000 |
| 5° | 0.9986 | 0.0620 |
| 10° | 0.9954 | 0.1240 |
| 15° | 0.9900 | 0.1860 |
| 20° | 0.9822 | 0.2480 |
| 25° | 0.9730 | 0.3100 |
| 30° | 0.9600 | 0.3720 |
| 35° | 0.9427 | 0.4340 |
| 40° | 0.9216 | 0.4958 |
| 45° | 0.8962 | 0.5571 |
| 50° | 0.8679 | 0.6176 |
| 55° | 0.8350 | 0.6769 |
| 60° | 0.7986 | 0.7346 |
| 65° | 0.7597 | 0.7903 |
| 70° | 0.7186 | 0.8435 |
| 75° | 0.6732 | 0.8936 |
| 80° | 0.6213 | 0.9394 |
| 85° | 0.5722 | 0.9761 |
| 90° | 0.5322 | 1.0000 |

The table is indexed by latitude at 5-degree intervals; intermediate values are calculated using interpolation. Robinson did not specify any particular interpolation method, but it is reported that he used Aitken interpolation himself.[10] The X column is the ratio of the length of the parallel to the length of the equator; the Y column can be multiplied by 0.2536[11] to obtain the ratio of the distance of that parallel from the equator to the length of the equator.[7][9]

Coordinates of points on a map are computed as follows:[7][9]

{\displaystyle {\begin{aligned}x&=0.8487\,RX(\lambda -\lambda _{0}),\\y&=1.3523\,RY,\end{aligned}}}

where R is the radius of the globe at the scale of the map, λ is the longitude of the point to plot, and λ0 is the central meridian chosen for the map (both λ and λ0 are expressed in radians).
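As an illustration of the formulation (an editorial addition, not part of the article), here is a small Python sketch that evaluates the projection from the table above. It uses linear interpolation between the 5° rows for simplicity, whereas Robinson reportedly used Aitken interpolation, so values between table entries are approximate:

```python
import numpy as np

# Robinson's table: latitude (degrees), X (parallel length ratio), Y
LATS = np.arange(0, 95, 5)
X = np.array([1.0000, 0.9986, 0.9954, 0.9900, 0.9822, 0.9730, 0.9600,
              0.9427, 0.9216, 0.8962, 0.8679, 0.8350, 0.7986, 0.7597,
              0.7186, 0.6732, 0.6213, 0.5722, 0.5322])
Y = np.array([0.0000, 0.0620, 0.1240, 0.1860, 0.2480, 0.3100, 0.3720,
              0.4340, 0.4958, 0.5571, 0.6176, 0.6769, 0.7346, 0.7903,
              0.8435, 0.8936, 0.9394, 0.9761, 1.0000])

def robinson(lat_deg, lon_deg, lon0_deg=0.0, R=1.0):
    """Map (latitude, longitude) in degrees to (x, y) map coordinates."""
    x_ratio = np.interp(abs(lat_deg), LATS, X)   # table lookup with interpolation
    y_ratio = np.interp(abs(lat_deg), LATS, Y)
    dlon = np.radians(lon_deg - lon0_deg)
    x = 0.8487 * R * x_ratio * dlon
    y = 1.3523 * R * y_ratio * np.copysign(1.0, lat_deg)  # sign for the hemisphere
    return x, y

print(robinson(45.0, 90.0))  # a point at 45 N, 90 E
```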
Simple consequences of these formulas are: • With x computed as constant multiplier to the meridian across the entire parallel, meridians of longitude are thus equally spaced along the parallel. • With y having no dependency on longitude, parallels are straight horizontal lines. ## References 1. ^ a b c John Noble Wilford (October 25, 1988). "The Impossible Quest for the Perfect Map". The New York Times. Retrieved 1 May 2012. 2. ^ Snyder, John P. (1993). Flattening the Earth: 2000 Years of Map Projections. University of Chicago Press. p. 214. ISBN 0226767469. 3. ^ "National Geographic Maps – Wall Maps – World Classic (Enlarged)". National Geographic Society. Retrieved 2019-02-17. This map features the Winkel Tripel projection to reduce distortion of land masses as they near the poles. 4. ^ "Selecting a Map Projection". National Geographic Society. Retrieved 2019-02-17. 5. ^ Myrna Oliver (November 17, 2004). "Arthur H. Robinson, 89; Cartographer Hailed for Map's Elliptical Design". Los Angeles Times. Retrieved 1 May 2012. 6. ^ New York Times News Service (November 16, 2004). "Arthur H. Robinson, 89 Geographer improved world map". Chicago Tribune. Retrieved 1 May 2012. 7. ^ a b c Ipbuker, C. (July 2005). "A Computational Approach to the Robinson Projection". Survey Review. 38 (297): 204–217. doi:10.1179/sre.2005.38.297.204. Retrieved 2019-02-17. 8. ^ "Table for Constructing the Robinson Projection". RadicalCartography.net. Retrieved 2019-02-17. 9. ^ a b c Snyder, John P.; Voxland, Philip M. (1989). An Album of Map Projections (PDF). U.S. Geological Survey Professional Paper 1453. Washington, D.C.: U.S. Government Printing Office. pp. 82–83, 222–223. doi:10.3133/pp1453. Retrieved 2019-02-18. 10. ^ Richardson, R. T. (1989). "Area deformation on the Robinson projection". The American Cartographer. 16 (4): 294–296. doi:10.1559/152304089783813936. 11. ^ From the formulas below, this can be calculated as ${\displaystyle {\frac {1.3523}{0.8487\cdot 2\pi }}\approx 0.2536}$.
2020-11-30 15:04:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9035102725028992, "perplexity": 4957.151109966178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141216175.53/warc/CC-MAIN-20201130130840-20201130160840-00254.warc.gz"}
https://www.nextgurukul.in/wiki/concept/cbse/class-7/maths/perimeter-and-area/plane-figures/3957664
Notes On Plane Figures - CBSE Class 7 Maths

Perimeter of a closed figure is the distance around it, whereas area is the region enclosed by the closed figure. Perimeter of a regular polygon = number of sides × length of one side.

Area and perimeter of a rectangle
The perimeter of a rectangle is twice the sum of the lengths of its adjacent sides. Perimeter of a rectangle of length 'l' units and breadth 'b' units = 2(l + b). The area of a rectangle is the product of its length and breadth. Area of a rectangle of length 'l' units and breadth 'b' units = l × b. The perimeter of rectangle ABCD = 2(AB + BC). Area of rectangle ABCD = AB × BC. Each diagonal of a rectangle divides it into two triangles that are equal in area.

Area and perimeter of a square
The perimeter of a square with side s units is four times the length of its side. Perimeter of a square with side s units = 4 × s. The area of a square with side s is equal to side multiplied by side. Area of a square with side s units = s × s. The perimeter of square ABCD = 4AB or 4BC or 4CD or 4DA. Area of square ABCD = AB² or BC² or CD² or DA². The diagonals of a square divide it into four triangles that are equal in area. A rectangle and a square having the same perimeter need not have the same area. If the perimeter of a figure increases, it is not necessary that its area also increases.

Area and perimeter of a triangle
The perimeter of a triangle is the sum of the lengths of its sides. Perimeter of a triangle with sides a, b and c = (a + b + c). The area of a triangle is the space enclosed by its three sides. Area of a triangle is half of the product of its base and the corresponding altitude. Area of a triangle with b as the base and h as the altitude = $\frac{1}{2}$ × bh. Triangles equal in area need not be congruent, but all congruent triangles are equal in area.

Area and perimeter of a parallelogram
The perimeter of a parallelogram is twice the sum of the lengths of the adjacent sides. The area of a parallelogram is the product of its base and the corresponding altitude. Area of a parallelogram with b as the base and h as the altitude = b × h. Any side of a parallelogram can be considered as the base. The perpendicular drawn on that side from the opposite vertex is known as the height (altitude). The perimeter of parallelogram ABCD = 2(AB + BC). Area of parallelogram ABCD = (AB × DE) or (AD × BF). A parallelogram in which the adjacent sides are equal is called a rhombus. The perimeter and area of a rhombus can be calculated using the same formulae as those for a parallelogram.

Conversion of units
1 cm = 10 mm
1 cm² = 100 mm²
1 m² = 10,000 cm²
1 hectare = 10,000 m²
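As a quick illustration (an editorial addition, not part of the original notes), the formulas above translate directly into code; the last two lines check the claim that equal perimeters do not force equal areas:

```python
def rectangle_perimeter(l, b): return 2 * (l + b)
def rectangle_area(l, b):      return l * b

def square_perimeter(s): return 4 * s
def square_area(s):      return s * s

def triangle_perimeter(a, b, c):  return a + b + c
def triangle_area(base, height):  return 0.5 * base * height

def parallelogram_perimeter(a, b):     return 2 * (a + b)
def parallelogram_area(base, height):  return base * height

# A rectangle and a square with the same perimeter need not have the same area:
print(rectangle_perimeter(6, 2), rectangle_area(6, 2))  # 16 12
print(square_perimeter(4), square_area(4))              # 16 16
```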
2021-06-21 09:36:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 2, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6480578780174255, "perplexity": 254.68754809667672}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00283.warc.gz"}
http://www.ncatlab.org/nlab/show/truth+value
# Contents

## Idea

Classically, a truth value is either $\top$ (True) or $\bot$ (False). (In constructive mathematics, this is not so simple, although it still holds that any truth value that is not true is false.)

More generally, a truth value in a topos $T$ is a morphism $1 \to \Omega$ (where $1$ is the terminal object and $\Omega$ is the subobject classifier) in $T$. By definition of $\Omega$, this is equivalent to an (equivalence class of) monomorphisms $U\hookrightarrow 1$. In a two-valued topos, it is again true that every truth value is either $\top$ or $\bot$, while in a Boolean topos this is true in the internal logic.

Truth values form a poset (the poset of truth values) by declaring that $p$ precedes $q$ iff the conditional $p \to q$ is true. In a topos $T$, $p$ precedes $q$ if the corresponding subobject $P\hookrightarrow 1$ is contained in $Q\hookrightarrow 1$. Classically (or in a two-valued topos), one can write this poset as $\{\bot \to \top\}$.

The poset of truth values is a Heyting algebra. Classically (or internal to a Boolean topos), this poset is even a Boolean algebra. It is also a complete lattice; in fact, it can be characterised as the initial complete lattice. As a complete Heyting algebra, it is a frame, corresponding to the one-point locale. When the set of truth values is equipped with the specialization topology, the result is Sierpinski space.

A truth value may be interpreted as a $0$-poset or as a $(-1)$-groupoid. It is also the best interpretation of the term '$(-1)$-category', although this doesn't fit all the patterns of the periodic table.

| homotopy level | n-truncation | homotopy theory | higher category theory | higher topos theory | homotopy type theory |
| --- | --- | --- | --- | --- | --- |
| h-level 0 | (-2)-truncated | contractible space | (-2)-groupoid | | true/unit type/contractible type |
| h-level 1 | (-1)-truncated | | (-1)-groupoid/truth value | | mere proposition, h-proposition |
| h-level 2 | 0-truncated | discrete space | 0-groupoid/set | sheaf | h-set |
| h-level 3 | 1-truncated | homotopy 1-type | 1-groupoid/groupoid | (2,1)-sheaf/stack | h-groupoid |
| h-level 4 | 2-truncated | homotopy 2-type | 2-groupoid | | h-2-groupoid |
| h-level 5 | 3-truncated | homotopy 3-type | 3-groupoid | | h-3-groupoid |
| h-level $n+2$ | $n$-truncated | homotopy n-type | n-groupoid | | h-$n$-groupoid |
| h-level $\infty$ | untruncated | homotopy type | ∞-groupoid | (∞,1)-sheaf/∞-stack | h-$\infty$-groupoid |
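As a toy illustration (an editorial addition, not from the nLab page), the classical order on the poset of truth values, where $p$ precedes $q$ iff $p \to q$ holds, can be exhibited for the two classical values in Lean:

```lean
-- p precedes q iff the conditional p → q holds.
example : False → False := id                  -- ⊥ ≤ ⊥
example : False → True  := fun _ => trivial    -- ⊥ ≤ ⊤
example : True  → True  := id                  -- ⊤ ≤ ⊤
-- ⊤ ≤ ⊥ fails: a proof of True → False would yield a proof of False.
example (h : True → False) : False := h trivial
```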
2014-10-31 17:54:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 29, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9151990413665771, "perplexity": 742.34085726249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900032.4/warc/CC-MAIN-20141030025820-00009-ip-10-16-133-185.ec2.internal.warc.gz"}
https://hal.in2p3.fr/in2p3-00950224
# Measurement of charged particle multiplicities and densities in $pp$ collisions at $\sqrt{s}=7\;$TeV in the forward region

Abstract : Charged particle multiplicities are studied in proton-proton collisions in the forward region at a centre-of-mass energy of $\sqrt{s} = 7\;$TeV with data collected in 2010 by the LHCb detector. The forward spectrometer allows access to a kinematic range of $2.0<\eta<4.8$ in pseudorapidity, momenta down to $2\;$GeV/$c$ and transverse momenta down to $0.2\;$GeV/$c$. The measurements are performed using minimum-bias events with at least one charged particle in the kinematic acceptance. The results are presented as functions of pseudorapidity and transverse momentum and are compared to predictions from several Monte Carlo event generators.

Document type : Journal articles

https://hal.in2p3.fr/in2p3-00950224

### Citation

R. Aaij, B. Adeva, M. Adinolfi, A. Affolder, Ziad Zj Ajaltouni, et al. Measurement of charged particle multiplicities and densities in $pp$ collisions at $\sqrt{s}=7\;$TeV in the forward region. European Physical Journal C: Particles and Fields, 2014, 74, pp. 2888. ⟨10.1140/epjc/s10052-014-2888-1⟩. ⟨in2p3-00950224⟩
2022-12-10 03:27:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8296034932136536, "perplexity": 5495.251222057575}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711637.64/warc/CC-MAIN-20221210005738-20221210035738-00359.warc.gz"}
https://hamaluik.ca/posts/quaternions-as-four-dimensional-complex-numbers/
# Quaternions as Four-Dimensional Complex Numbers

Although I have a pretty solid background in math (especially vectors, matrices, and even tensors), I've always somewhat struggled with quaternions. Most sources focus on quaternions as some tool for performing rotations in three dimensions while avoiding gimbal lock. Which is true, they are that, but they're also more. After reading several articles about quaternions over the past several days, quaternions finally clicked and made sense! I'll try to share that insight with you here, though be warned that my description may be just as confusing (if not more so) than the descriptions anywhere else.

In short, once I really understood that quaternions are simply four-dimensional complex numbers, understanding their creation and use became a lot simpler. Quaternions are basically just four-dimensional vectors, whose orthonormal basis lies in some weird four-dimensional existence. That sounds like a mouthful, and to be honest, it kind of is. Let's take a step back and look at complex numbers. Actually, before that, let's look at orthonormal bases.

## Orthonormal Bases

If you don't know what an orthonormal basis is, that's probably just because you don't know the name. To quote wikipedia:

In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other.

That is to say, an orthonormal basis is a set of vectors which are all perpendicular to each other. You almost assuredly know one such basis: the <x, y, z> coordinate system (also called the "Cartesian coordinate system"). Essentially each component of the basis represents a different dimension. There are many other orthonormal bases, for example: 2D Cartesian coordinates, polar coordinates (2D), cylindrical coordinates (3D), and spherical coordinates (3D), to name a few.

As it turns out, complex numbers also form an orthonormal basis. However, instead of representing physical dimensions, complex numbers form a complex plane composed of real and imaginary components representing real and imaginary dimensions.

## Complex Numbers

Complex numbers are just two-dimensional vectors which are composed of both real and imaginary dimensions. In the 2D Cartesian coordinate system, vectors are composed of the x and y dimensions. In the complex plane, the imaginary dimension is given the label i, where:

$$\hat{i}^2 = -1$$

This is an important identity to know, although we won't need to use it often. Where in the Cartesian plane you might write a vector as:

$$\vec{v} = a \hat{x} + b \hat{y}$$

in the complex plane, you might write a vector as:

$$\vec{x} = a + b \hat{i}$$

Where a represents the real part of the vector and b represents the imaginary part.
### Rotating with Complex Numbers

When rotating a vector in Cartesian coordinates, you can represent the rotation as a combination of cos and sin transforms in the two dimensions:

$$R(\theta) = \cos\left(\theta\right)\hat{x} + \sin\left(\theta\right)\hat{y}$$

Similarly, rotations in the complex plane can be represented as the combination of cos and sin transforms in the two complex dimensions:

$$R(\theta) = \cos\left(\theta\right) + \hat{i}\sin\left(\theta\right)$$

Because of some neat math with complex numbers (including the $\hat{i}^2 = -1$ identity above) which I won't repeat here, this can be reduced to:

$$R(\theta) = e^{\hat{i} \theta}$$

## Quaternions as Four-Dimensional Complex Numbers

Now that we have an understanding of complex numbers in two dimensions, it's pretty straightforward to extend the concept into the four dimensions necessary for quaternions---essentially all we do is define $\hat{j}$ and $\hat{k}$ dimensions to cast the vector into, defining the directions according to Hamilton's formula:

$$\hat{i}^2 = \hat{j}^2 = \hat{k}^2 = \hat{i}\hat{j}\hat{k} = -1$$

A quaternion can then be written as:

$$\vec{q} = w + x\hat{i} + y\hat{j} + z\hat{k}$$

Or, more commonly:

$$\vec{q} = \left<w, x, y, z\right>$$

Where w corresponds to the real dimension and x, y, and z correspond to the three imaginary dimensions. And that's it. That's all quaternions really are. Of course, quaternions are useful for all sorts of things, owing to some more neat math.

### Real and Pure Quaternions

If a quaternion's imaginary components are all equal to zero, then the quaternion is said to be "real":

$$\vec{q}_{real} = w$$

Alternatively, if a quaternion's real component is equal to zero, then the quaternion is said to be "pure":

$$\vec{q}_{pure} = x\hat{i} + y\hat{j} + z\hat{k}$$

Note that any quaternion can be expressed as the sum of its "real" and "pure" parts:

\begin{aligned} \vec{q} &= \vec{q}_{real} + \vec{q}_{pure} \\ &= \left(w\right) + \left(x\hat{i} + y\hat{j} + z\hat{k}\right) \end{aligned}

### Rotations Using Quaternions

Since quaternions are composed of a single real component and three orthogonal imaginary components, they can be written similarly to vectors in our 2D complex plane:

\begin{aligned} \vec{q} &= w + \left<x, y, z\right> \cdot \left<\hat{i}, \hat{j}, \hat{k}\right> \\ &= w + \vec{u} \cdot \vec{i} \end{aligned}

Look familiar? Using the same multiplication formula as before, we get:

$$R(\theta) = e^{(\vec{u} \cdot \vec{i})\theta}$$

or:

$$R(\theta) = \cos(\theta) + (\vec{u} \cdot \vec{i})\sin(\theta)$$

By multiplying a vector by a quaternion (noting that to satisfy the math, we must use a four-dimensional vector, which we can set to be our three-dimensional vector with the real element set to 0), we get another quaternion:

\begin{aligned} \vec{p}' &= q\vec{p} \\ &= \left<w, \vec{u}\right>\left<0, \vec{p}\right> \\ &= \left<-\vec{u}\cdot\vec{p}, w\vec{p} + \vec{u}\times\vec{p}\right> \end{aligned}

Now, if the quaternion represents a rotation as defined above, the result should represent a rotated version of the vector p. Note that we essentially converted p to a "pure" quaternion, so we would expect p' to be a pure quaternion as well, from which we could extract the rotated vector. Somewhat unfortunately, this isn't the case for all but a few very specific circumstances. Most of the time, the result will be a mixed quaternion (meaning it will have both real and pure components), and the pure portion of it will not represent the rotated vector (it will be longer).
Fortunately, this can easily be solved by following the multiplication up with another multiplication---this time, by the inverse of q:

$$\vec{p}' = q\vec{p}q^{-1}$$

By adding this multiplication in, the resulting p' quaternion will be a pure quaternion, with the complex parts representing the vector p rotated by the quaternion q. There's a catch however: since you effectively multiplied the vector twice (once by q and once by the inverse of q), the resulting vector gets rotated by 2θ, meaning that to rotate the vector only by θ, you need to construct q using half the angle.

#### Constructing a Quaternion as a Rotation

Remembering the formula for R(θ) from before, we can construct a rotation quaternion about a unit rotation axis $\vec{u} = \left<u_x, u_y, u_z\right>$ as:

$$q(\theta) = \cos(\theta) + \sin(\theta)\left(u_x\hat{i} + u_y\hat{j} + u_z\hat{k}\right)$$

However, this suffers from the 2θ issue mentioned above, so we actually want to construct it as:

$$q\left(\theta\right) = \cos\left(\frac{\theta}{2}\right) + \sin\left(\frac{\theta}{2}\right)\left(u_x\hat{i} + u_y\hat{j} + u_z\hat{k}\right)$$

The resulting quaternion q can now be used to rotate a vector in three dimensions! Not so shabby, eh? To actually implement the rotation however, you'll need a couple more formulas---namely how to multiply quaternions, and how to calculate the inverse of a quaternion.

##### Multiplying Quaternions

The derivation of multiplying quaternions is fairly straightforward, if somewhat tedious. To save on tedium, I'll just give you the result here, writing $\vec{u}_1$ and $\vec{u}_2$ for the pure parts:

$$\vec{q}_1 \vec{q}_2 = \left<q_{1,w}q_{2,w} - \vec{u}_1\cdot\vec{u}_2,\; q_{1,w}\vec{u}_2 + q_{2,w}\vec{u}_1 + \vec{u}_1 \times \vec{u}_2\right>$$

##### Calculating the Inverse of a Quaternion

The inverse of a quaternion is given via the following formula:

$$q^{-1} = \frac{q^*}{\left|q\right|^2}$$

Where q* represents the conjugate of the quaternion, and is calculated as such:

\begin{aligned} q^* &= q_w - q_x\hat{i} - q_y\hat{j} - q_z\hat{k} \\ &= \left<q_w, -\vec{u}\right> \end{aligned}

## Conclusions

• Quaternions are just vectors (in four dimensions)
• The four dimensions are:
  • real
  • i
  • j
  • k
• 3D vectors can be written as a quaternion, where the x-y-z components of the vector map to the i-j-k components of the quaternion
• Quaternions can be used to rotate other quaternions using a couple of simple formulas

For a full, in-depth discussion of quaternions, check out the sources I used for this post.
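To tie the formulas above together, here is a minimal, self-contained Python sketch (an editorial illustration, not code from the original post) of the half-angle construction, the quaternion product, and the rotation $\vec{p}' = q\vec{p}q^{-1}$:

```python
import math

def quat_from_axis_angle(axis, theta):
    """Build a rotation quaternion (w, x, y, z) from an axis and an angle."""
    ax, ay, az = axis
    n = math.sqrt(ax*ax + ay*ay + az*az)
    ax, ay, az = ax/n, ay/n, az/n      # normalize the axis
    h = theta / 2.0                    # half angle, to counter the 2*theta effect
    s = math.sin(h)
    return (math.cos(h), s*ax, s*ay, s*az)

def quat_mul(q1, q2):
    """Quaternion product: <w1*w2 - u1.u2, w1*u2 + w2*u1 + u1 x u2>."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + w2*x1 + (y1*z2 - z1*y2),
            w1*y2 + w2*y1 + (z1*x2 - x1*z2),
            w1*z2 + w2*z1 + (x1*y2 - y1*x2))

def quat_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, p):
    """Rotate 3-vector p by unit quaternion q via q p q^-1."""
    pq = (0.0, *p)                     # embed p as a pure quaternion
    w, x, y, z = quat_mul(quat_mul(q, pq), quat_conj(q))
    return (x, y, z)                   # the real part is ~0 for unit q

# Rotate (1, 0, 0) by 90 degrees about the z-axis: expect roughly (0, 1, 0).
q = quat_from_axis_angle((0, 0, 1), math.pi / 2)
print(rotate(q, (1.0, 0.0, 0.0)))
```

For a unit quaternion the inverse equals the conjugate (|q|² = 1), which is why the sketch skips the division by |q|².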
2018-11-18 10:25:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9930849075317383, "perplexity": 801.1791367770664}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744348.50/warc/CC-MAIN-20181118093845-20181118115845-00121.warc.gz"}
https://proxies-free.com/tag/twisted/
## at.algebraic topology – Uses for (Framed) E2 algebras twisted by braided monoidal structure

If $C$ is a monoidal category (not necessarily a symmetric monoidal category), it's possible to define the notion of an algebra object $A$ in $C$, with multiplication operations $A^{\otimes n} (:= A\otimes_C A\otimes_C \cdots\otimes_C A)\to A.$

Similarly, if $C$ is a braided monoidal category (resp., a ribbon category), one can define a notion of $E_2$ DG algebra $A$ (resp., framed $E_2$ DG algebra $A$) "twisted" by $C$, consisting of operations $A^{\otimes n}\to A$ compatible with braiding. (Note: I actually don't know a reference for this, but it follows from standard "homotopy field theory" arguments involving the Ran space.)

In particular, if $C$ is a braided monoidal (or ribbon) category coming from an associator on a Lie algebra $g$ (with choice of Casimir), there is a whole category of "associator-twisted" $g$-equivariant $E_2$ (resp., framed $E_2$) algebras.

My question is whether algebras of this type have been encountered before. They feel very CFT-ish, and so I'm particularly curious about physics and knot theory applications. In particular, the framed variant should give some kind of derived 2D TQFT-style invariants. Any references would be useful. Thanks!

## networking – Why are rollover/console port cables not twisted?

Ever since I learned about console cables I felt there was something off about them, and only once I got one for a Cisco ASA did I notice what it was: it's a completely flat cable, end to end! I'm really curious as to why that is, and I haven't been able to find anything online about it, mostly just the difference between them and straight-through cables.

## homotopy theory – Using HoTT, why is twisted cohomology of BG group cohomology?

I've been reading Michael Shulman's blog posts defining cohomology in homotopy type theory, and I'd like to understand (using HoTT) why cohomology of BG is group cohomology.

If I understand correctly, given a parametrized spectrum (i.e., a fibration by spectra) $E: X \to \mathsf{Spectra}$, we define the twisted cohomology of $X$ with coefficients in $E$ to be $H^n(X; E) \equiv \Vert \prod_{x:X} \Omega^{-n} E_0 \Vert_0$. In particular, if we have a parametrized family $V: X \to \mathsf{AbGroup}$ then we can compose with the Eilenberg-MacLane construction $H: \mathsf{AbGroup} \to \mathsf{Spectra}$ to get a parametrized family of spectra $HV: X \to \mathsf{Spectra}$. The cohomology $H^n(X; HV)$ is cohomology with local coefficients, which is the twisted version of ordinary cohomology.

Now if we consider the case $X = BG$ (i.e. $BG = K(G,1)$) for $G$ a set-group, then a parametrized family $V: BG \to \mathsf{AbGroup}$ is the same as a group representation of $G$, since given $g: \bullet = \bullet$, we get a path $g_*: V(\bullet) = V(\bullet)$.

Now, if we consider the corresponding twisted cohomology $H^n(BG; HV) \equiv \Vert \prod_{x:BG} K(V;n) \Vert_0$, why do we get group cohomology? For now let's just consider $H^0(BG; HV) \equiv \Vert \prod_{x:BG} V \Vert_0 = \prod_{x:BG} V$, where the second equality follows because $V$ is a set. In order to get group cohomology, it should be the case that any $v: \prod_{x:BG} V$ encodes a $G$-invariant element of the $G$-representation. But it isn't immediately obvious to me why this should be the case. Any help would be greatly appreciated!

## models – Is there a way to find .obj files of the twisted ones?
I am making a game about the twisted animatronics in FNaF, but I can't put them in because I couldn't find the .obj files anywhere on the internet. I did find one of them, but it wasn't colored. The files I need are .obj files of the following animatronics: Twisted Freddy, Twisted Bonnie, Twisted Foxy and Twisted Wolf. If you can search for these files on the internet, I will be thankful, or you can choose the hard way and make the models themselves. You can search up what they look like on the internet.

## reference request – Twisted affine Lie algebras, Lie bracket and normalized standard invariant form

I am reading the book Infinite-Dimensional Lie Algebras (Kac) and the article Affine Lie algebras and the Virasoro algebras I (Wakimoto). The formulas they write for the Lie bracket $(\,,\,)$ and the normalized standard invariant form $(\,|\,)$ of twisted affine Lie algebras of type $X_N^{(r)}$ contradict each other:

Contradiction 1: In the book, page 139, the bracket is given by one formula, but in the article, page 381, it is given by a different one (both formulas appeared as images in the original post). Here $X(j)$ means $t^j \otimes X$ and $c_s = rK/m$ (see the article to verify it). They are totally different.

Contradiction 2: In the book, page 139, if the normalized standard invariant form is defined as shown there (again as an image in the original post), then it contradicts the Lie bracket on the same page, since $(d'\,|\,(t^i \otimes x, t^j \otimes y)) \ne ((d', t^i \otimes x)\,|\,t^j \otimes y)$.

So, does anyone know the right formulas for the Lie bracket and normalized standard invariant form for the twisted affine Lie algebras mentioned in Theorem 8.7 in the book of Kac?

## ag.algebraic geometry – When is a twisted form coming from a torsor trivial?

Consider a sheaf of groups $G$, equipped with a left torsor $P$ and another left action of $G$ on some $X$. Form the contracted product $P \times^G X := (P \times X)/\sim$ where $\sim$ is the antidiagonal quotient: $(g.p, x) \sim (p, g.x)$.

Q1: When is $P \times^G X$ trivial? I.e., when do we have an isomorphism $P \times^G X \simeq X$?

Partial answer: $P \times^G X \simeq X$ over $(X/G)$ iff $P \times (X/G)$ is a trivial torsor over the stack quotient $(X/G)$.

Proof: We can rewrite $P \times^G X$ as a contracted product of two torsors $(P \times (X/G)) \times^G_{(X/G)} X$. Then we contract with "$X^{-1}$" — the inverse to contracting with $X$ as a torsor over $(X/G)$ and we win. (As in B. Poonen's Rational Points on Varieties, section 5.12.5.3.)

Am I allowed to do this? This argument probably shouldn't have to appeal to algebraic stacks and may be somewhat dubious.

Q2: If I have one isomorphism $P \times^G X \simeq X$, can I choose another one that lies over $(X/G)$? Or at least is $G$-equivariant?

Q3: Is there a natural way to write the triviality of such a twisted form? I first thought $P \times^G X \simeq X$ iff $P$ was trivial, which is clearly false for trivial actions on $X$. Then I was excited to have the pullback $* \to BG$ represent triviality of the twisted form $P \times^G X$ as well as the torsor $P$. Is there a natural representative of the sheaf of isomorphisms between $P \times^G X$ and $X$?

These can all be sheaves, although I'm primarily interested in $G = GL_n, PGL_n, SL_n$, etc. acting on $X = \mathbb{A}^n, \mathbb{P}^n$ as appropriate. More ambitious is $G = \mathrm{Aut}(X)$ for even simple $X$. I'd be happy with answers in any level of generality.

Due Diligence Statement: I'm a novice in the area of "twisted forms" of varieties, so I apologize if the above is evident or obtuse.
I checked all the "similar questions" listed here and couldn't find an answer.

## unity – How do I rotate a twisted upper body towards mouse pointer position?

I have created a Third Person Controller. The camera is behind the player:

I would like to make it so that the player aims at the mouse pointer position. To do that, I use the following code to rotate the chest towards the position:

```
var mousePos = Input.mousePosition;
mousePos.z = 10; // Make sure to add some "depth" to the screen point
var aim = Camera.main.ScreenToWorldPoint(mousePos);
Chest.LookAt(aim);
```

At first I wondered why it doesn't work as expected. The chest wasn't rotated towards the target. Then I noticed that the chest is "twisted". It can be seen well when observed from above:

I would like to learn how to handle this in the smartest way. Should I add a vector to the "aim" vector to compensate for the twist, or is there a better way that I don't know yet? Thank you.

## rt.representation theory – Twisted screening operators and twisted free-field realizations of $\mathcal{W}_n$ algebras

Let $\mathfrak{g} = \mathfrak{sl}_{n+1}$ and I am interested in the principal $\mathcal{W}$-algebra of $\mathfrak{g}$ at self-dual level, i.e. $k = -h^{\vee} + 1$, usually denoted by $\mathcal{W}_n$. Now these VOAs can be realized as subalgebras inside the rank $n$ Heisenberg (free boson) VOA. It can also be realized as the intersection of the kernels of screening operators, $\mathcal{W}_n \cong \cap\, \mathrm{Ker}\, Q_{\alpha}$, where the screening operators are obtained as integrals of vertex operators for every simple root $\alpha$ of $\mathfrak{g}$ (scaled by some number $k_{\alpha}$), $Q_{\alpha} = \int \exp\left(k_{\alpha}\alpha\right).$

In particular these screening operators map highest weight Fock space states to singular vectors of $\mathcal{W}_n$. I am interested in a "twisted" generalization of this picture. For simplicity let $\mathfrak{g} = \mathfrak{sl}_3$. Let $H$ be a rank 2 Heisenberg algebra. Then we can construct a $\mathbb{Z}_2$-twisted Fock module $M$ (see for example Doyon for the definition of twisted modules) generated by integer and half-integer modes $\{J_n\}_{n \in \mathbb{Z}/2}$ instead of just integer modes.

Question: Is there a generalization of the above picture for twisted modules?

1. Can I construct screening operators from twisted vertex operators, and in what sense will these yield singular vectors?
2. Does the intersection of these twisted screenings produce a free-field realization of $\mathcal{W}_3$ inside the Heisenberg algebra?

## mesh gets twisted wildly when trying to use Unreal's mannequin's skeleton

I have modelled a simple mannequin and made a skeleton for it in Blender. As far as I can judge, this skeleton copies Unreal's standard mannequin's skeleton perfectly… All the hierarchy and bone names are the same, and Unreal also does not complain when I import this mesh and use Unreal's skeleton asset for it. However, when I try to play a preview animation on my mesh, it gets twisted wildly.

This is a normal state: … and this happens when I play an animation on this asset:

I thought it was due to different joint initial transforms, so I tried exporting from Blender with varying bone axes (x-axis along the bone, z-axis along the bone etc.) but it did not help. There is neither improvement nor mere difference when I change that. Can you tell me possible reasons?

Start with a group $G$ that acts on a set $X$, and a second group $H$.
We want to consider functions $\varphi: G \times X \to H$ such that $\varphi(gg', x) = \varphi(g, g'x)\,\varphi(g', x)$ for all $g$, $g'$, and $x$.

Note that if $X$ is a singleton (or if $G$ acts trivially), then $\varphi$ is essentially just an ordinary group homomorphism. Is there a name for a function like this?
2021-06-23 06:22:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 108, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9154203534126282, "perplexity": 428.55662844419453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488534413.81/warc/CC-MAIN-20210623042426-20210623072426-00381.warc.gz"}
https://socratic.org/questions/what-does-it-mean-for-a-linear-system-to-be-linearly-independent
# What does it mean for a linear system to be linearly independent?

Oct 22, 2015

Consider a set $S$ of finite-dimensional vectors $S = \{v_1, v_2, \ldots, v_n\} \subset \mathbb{R}^n$, and let $\alpha_1, \alpha_2, \ldots, \alpha_n \in \mathbb{R}$ be scalars.

Now consider the vector equation

$\alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n = 0.$

If the only solution to this equation is $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$, then the set $S$ of vectors is said to be linearly independent. If, however, other solutions to this equation exist in addition to the trivial solution where all the scalars are zero, then the set $S$ of vectors is said to be linearly dependent.
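As a quick numerical illustration (an editorial addition, not part of the original answer): the vectors are linearly independent exactly when the matrix with the $v_i$ as columns has rank $n$, which is easy to check with NumPy:

```python
import numpy as np

def is_linearly_independent(vectors):
    """True iff the only solution of a1*v1 + ... + an*vn = 0 is a1 = ... = an = 0."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
print(is_linearly_independent([(1, 0, 0), (2, 0, 0)]))             # False: v2 = 2*v1
```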
2022-06-27 05:40:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9093851447105408, "perplexity": 192.81058941880224}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103328647.18/warc/CC-MAIN-20220627043200-20220627073200-00065.warc.gz"}
http://mathoverflow.net/questions/66749/schur-multipliers-over-non-algebraically-closed-ground-fields
# Schur multipliers over non-algebraically closed ground fields?

Recently some arithmetic dynamicists came to town, bringing with them some interesting problems in arithmetic geometry. I started thinking a bit about one of their problems, and it got me wondering about Schur multiplier groups over an arbitrary field.

Traditionally, if $G$ is a group -- let us say it is finite -- then the Schur multiplier group $M(G)$ is $H^2(G,\mathbb{C}^{\times})$, i.e., group cohomology, with $\mathbb{C}^{\times}$ viewed as a trivial $G$-module. This group is also Brauer-like in that it measures obstructions to lifting projective representations of $G$ -- i.e., homomorphisms $\rho: G \rightarrow \operatorname{PGL}_N(\mathbb{C})$ -- to honest representations of $G$ -- i.e., homomorphisms $\tilde{\rho}: G \rightarrow \operatorname{GL}_N(\mathbb{C})$. It is not hard to see that you don't actually need to work over $\mathbb{C}$: if $\# G = n$, you can work over any field $K$ such that $K^{\times} = K^{\times n}$ and which contains primitive $n$th roots of unity.

But now suppose I have an arbitrary ground field $K$ and a homomorphism $\rho: G \rightarrow \operatorname{PGL}_N(K)$, and I am wondering whether it lifts to a representation of $G$. What is the theory of this? Two basic questions:

1) Is it still true that the appropriate group to look at is $M_K(G) = H^2(G,K^{\times})$?

Added: Let me sharpen this question. The answer below shows that a projective representation gives rise to a class in $M_K(G)$ no matter what the ground field may be. But in the classical case the converse is also true: every element of $M_{\overline{K}}(G)$ arises in this way from a projective representation, uniquely up to projective equivalence. Does that still hold over an arbitrary ground field? I am a bit skeptical at the moment...

2) If the answer to 1) is yes, then it seems that the theory will have a much different flavor over an arbitrary field. (Here I say arbitrary, but I am quite willing to assume for the moment that the characteristic of $K$ does not divide the order of $G$, so that we are in the setting of classical representation theory. This assumption will be in force in what follows.) For instance, if $G$ is cyclic of order $n$, then $M_K(G) \cong K^{\times}/K^{\times n}$. This means that over something like a number field there will be many projective representations of finite cyclic groups which do not lift. I think this is correct. In particular, I believe that for the cyclic group of order $2$, the map $G \rightarrow \operatorname{PGL}_2(K)$ associated with the order $2$ linear fractional transformation $z \mapsto \frac{\alpha}{z}$ is liftable to $\operatorname{GL}_2(K)$ iff $\alpha \in K^{\times 2}$.

On the other hand, I would like to deduce from the theory of "rational Schur multiplier groups" facts like the following: if $G$ is cyclic of odd order $n$ then every projective representation $\rho: G \rightarrow \operatorname{PGL}_2(K)$ lifts to a representation. (Again, in this case, if I am not mistaken, this can be shown by hand without much trouble, but I would like to see it come out of some general Schur-like theory.) In particular, are there examples of computations of $M_K(G)$ in the literature for simple easy finite groups $G$, as there are for the usual $M(G)$?

- It seems to me that (in a general dimension $d$), life is easier if you have a homomorphism from $G$ to ${\rm PSL}(d,K)$ since you then get a factor set consisting of roots of unity in $K$.
In other words, it may be that $K^{\times}/(K^{\times})^{d}$ has special relevance for the problem. – Geoff Robinson Jun 2 '11 at 17:30

For your added question, I just noticed a cheap answer in Machi's new group theory text: take $N=|G|$ and let $\rho$ be the regular representation, twisted by $\alpha \in Z^2(G,K^\times)$, that is $e_g \cdot e_h = \alpha(g,h) e_{gh}$ where $\{e_g : g \in G\}$ is a basis of $K^G$. Then $\rho$ is a projective representation inducing $\bar \alpha \in H^2(G,K^\times)$. I've been wondering about associated covering groups in math.stackexchange.com/questions/423814/… – Jack Schmidt Jun 18 '13 at 18:17

As far as 1) is concerned: Exercise 6.6.5 in Weibel's Introduction to Homological Algebra book: For any field k and any n, let $\gamma$ denote the class in $H^2(PGL(n,k), k^*)$ corresponding to the extension $$1 \rightarrow k^* \rightarrow GL(n,k) \rightarrow PGL(n,k) \rightarrow 1$$ If $\rho : G \rightarrow PGL(n,k)$ is a projective representation, show that $\rho$ lifts to a linear representation $G \rightarrow GL(n,k)$ if and only if $\rho^{\ast}(\gamma) = 0$ in $H^2(G,k^{\ast})$. Here, $\rho^{\ast}$ is the obvious map from $H^2(PGL(n,k), k^{\ast})$ to $H^2(G, k^*)$ induced by $G \rightarrow PGL(n,k)$. -

I'm having trouble with the TeX in the above post for some reason...something screws up when I try to put some things on the same line towards the end of the post, and it doesn't come out right. – Moshe Adrian Jun 2 '11 at 17:36

@Moshe: thanks for the reference. As I think I implied above, I am not exactly surprised by this, but it's very useful to have something in print... – Pete L. Clark Jun 2 '11 at 17:51

No problem. By the way, I should have mentioned: to prove this exercise, one considers the pullback of the two morphisms $G \rightarrow PGL(n,k)$ and $GL(n,k) \rightarrow PGL(n,k)$. The details are in exercise 6.6.4, the one right before 6.6.5 :-) – Moshe Adrian Jun 2 '11 at 17:58

@Moshe: the problem is almost certainly the asterisk *, which is trying to put things in italics. You can always escape the problem by protecting all math with backticks, but at least on some browsers this has the unwanted effect of changing the font-size of the math. You can also get into the habit of using \ast in place of *. – Theo Johnson-Freyd Jun 2 '11 at 21:52

@Theo: Thanks, I fixed it! – Moshe Adrian Jun 2 '11 at 22:25
2014-08-20 08:47:23
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9411515593528748, "perplexity": 153.11095247808584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500801235.4/warc/CC-MAIN-20140820021321-00055-ip-10-180-136-8.ec2.internal.warc.gz"}
https://socratic.org/questions/an-isosceles-triangle-has-sides-a-b-and-c-such-that-sides-a-and-b-have-the-same--45
# An isosceles triangle has sides A, B, and C, such that sides A and B have the same length. Side C has a length of 56 and the triangle has an area of 112. What are the lengths of sides A and B?

May 12, 2018

$20 \sqrt{2}$

#### Explanation:

Let's relabel and say triangle ABC has $a = b$, $c = 56$, and area $A = 112$.

Archimedes' Theorem is a modern version of Heron's Formula:

$16 A^2 = 4 a^2 c^2 - \left(b^2 - a^2 - c^2\right)^2$

With $a = b$ the parenthesis reduces to $-c^2$, so:

$16 {\left(112\right)}^2 = 4 a^2 {\left(56\right)}^2 - 56^4$

$200704 = 12544\, a^2 - 9834496$

$a^2 = \frac{10035200}{12544} = 800$

$a = \sqrt{800} = 20 \sqrt{2}$
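As a sanity check (an editorial addition), the answer can be verified from the height of the isosceles triangle onto its base:

```python
import math

a = 20 * math.sqrt(2)   # the two equal sides
c = 56                  # the base

# Height onto the base, by Pythagoras on half the isosceles triangle:
h = math.sqrt(a**2 - (c / 2)**2)   # sqrt(800 - 784) = 4
area = 0.5 * c * h

print(h, area)  # 4.0 112.0 -- matches the given area
```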
2022-05-21 00:33:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.861656904220581, "perplexity": 368.29875973437805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534693.28/warc/CC-MAIN-20220520223029-20220521013029-00090.warc.gz"}