http://apppm.man.dtu.dk/index.php/Financial_appraisal_in_construction
# Financial appraisal in construction
Developed by Apostolos Bougas
Figure 1: Stages of a project process
Financial appraisal in construction is one of the most important and earliest stages of the construction project planning process. The purpose of the financial appraisal is to determine whether the project is worthwhile by comparing its costs with its expected benefits. It is a key element in deciding whether or not to proceed with a construction project, or any project in general, and in choosing between alternative construction projects. Financial appraisal addresses not only the adequacy of funds but also the financial viability of the project, estimating whether, and when, the project returns a profit. This article first analyzes the elements, construction costs, and overall costs and benefits of a construction project that the reader has to be familiar with in order to understand the costs that need to be considered in a financial appraisal. Then, some methods of financial appraisal are presented, highlighting the Net Present Value, one of the most common criteria for project decision and selection. Finally, the application of these methods and the considerations surrounding the financial appraisal of construction are discussed, referring also to strategic misrepresentation and optimism bias, challenges which lead to false financial appraisals and have to be overcome.
# Introduction
## Background
### Construction Costs[1]
The construction costs consist of the initial investment that the client-investor has to pay to the contractor for the building or structure. The construction costs need to be estimated as precisely as possible in the early design stage of the project, so that the overall financial appraisal of the construction can be carried out. The largest share of the construction costs consists of the purchase of land and the actual construction works of the new facilities. As the design stage progresses, the construction costs can be estimated more accurately. However, the costs should not overrun the budget from the early estimation, as this could lead to a negative overall return on the project and sometimes to its cancellation. The estimation of the construction costs needs to include the following major cost elements:
• Time, as an element of labor and construction activities
• Materials
• Capital equipment, such as machinery
• Indirect expenses, such as transportation and training
• Contingency cost[2]
• VAT[2]
### Overall costs and benefits of construction projects[1]
The overall costs and benefits of the construction project need to be evaluated and calculated for its financial appraisal. This appraisal covers costs from the inception of the project through to lifecycle costs and income. Their precise identification confirms whether the project will pay back the initial client investment during its lifetime. Costs and benefits that have to be taken into consideration when justifying the investment in the project are:
• Purchase of land
• Construction works
• Professional fees
• Income from renting, sale, or exploitation of the construction
• Cost of finance (interest rate fluctuations due to inflation)
• Environmental cost
• Contingency cost [2]
## Financial appraisal
The purpose of the financial appraisal is to establish that the return will exceed the estimated construction cost of the project. Through this procedure, the investment capital or the resources required for the construction of the project can be estimated, after assumptions about the utility and the benefits of the asset have been determined. In the construction industry, the problem of deciding the amount of investment and the allocation of resources to the project is in fact the capital budgeting problem. The benefits from the project are estimated, and then the client-investor can estimate the budget necessary for the completion of the project and decide whether to proceed with the construction or abandon it. [3] The financial appraisal, which acts, as mentioned above, as a decision-making tool for the client-investor, depends mostly on:
• The size of the potential construction project
• The time period over which the cost and benefits are going to be calculated
In this article, the following methods for the financial appraisal of projects are presented:
• Payback period
• Net present value
• Internal Rate of Return
# Methods
The return on a project can be estimated and presented through the following capital budgeting techniques:
## Payback analysis – payback period
Video 1: Payback Analysis
The payback period is the exact length of time needed for a company/client to recover its initial investment, as calculated from the cash flows. The income that will be generated by a construction project is compared with the overall cost of the project (construction cost, maintenance cost, etc.). For example, an initial investment of 50 million DKK will be paid back in 10 years if the cash inflows are 5 million DKK each year. Payback analysis is the least precise of the capital budgeting techniques, because the time value of money is not taken into consideration. As a result, payback analysis can produce inaccurate results when it is used for longer investments, as inflation may rise significantly over that time [4]. Furthermore, it ignores the lifecycle costs of the project, so it does not stand as a good financial model for construction projects. [3]
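The mechanics can be sketched in a few lines of Python (a minimal illustration of the technique; the function and variable names are not from the source):

```python
def payback_period(initial_investment, annual_inflows):
    """Return the first year in which cumulative inflows cover the
    initial investment, or None if it is never recovered.
    Note: deliberately ignores the time value of money, as described above."""
    cumulative = 0.0
    for year, inflow in enumerate(annual_inflows, start=1):
        cumulative += inflow
        if cumulative >= initial_investment:
            return year
    return None

# The article's example: 50 million DKK recovered at 5 million DKK per year.
print(payback_period(50e6, [5e6] * 10))  # -> 10
```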
## Net Present Value
### Discounting
The value of money today is greater than the value of the same amount of money at a future point in time. [5] Discounting is the set of techniques applied to calculate this time value of money. [3] The basis of discounting is the comparison between the value of the return of an investment and the value of the same amount of money deposited in a bank account for the same period of time. Discounting thus considers the opportunity cost of the project. [2]
### Discounted cash flow
Cash flows of a construction project at future points in time need to be discounted to their present value so that it can be determined whether or not the project is worth investing in. More specifically, the outflow of cash necessary for the investment in the construction project, i.e. the project cost, is compared with the discounted inflow of cash, i.e. the benefits arising from the exploitation of the project. This concept is referred to as discounted cash flow.
Future values are discounted to present values according to the formula $PV=\frac{FV_t}{(1+k)^t}$
where,
$PV$ is the present value
$FV_t$ is the future value of a cash inflow or outflow t years hence
$k$ is the discount rate
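As a sketch of the formula in code (a minimal illustration; the function name is not from the source):

```python
def present_value(fv_t, k, t):
    """Discount a future cash flow FV_t received t years hence at rate k."""
    return fv_t / (1 + k) ** t

# 100 DKK received five years from now, discounted at 4% per year:
print(round(present_value(100, 0.04, 5), 2))  # -> 82.19
```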
### Discount rate
The discount rate can be calculated with many methods. Project managers usually have a discount rate set by organizational policy, so that projects can be evaluated in financial terms. Three factors are essential for calculating the discount rate value [6]:
$a$: Interest rate: It is the rate charged for the use of capital and it is arranged between the borrower and the lender.
$b$: Inflation rate: It is the rate due to inflation, the rising of prices. The inflation rate has to be taken into consideration so that there is no reduction in purchasing power.
$c$: Minimum Attractive Rate of Return (MARR): This rate is a factor corresponding to the risks of the project, as the invested amount of money may never be repaid.
The discount rate is calculated as: $k=a+b+c$.
The effect is that the discount rate is larger than the interest rate alone; in construction companies it can sometimes reach almost 20%. [5]
### Net Present Value Calculation
Video 2: Net Present Value
The Net Present Value is a capital budgeting technique that weighs the discounted cash flows against the investment. The minimum criterion for investing in a project is that the NPV is greater than or equal to zero at a given discount rate over a specific time period in the future. [5] Since the discount rate can in some cases, according to the selection of the construction consultant manager and the project accountant, reach almost 20%, complying with the minimum criterion of an NPV equal to zero can be a very difficult and demanding task. In mathematical terms,
$NPV=\sum_{t=1}^{n}\Big[\frac{FV_t}{(1+k)^t} \Big] - II$
where,
$FV_t$ is the future value of the cash inflows t years hence
$k$ is the discount rate
$II$ is the initial investment
According to the Net Present Value method, investments should be made in projects with a positive NPV, so that the company's profit or the investor's wealth increases.
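The formula translates directly into code; a minimal sketch (names are illustrative, not from the source):

```python
def npv(initial_investment, future_values, k):
    """NPV = sum over t of FV_t / (1 + k)^t, for t = 1..n, minus II."""
    discounted_inflows = sum(fv / (1 + k) ** t
                             for t, fv in enumerate(future_values, start=1))
    return discounted_inflows - initial_investment

# Decision rule: invest only if npv(...) >= 0 at the chosen discount rate.
```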
#### Example
A highway department is considering constructing a temporary bridge to cut travel time during the three years it will take to build a permanent bridge. The temporary bridge can be manufactured in a few weeks at a cost of DKK 750,000. At the end of three years it would be removed, at a cost of DKK 81,000. Based on estimated time savings, wage rates, fuel savings, and reductions in accident risk, the department analysts predict that the benefits would be DKK 275,000 during the first year, DKK 295,000 during the second year, and DKK 315,000 during the third year. Departmental regulations require the use of a real discount rate of 4%. All costs and benefits are in real DKK.
Solution
The present value of the costs is calculated first. This includes the construction cost of the temporary bridge, which occurs at the beginning of year 1, and the cost of decommissioning the bridge at the end of year 3. The present value of the benefits is also calculated. The decision rests on the Net Present Value (NPV): if the NPV is greater than or equal to zero, the temporary bridge yields a profit on the investment and should be built.
$NPV=\frac{275000}{(1+0.04)^1}+\frac{295000}{(1+0.04)^2}+\frac{315000}{(1+0.04)^3}-750000-\frac{81000}{(1+0.04)^3} \approx -4{,}808 < 0$
So the bridge should not be constructed.
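The example can be checked with a few lines of Python, using the source's figures (the removal cost is netted against the year-3 benefit, exactly as in the calculation above):

```python
# Year-by-year net cash flows: the DKK 81,000 removal cost is a negative
# flow at the end of year 3.
flows = [275_000, 295_000, 315_000 - 81_000]
pv_of_flows = sum(cf / 1.04 ** t for t, cf in enumerate(flows, start=1))
print(round(pv_of_flows - 750_000, 2))  # -> -4807.69, i.e. about DKK -4,808
```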
## Internal rate of return
The internal rate of return (IRR) is a more complex capital budgeting technique and more difficult to calculate than the Net Present Value (NPV). The internal rate of return is the discount rate for which the NPV is equal to zero. In other words, it is the discount rate for which the discounted cash inflows are equal to the initial investment. In mathematical terms,
Video 3: Internal Rate of Return
$\sum_{t=1}^{n}\Big[\frac{FV_t}{(1+IRR)^t} \Big] - II = 0$
where,
$FV_t$ is the future value of the cash inflows t years hence
$IRR$ is the Internal Rate of Return
$II$ is the initial investment
The calculation of the IRR is done mathematically through a number of iterations, based on trial and error: a variety of discount rates are tried, gradually converging on the value of the IRR for which the NPV is equal to zero.
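This trial-and-error search can be written as a simple bisection; a minimal sketch (the names and the bracketing interval are illustrative, not from the source):

```python
def npv_at(rate, initial_investment, future_values):
    """NPV of the cash inflows FV_1..FV_n at the given discount rate."""
    return sum(fv / (1 + rate) ** t
               for t, fv in enumerate(future_values, start=1)) - initial_investment

def irr(initial_investment, future_values, lo=0.0, hi=1.0, tol=1e-9):
    """Bisect on the discount rate until NPV = 0.
    Assumes NPV falls as the rate rises and that the IRR lies in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv_at(mid, initial_investment, future_values) > 0:
            lo = mid  # NPV still positive: the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Example: invest 1,000, receive 600 in each of two years -> IRR is about 13.07%.
print(round(irr(1_000, [600, 600]), 4))
```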
## NPV vs IRR
Figure 2: Multiple Benefit and Payment Points
For most projects, NPV and IRR provide the same decision or ranking between different projects. However, differences in the assumptions behind these methods can cause projects to be ranked differently. [6]
• IRR represents a rate of profit, not a size of profit
• IRR calculation does not require a discount rate assumption or calculation
• IRR cannot cope with rapid changes of the discount rate over time [3]
• IRR may not be unique when there are multiple benefit and payment points (see Figure 2)
If two projects have the same IRR, the NPV of the projects is compared and the project with the higher NPV is chosen. The best way to decide between or compare different projects is to use NPV and IRR together, in order to calculate both the size and the rate of the return, respectively. However, NPV is the only criterion that ensures wealth maximization. [6]
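The rate-versus-size distinction can be made concrete with two projects (the numbers below are invented for illustration and are not from the source):

```python
def npv_at(rate, initial_investment, future_values):
    return sum(fv / (1 + rate) ** t
               for t, fv in enumerate(future_values, start=1)) - initial_investment

# Project A: invest 1,000, receive 1,300 after one year  -> IRR = 30%.
# Project B: invest 10,000, receive 11,500 after one year -> IRR = 15%.
print(round(npv_at(0.10, 1_000, [1_300]), 2))    # ->  181.82
print(round(npv_at(0.10, 10_000, [11_500]), 2))  # ->  454.55
# A has the higher IRR (rate of profit), B the higher NPV (size of profit),
# so the two criteria rank the projects differently.
```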
# Application
In the past, only skilled accountants could carry out the financial appraisal of a construction project. Although applying these methods still presents some difficulties, particularly in capturing all the costs and benefits, they are now available not only in most financial appraisal systems but also in many spreadsheets, such as Lotus and Excel, with built-in functions, so that nearly everyone can perform a financial analysis. [3] This gives clients, investors, and consultant companies the possibility of performing an analysis informally. Project and construction managers can build financial models that can be interpreted and evaluated by non-experts. This is a key step towards the innovation of construction and project management processes, as consultants can easily and quickly evaluate the worth of their ideas in financial terms. [3]
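The spreadsheet-style NPV and IRR functions also have open-source counterparts; a brief sketch using the third-party numpy-financial package (assuming it is installed; note that numpy_financial.npv, unlike the formula above, treats the first cash flow as occurring at t = 0):

```python
import numpy_financial as npf

# The bridge example again, with the initial outlay at t = 0 and the
# removal cost netted against the year-3 benefit.
flows = [-750_000, 275_000, 295_000, 315_000 - 81_000]
print(round(npf.npv(0.04, flows), 2))  # -> about -4807.69
print(npf.irr(flows))                  # discount rate where NPV = 0 (under 4% here)
```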
# Considerations
## NPV calculation
The calculation of the NPV requires a very accurate estimation of the cash inflows and outflows, especially a precise calculation of the potential monetary benefits that will arise from the project's completion. For construction projects in particular this is a very difficult task, as the NPV calculation is a quantitative method [6] and it is not possible to value all the benefits in terms of money. For instance, in many cases the client/investor is in the not-for-profit sector, such as the government.
## Market prices
The market prices, which act as a guide for the calculation of the costs and benefits of the project, may not be appropriate due to inflation and market failure. [2] A return on a construction project is not something which should always be taken for granted. [3] Also, some elements of the costs and benefits of construction projects are not quantitative, so they are very difficult to value. Damage to the environment has to be considered on the cost side, while on the benefit side increased safety of the users or other macroeconomic impacts, such as the regeneration of a whole city area due to construction projects, have to be considered. For this reason, shadow prices, which represent the willingness to pay for obtaining one more unit of an asset, have been established, such as the value of a fatality. These values have to be assigned if a full cost-benefit analysis is to be carried out for a project. [2]
## Uncertainty
Both the costs and the benefits of an investment construction project are uncertain. Concerning the benefits of a project: [2]
• There is uncertainty about the operational cost of the project.
• The construction project (facility) has to operate as planned so that the estimated benefits can be realized.
• A late project delivery changes the income stream. This may lead to missed market opportunities, eventually reducing the potential benefits of the project.
Uncertainty also has to be taken into account on the cost side of the project: [2]
• The cost of the project (investment) may be higher than expected (budget overrun).
• Late delivery of a facility (building, railway, road, etc.) results in more operational costs for the existing facility (for instance, continued maintenance of an old road).
# Challenges
## Optimism Bias
Research has shown that 'there is a demonstrated systematic tendency for project appraisers to be overly optimistic' [2]. People tend to judge future events in a more positive and optimistic way than past and actual experience would instruct. Psychological reasons are the root of this behavior. In addition, when project managers estimate the financial aspects of a construction project, they overestimate benefits and underestimate costs. Optimism bias is not intentional; it is, in fact, a form of self-deception. [7]
## Strategic misrepresentation of financial appraisal
Research has shown that decision-makers and project managers may act in bad faith when estimating financial project appraisals. Strategic misrepresentation is closely related to the NPV calculation. One of the main uses of the NPV calculation is to provide a tool for choosing the most profitable project. As the NPV calculation projects costs and benefits into the future, some benefits and costs are judgements, not facts. Also, some benefits cannot be interpreted in real prices and are instead based on shadow pricing techniques. Inevitably, the NPV calculation is to some degree subjective, a fact that managers who want to promote certain construction projects tend to take advantage of. Research has also shown that strategic misrepresentation happens not only in private projects but also in the public sector, as construction projects are interpreted as an action for collecting votes. Strategic misrepresentation can be traced to political and organizational pressures, for instance competition for obtaining funds or for a governmental position, and to a lack of incentive alignment. Consequently, strategic misrepresentation is one of the major factors behind budget overruns and inaccurate financial appraisals in the construction industry. [2]
# References
1. The Construction Learning Resources: Financial Appraisal, Procurement and Payments [URL: http://www.constructionsite-resources.org/page_12.html]. Retrieved on 22 July 2017
2. Winch, G. M. (2010), "Managing Construction Projects". Second edition
3. Maylor, H. (2010), "Project Management". Fourth edition
4. Investopedia: Net Present Value [URL: http://www.investopedia.com/terms/n/npv.asp]. Retrieved on 22 June 2017
5. Kerzner, H. (2006), "Project Management: A Systems Approach to Planning, Scheduling and Controlling". Ninth edition
6. Lee, S. (2007), "Project Management 2007: Project Financing & Evaluation". Department of Civil and Environmental Engineering, Massachusetts Institute of Technology [URL: http://web.mit.edu/course/1/1.040/www/docs_lectures/Lecture_2_Project_Financing_&_Evaluation.ppt]. Retrieved on 21 June 2017
7. Flyvbjerg, B. (2008), "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice", European Planning Studies, Vol. 16, No. 1
# Annotated bibliography
1. Winch, G. M. (2010), "Managing Construction Projects". Second edition
Summary: The book presents a holistic approach to construction management. The basic principles of construction project management are presented along with different tools and techniques that aim to enhance construction performance and provide innovative techniques. The use of information and communication technologies is also a point of interest in the book.
2. Kerzner, H. (2006), "Project Management: A Systems Approach to Planning, Scheduling and Controlling". Ninth edition
Summary: The book illustrates the basic principles of project management. It targets enhancing the project skills not only of students but also of executives, pointing out that project management is relevant to every profession apart from engineering, including information systems and business.
3. Maylor, H. (2010), "Project Management". Fourth edition
Summary: This book covers all the topics of project management, focusing on the theory as well as on the application and usage of the ideas discussed in it. The book places great emphasis on the 4-D model of the project.
4. Flyvbjerg, B. (2008), "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice", European Planning Studies, Vol. 16, No. 1
Summary: This paper presents the method and illustrates the first instance of reference class forecasting. The paper addresses the inaccuracy of financial appraisal in construction planning, explains it in terms of optimism bias and strategic misrepresentation, and presents a method of eliminating this inaccuracy.
https://www.dsemth.com/static/question-sample/partial-variation-in-quadratic-form
### Partial variation in quadratic form
It is given that $f(x)$ is the sum of two parts: one part varies as $x$ and the other part varies as $x^2$. Suppose that $f(3) = -30$ and $f(9) = -36$.
(a) Find $f(x)$.
(b) Solve the equation $f(x) = -30$.
Solution:
(a) Let $f(x) = hx + kx^2$, where $h$ and $k$ are non-zero constants. (1A)
Since $f(3) = -30$ and $f(9) = -36$, (1M)
$h(3) + k(3)^2 = -30$ and $h(9) + k(9)^2 = -36$
$3h + 9k = -30$ and $9h + 81k = -36$
$h + 3k = -10 \quad \ldots(1)$
$h + 9k = -4 \quad \ldots(2)$
$(2) - (1)$: $6k = 6$, so $k = 1$.
Substitute $k = 1$ into $(1)$: $h + 3(1) = -10$, so $h = -13$.
∴ $h = -13$ and $k = 1$, and therefore $f(x) = -13x + x^2$ (1A)
(b) $f(x) = -30$
$-13x + x^2 = -30$ (1M)
$x^2 - 13x + 30 = 0$
$(x - 3)(x - 10) = 0$
∴ $x = 3$ or $x = 10$ (1A)
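The constants h and k, and the roots in part (b), can be checked numerically; a small Python sketch (not part of the original marking scheme):

```python
import numpy as np

# f(x) = h*x + k*x^2 with f(3) = -30 and f(9) = -36.
A = np.array([[3.0, 9.0],     # coefficients of (h, k) from f(3) = -30
              [9.0, 81.0]])   # coefficients of (h, k) from f(9) = -36
h, k = np.linalg.solve(A, np.array([-30.0, -36.0]))
print(h, k)                    # -> -13.0 and 1.0 (up to floating point)

# Part (b): solve k*x^2 + h*x + 30 = 0, i.e. x^2 - 13x + 30 = 0.
print(np.roots([k, h, 30.0]))  # -> [10.  3.]
```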
https://physicscatalyst.com/class-7/rational-number-questions.php
# Rational numbers worksheet for Class 7
## True or false
Question 1
(a) Every natural number is a rational number, but every rational number need not be a natural number
(b) Zero is a rational number.
(c) Every rational number is a whole number.
(d) Two rational numbers with different numerators can't be equal.
(e) Every fraction is a rational number.
(f) Sum of two rational numbers is always a rational number.
(g) The rational number $\frac {-3}{5}$ lies to the right of zero on the number line.
(h) Every natural number is a rational number, but every rational number need not be a natural number.
(i) $\frac {2}{4}$ is equivalent to $\frac {4}{8}$
(j) The rational numbers $\frac {-11}{-12}$ and $\frac {-7}{8}$ are on the opposite sides of zero on the number line
## Fill in the blanks
Question 2.
(a) A rational number $\frac{p}{q}$ is said to be in the lowest form if p and q have no __________
(b) The rational number $\frac {- 12}{-17}$ Lies to the ____________ of zero on the number line.
(c) If $\frac{p}{q}$ is a rational number, then q can't be ___________
(d) Two rational numbers with different numerators are equal, if their numerators are in the same ______________ as their denominators.
(e) Two rational numbers are equal if they have the same __________ form.
(f) A rational number $\frac{p}{q}$ is negative if p & q are of __________ sign.
(g) If the product of two non-zero rational number is 1, then they are ______________ of each other.
(h) Between any two distinct rational numbers there are ___________ rational numbers.
(i) Additive inverse of $\frac {2}{3}$ is ______.
(j) The reciprocal of ______ does not exist.
## Numerical questions
Question 3.
By what number should we multiply $\frac {-8}{15}$ so that the product is 24?
Question 4.
What should be subtracted from $\frac {-3}{4}$ so as to get $\frac{5}{9}$ ?
Question 5.
Subtract $\frac {-3}{8}$ from $\frac {-5}{7}$
Question 6.
The cost of $4\frac{1}{2}$ meters of cloth is Rs. $85\frac{1}{2}$. Find the cost of one meter of cloth.
Question 7.
Simplify $(\frac {-5}{8} \times \frac {3}{7} \times \frac {4}{-15})+(\frac {4}{7} \times \frac {-21}{8})$.
Question 8.
A stairway consists of 14 stairs, each $32\frac{5}{7}$ cm high. What is the vertical height of the stairway?
Question 9.
Arrange the rational numbers $\frac {-7}{10}$, $\frac {5}{-8}$, $\frac {2}{-3}$ in ascending order.
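Exact-arithmetic checks of the numerical questions are easy with Python's fractions module; a small sketch for Questions 3 and 4 (the worked answers are mine, not part of the worksheet):

```python
from fractions import Fraction

# Question 3: find x such that (-8/15) * x = 24, so x = 24 / (-8/15).
x = Fraction(24) / Fraction(-8, 15)
print(x)  # -> -45

# Question 4: find y such that -3/4 - y = 5/9, so y = -3/4 - 5/9.
y = Fraction(-3, 4) - Fraction(5, 9)
print(y)  # -> -47/36
```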
## Multiple Choice Questions
Question 10.
Which of the following rational numbers is equal to its reciprocal?
(a) 1
(b) 2
(c) $\frac {1}{2}$
(d) 0
Question 11.
Which is the greatest number among the following:
(a) $\frac {-1}{5}$
(b) 0
(c) $\frac {1}{5}$
(d) -5
Question 12.
Which is the lowest number among the following:
(a) $\frac {-1}{2}$
(b) 0
(c) $\frac {1}{2}$
(d) -2
Question 13.
Match the Column
(a) a -> ii , b -> iii , c -> iv , d -> i
(b) b -> ii , a -> iii , c -> iv , d -> i
(c) a -> ii , b -> iii , c -> i , d -> iv
(d) a -> i , b -> iii , c -> iv , d -> ii
Question 14.
To reduce a rational number to its standard form, we divide its numerator and denominator by their
(a) LCM
(b) HCF
(c) product
(d) multiple
https://www.wikidata.org/wiki/Wikidata_talk:WikiProject_Chemistry
# Wikidata talk:WikiProject Chemistry
Old discussions are archived in Archive 2013, Archive 2014, Archive 2015, Archive 2016, Archive 2017.
## Auxiliary matching for the COSING Cosmetic Dataset
Hi Alchemists,
After getting a property for COSING created, I've imported the COSING IDs for matching in Mix N'Match. COSING is a huge cosmetic dataset used EU wide (and beyond). It has a lot of interesting information about stuff we put daily on our bodies, and that is essential to parse cosmetic ingredient lists for Open Beauty Facts.
Some automatic matches have been made by M&M, but it would increase reliability of matches if we could use the following columns as sanity checks, and perhaps to automate matching.
INCI name, INN name, Ph. Eur. Name, CAS No, EC No
They are available as open data on the EU Open Data Portal (https://data.europa.eu/euodp/data/dataset/cosmetic-ingredient-database-ingredients-and-fragrance-inventory/resource/33aa4726-d05c-4756-ad91-6c6297de9771) , and in the target page on the EU Commission website (http://ec.europa.eu/growth/tools-databases/cosing/index.cfm?fuseaction=search.details_v2&id=74153)
A step further would be to import the additional columns as statements (e.g. HUMECTANT, MASKING, SKIN CONDITIONING, SOLVENT properties, SCCS opinions…)
Poke @Snipre @Magnus Manske -
Notified participants of WikiProject Chemistry Teolemon (talk) 19:55, 1 January 2018 (UTC)
@Teolemon: Optimal importation process is the following:
1) extract WD data in the format Q number/CAS number/EC number in a table A
2) extract EU data in the format INCI name/INN name/Ph. Eur. Name/CAS number/EC number in table B
3) match lines in tables A and B having both same CAS number/EC number and create a new table C with the format Q number/CAS number/EC number/INCI name/INN name/Ph. Eur. Name/function (eg: HUMECTANT, MASKING, SKIN CONDITIONING,SOLVENT properties, SCCS opinions…)
4) match lines in tables A and B having CAS number or EC number but not both and create a new table D with the format Q number/CAS number/EC number/INCI name/INN name/Ph. Eur. Name/function (eg: HUMECTANT, MASKING, SKIN CONDITIONING,SOLVENT properties, SCCS opinions…)
5) Table D has to be checked in order to verify if the WD items are correctly defined and missing or wrong data have to be added/corrected. Once completed/corrected lines of Table D can be added to Table C.
6) Table C can be imported after bot request
7) Table C has to be saved somewhere and be used as a reference for periodic data checks in WD to identify vandalism or wrong data handling. Snipre (talk) 23:58, 2 January 2018 (UTC)
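A sketch of steps 1-4 with pandas (the table and column names are illustrative assumptions, not from this thread):

```python
import pandas as pd

# Table A: Wikidata extract; Table B: EU COSING extract (illustrative columns).
table_a = pd.DataFrame(columns=["qid", "cas", "ec"])
table_b = pd.DataFrame(columns=["inci", "inn", "ph_eur", "cas", "ec", "function"])

# Step 3: rows agreeing on BOTH the CAS and EC numbers -> table C.
table_c = table_a.merge(table_b, on=["cas", "ec"], how="inner")

# Step 4: rows matching on CAS or EC but not both -> table D (manual review).
cas_only = table_a.merge(table_b, on="cas", suffixes=("", "_b")).query("ec != ec_b")
ec_only = table_a.merge(table_b, on="ec", suffixes=("", "_b")).query("cas != cas_b")
table_d = pd.concat([cas_only, ec_only]).drop_duplicates()
```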
Thanks a lot. Daunting task, but at least the plan is clear :) Teolemon (talk) 12:52, 23 January 2018 (UTC)
The COSING database currently comprises 25,937 ingredients (with INCI names) and substances (with chemical names) that are components of perfumes or subject to restrictions in the annexes of the European regulation. The INCI names are decided and defined by the INC, and COSING reports only those recognized in the EU, as the IECIC database reports those recognized in China. The COSING database is periodically updated; the addition of over 10,000 new ingredients dates from one year ago. The COSING number is not clearly identifying when the INCI name is missing as an identifier in Wikidata. I proposed to include the INCI name as a property.--Rodolfo Baraldini (talk) 12:37, 3 March 2018 (UTC)
## Plural or singular labels
Should labels of items like carboxylic acid (Q134856), amine (Q167198) and others that describe groups, classes, families etc. of compounds be plural or singular? At first I used plural in Polish labels, then I was convinced by someone that it would be better to use the singular. For some time, however, I have been using the plural again ;) for Polish labels, because it makes more sense (the singular does not in my language; writing in Polish that 'amine' is a 'group of...' is quite funny).
Should there be some guidelines in this case, or maybe it's a language-specific matter? Now most of the English labels are singular (like in en.wiki articles), but e.g. many Russian labels are plural. I do not know English well enough to say whether it's better to write 'amine is a class/group/family' (BTW the use of these three names could also be regulated somehow) or 'amines are a class...'. Wostr (talk) 22:56, 1 January 2018 (UTC)
@Wostr: Help:Label#Labels_in_English Snipre (talk) 19:52, 3 January 2018 (UTC)
@Snipre:, okay... but I can't find the answer on that page (or maybe you want to indicate that every language has to establish its own guidelines? — if so, it's still not clear to me whether to use plural or singular in English?). Wostr (talk) 20:03, 3 January 2018 (UTC)
@Wostr: Sorry, I didn't read the page to find the info. But this should be solved on that page. For ontology-building rules, labels have to be singular if the singular form can be used (some concepts are always plural). So as we can use the concept amine in the singular form, the singular form should be used. Better open a new section there, and if no opposition is raised then we can add this new rule to the rules list. Snipre (talk) 20:36, 3 January 2018 (UTC)
• note: ChEBI uses plural for compound classes (like in [1]). Wostr (talk) 21:43, 4 January 2018 (UTC)
• I think the wikidata (English) label policy comes from English wikipedia: en:Wikipedia:Naming conventions (plurals). That said we should probably have our own independent statement of it. ArthurPSmith (talk) 21:55, 4 January 2018 (UTC)
• Otherwise, there will always be 1=2 (or 2=1).WP is atavism and rudiment, living by its own rules (only human-readable (Q16716513)), which will lead to its collapse (few people need the right thing, because there is a better alternative). --Fractaler (talk) 18:15, 6 January 2018 (UTC)
• There are two main types of exceptions to this rule: Articles on groups or classes of specific things – am I the only person who thinks that en.wiki articles about compound classes and groups are a violation of this? ;) But, in fact, there are some more specific guidelines – en:Wikipedia:Naming_conventions_(chemistry)#Groups_of_compounds from 2008 (however, I couldn't find any related discussion in WikiProjects Chemistry/Chemicals). Wostr (talk) 20:20, 6 January 2018 (UTC)
• note 2: Glossary of Class Names of Organic Compounds and Reactive Intermediates Based on Structure (IUPAC Recommendations 1995) uses plural names (probably to make clearer distinction between classes and compound names, e.g. pyridine vs pyridines). Wostr (talk) 19:45, 6 January 2018 (UTC)
• note 3: it seems that other wikis like de.wiki or it.wiki use plural names (en:Wikipedia_talk:WikiProject_Chemistry/Archive_26#Plural_names_for_classes_of_compounds), but this should be verified. In pl.wiki there was a short discussion, but it seems quite obvious that class names in Polish should be written in the plural. Wostr (talk) 20:20, 6 January 2018 (UTC)
• note 4: Glossary of Class Names of Polymers Based on Chemical Structure and Molecular Architecture (IUPAC Recommendations 2009) uses singular class names for polymers to enable an individual polymer within a class to be referred to by using the indefinite article, “a”. For example, poly(3-octylthiophene) is a polythiophene and polythiophene itself is also a polythiophene. That may be related to the use of instance of (P31). Wostr (talk) 20:20, 6 January 2018 (UTC)
We are creating a knowledge base, a common terminological coordinate system. With its help, we can now only say, for example: dicarboxylic acid (Q422050) is carboxylic acid (Q134856), and tricarboxylic acid (Q2823314) is carboxylic acid (Q134856). We cannot say that, for example, "dicarboxylic acid (Q422050) and tricarboxylic acid (Q2823314)" are carboxylic acid (Q134856), because we do not have "group of carboxylic acids"/"carboxylic acids". --Fractaler (talk) 13:02, 8 January 2018 (UTC)
We do not need both 'carboxylic acid' and 'carboxylic acids' items. One of them is redundant, as both items describe the same thing – a structural class of chemical compounds. I think there is only a linguistic problem with the proper grammatical number and with is a (as a synonym of instance of); in this case the use of instance of (P31) is IMHO incorrect, as the correct ones are: tricarboxylic acids < subclass of > carboxylic acids and dicarboxylic acids < subclass of > carboxylic acids. Wostr (talk) 14:24, 8 January 2018 (UTC)
Ok, what is, for example, "dicarboxylic acid (Q422050) and tricarboxylic acid (Q2823314)" by your version? --Fractaler (talk) 10:05, 9 January 2018 (UTC)
Classes of chemical compounds, of course. Wostr (talk) 14:22, 9 January 2018 (UTC)
"Dicarboxylic and tricarboxylic acids" is "classes" or "class"? "dicarboxylic and tetracarboxylic acids" is "classes" or "class"? "dicarboxylic, tetracarboxylic and heptacarboxylic acids" is "classes" or "class"? Groups or group? --Fractaler (talk) 14:43, 9 January 2018 (UTC)
There is nothing like Dicarboxylic and tricarboxylic acids right now, so discussion about this makes no sense. Dicarboxylic acids is a class and a subclass of the Carboxylic acids class. Same with Tricarboxylic acids and so on. Dicarboxylic acid = Dicarboxylic acids as a chemical class (and speaking of 1≠2 in that case would be a rhetorical figure only, as there is a linguistic problem of which version to use, not a substantive or methodological problem, and both dicarboxylic acid and dicarboxylic acids describe the same thing: a class of chemical compounds). Wostr (talk) 19:18, 9 January 2018 (UTC)
There is nothing like Dicarboxylic and tricarboxylic acids right now where? --Fractaler (talk) 20:12, 9 January 2018 (UTC)
As a WD item. (And as a separate and well-established chemical class). Wostr (talk) 20:35, 9 January 2018 (UTC)
For a long time in ru-Wikipedia I asked the WP editors: for whom do you create Wikipedia? They did not even want to think about it. And now there is a powerful competitor. Do you think that we are making WD items for Wikidata (Q2013)? We are making a knowledge base (Q593744), a common hierarchy of terms, a common frame of reference (Q184876). So it is used not only in Wikidata, but also outside it (in Internet conversations, articles, etc.). Example of use: 1) "Alice gave Bob the malic acid (malic acid (Q190143))", so Alice gave Bob (one) dicarboxylic acid (dicarboxylic acid (Q422050), 1 acid). 2) "Alice gave Bob citric and succinic (succinic acid (Q213050)) acids", so Alice gave Bob carboxylic acids (several acids, >1). Can you reproduce this message (where 2≠1, 1≠2) so that the meaning does not change if you have carboxylic acid = carboxylic acids (2=1, 1=2)? --Fractaler (talk) 09:30, 10 January 2018 (UTC)
I think that your problem is not of any connection with the title problem and I really don't understand what you're trying to achieve. Wostr (talk) 13:28, 10 January 2018 (UTC)
The task is very simple. You are trying to convey your thoughts to your interlocutor (he is a foreigner and does not understand your language). But you have a universal means, a single system of terms - Wikidata. Try to apply it. For example, replacing the words in the sentences (above) with the Wikidata items so that the foreigner correctly understands you. --Fractaler (talk) 14:28, 10 January 2018 (UTC)
English is not my first language and I also have problems using it, but I'm trying to use it as best as I can. However, I won't replace English or any other language with some pseudo-language based on P's and Q's, sorry. It seems to me that I would understand Russian better (as my first language is Polish and for a short time I studied Russian) than this kind of discussion. As for the previous comment: I don't think that we will ever have duplicated items like 'carboxylic acid' and 'carboxylic acids' (both describing a chemical class) just for the sake of some linguistic problems that are not part of chemical classification. That would be a problem for Wiktionary, and maybe in the future Wiktionaries will be integrated into/with Wikidata – that would allow automatically choosing the required grammatical number. What this topic is about is deciding (A) whether there should be some consistency in naming chemical classes between languages and (B) if so, whether the names should be plural or singular. Wostr (talk) 15:28, 10 January 2018 (UTC)
WD-language (only Q*, because P* can be replaced by Q*) is not pseudo-language. It is a common terminological reference system (ie, not only a relative, but an absolute path to the term), the next stage (after the mathematical language, semantic network (Q1045785) and etc.) of the evolution of the universal language. Mathematics to anyone who understands it, can explain why 2≠1, 1≠2. WD can show anyone that "carboxylic acid" is not "carboxylic acids", is not synonymous, is not " linguistic problem", because it is object of group (Q36809769) and group (Q16887380). object of group (Q36809769) is group (Q16887380)? And if so convenient, we can continue here). Fractaler (talk) 08:34, 11 January 2018 (UTC)
The advantage of using the singular form is the possibility of automatic text creation using the labels (#label instance is a #label class). Then, as WD will be linked to Wiktionary with the creation of new properties for the plural and the feminine, the label should be the masculine singular form. Snipre (talk) 12:36, 11 January 2018 (UTC)
I don't think I have any choice in using grammatical gender... And still there is no agreement that we should use instance of (P31) to chemical compounds at all. Wostr (talk) 15:31, 11 January 2018 (UTC)
Do we apply the grammar of the language in the grammar of the Wikidata? On what grounds? singular (Q110786)/plural (Q146786) is grammatical number (Q104083) (grammatical category of nouns, pronouns, and adjective and verb agreement that expresses count distinctions (such as "one", "two", or "three or more")). Does anyone else think that between 2 (3, 4, etc.) and 1 there is no difference? That between singular (Q110786)/plural (Q146786) there is no difference? If there are still doubts, then apply the grammatical categories in the sentences (Wikidata has sentence (non-functional linguistics) (Q41796)?). Yes, singular (Q110786)/plural (Q146786) refer to the sentence, to the full, absolute path (full object name (Q38667285), breadcrumb (Q846205))), and not to the Wikidata's "item/superitem" or "subitem/item", short object name (Q38667440). Just use this cool tool to check for a number. --Fractaler (talk) 14:05, 11 January 2018 (UTC)
@Snipre, Fractaler: Okay, let's reverse this problem. In WD we will have a chemical classification of compounds based on structure. I am quite sure of this, as it is the basic classification in chemistry. So we will have items that are equivalent to chemical classes/groups/or whatever you like to call them. And let's take an example here: there is a WD class that describes every compound having a pyridine ring in its structure, so it's something similar to commons:Category:Pyridines. Should this specific item be named (in English) pyridine or pyridines? (in Russian) пиридин or пиридины? This item will certainly include pyridine (Q210385) (but at this moment we cannot be sure of the exact relation, instance of (P31) or subclass of (P279)). What I can say from the Polish language perspective is that having it (in Polish) in the singular does not make sense, because a group/class of objects has to be plural, and with singular labels I would have to come up with some weird descriptions matching singular labels (like kind/type of compound, which maybe sounds okay in English but not so well in Polish). Wostr (talk) 15:31, 11 January 2018 (UTC)
@Wostr: Who said that classification based on substructures was a good idea? When you say "we will have chemical classification of compounds based on structure", I think you already conclude the discussion before it starts. Just go deeper into your example: if a chemical compound has a dozen functional groups, with your classification we will have a dozen instance of statements. Is that the correct way to do things? And then take a big protein with hundreds of functional groups and other substructures. Do you really think that using a classification can help for complex molecules? For me we shouldn't use functional groups as classification criteria but, as we are doing with elements, we should use a new property like "functional group" or "substructure".
Then for your example of pyridine, just write the relation between the compound pyridine (compound) and the pyridine (class). Can't you write in Polish that pyridine (compound) is a pyridine (class)? So if you can write that sentence with pyridine (class) in the singular form, why do you have so many problems writing it in the label?
If you still have a problem, then use another, simpler concept: a dog is an animal; you can easily write animal in the singular form, and I think this is the case in Polish too. So can you still say after that that "singular does not make sense"? And this is why I want to have, at the lowest level of the classification, the use of "instance of", because with that rule you can always create sentences like A is a B, A is a C, ... This allows an easy check of the classification links. I can always say carbon dioxide is a gas, is a chemical compound, is a chemical substance, ... if I define carbon dioxide as an instance of something. If I define carbon dioxide as a subclass, how can I test the relations: carbon dioxide is a subclass of chemical compound, carbon dioxide is a subclass of chemical substance? This is not as clear as instance of. Snipre (talk) 22:48, 15 January 2018 (UTC)
@Snipre: chemical classification based on structure is used in every chemical database that I know of, so using it seems obvious to me – in other words, not using it would be a great loss. And about proteins: chemical classification based on structure is used for what are defined as 'small compounds'; for proteins etc. other classifications are used. Using has part (P527) or another property is of no use here and would be incorrect – acridine (Q342713) does not has part (P527) pyridine (Q210385); it has part (P527) a pyridine ring, i.e. pyridine without some hydrogen atoms, etc.
I can write 'pyridine (compound) is a pyridine (class)', but: (1) the description won't match a singular label – pyridine (class) is a class of objects, so it seems natural that it should be plural and the description should be 'class of compounds etc.'; (2) is a is a synonym (one of many) for instance of, and it's much better IMHO to write 'pyridine (compound) is an instance of pyridines (class)' than 'pyridine (compound) is an instance of pyridine (class)'; (3) what you wrote is correct only for compounds, not for compound classes: 'dibenzopyridine (class) is a subclass of pyridine (class)'? (quite nonsensical to me). Wostr (talk) 23:02, 15 January 2018 (UTC)
@Wostr: Please can you spot where I wrote we won't use structures as descriptors for chemicals? I never said we shouldn't use structure, I just said that structure should be described using properties other than instance of/subclass of. What is the big advantage of that? We won't mix structure classification with other classifications based on use or regulation. Just remember that ethanol is not only an instance of chemical compound or alcohol, but a solvent, a drug, a fuel, ... Your structure classification based on instance/subclass will be mixed with a dozen other classifications.
"chemical classification based on structure is used in every chemical database" What is the interest to do what is already done in other databases ? Will WD just be a mirror of ChEBI ? You really show a poor inovative spirit if your argument is mainly "the others are doing like that so just do it like that". Following that spirit, WP and WD shouldn't never be created as referenece encyclopedias already exist in the past. And finally you always have the same lack of ambitions when you say taht proteins should be classified using in a diffferent way than small molecules. Why do we have to do that ? Shouldn't we be a little more open ? Trying new things ? If you really want to copy ChEBI so better extract directly ChEBI clssification in your wiki and avoid all the import and maintenance work related to keep the ChEBI classification up-to-date in WD. This is non sense to copy CheBI in WD and then WD in WP:pl if you can directly do the import from ChEBI to WP:pl.
And for the label problem, your problem is the way you formulate the label: class of objects. Why can't you use as a description pyridine (class) = any compound with a pyridine structure, or a compound with a specific substructure having 6 carbons ...? No need for a plural for that. Snipre (talk) 22:45, 16 January 2018 (UTC)
@Snipre: starting from the last thing mentioned: yes, I can. But the version with 'any compound...' looks like someone tried to adjust the description to a poorly chosen label, whereas the plural corresponds with e.g. IUPAC definitions; also, ChEBI has no problem with using the plural and is a. Second, also ;) this distinction (singular – compounds, plural – classes) naturally helps in choosing the right item.
Hmm, and what is the problem with mixing many classifications in subclass of (P279)/instance of (P31)? In most cases of chemical compounds there should be at most a few classes, not dozens. Do I understand correctly that your point is to add P31:'chemical compound' everywhere and use some other property like 'chemical class' to add structural classification? If so, could you propose such a property – it would be much easier for me to have some formal basis for adding classes (right now I use class of chemical compounds (Q47154513) and subclass of (P279)/instance of (P31), but I'm almost sure that I will have to modify this in the future – however, thanks to class of chemical compounds (Q47154513) it will be possible to easily obtain all items that are in fact classes in a specific classification, and modify them).
And about my ambitions and innovative spirit (I assume that this is not an argumentum ad personam): (1) the fundamental principle in Wikipedia is no original research, thus I'm not here to invent anything, only to... repurpose what has already been invented into something best suited to the WD needs. I have no illusions that the few Wikidatians would be able to invent a unified and correct chemical classification for all chemical compounds, including macromolecules – something that has not been created so far, even by rich chemical corporations and scientific institutions, and that has been described as impossible by many authors (because of too big differences between the different forms of what chemists call a 'chemical compound').
In my opinion, le mieux est l'ennemi du bien (the best is the enemy of the good) – we do not have any chance for the best, and we don't have good. What we could have is something I would call reasonably good – and that means existing classifications like the one in ChEBI. What we have now in WD is nothing, and chasing the best will leave us with nothing. Wostr (talk) 23:19, 16 January 2018 (UTC)
@Wostr: Sorry for the delay of my answer.
* In most cases of chemical compounds there should be max a few classes, not dozens. Wrong: if you have a chemical with 15 different functional groups, then using a classification based on structure you will have 15 instance of statements. Again you assume that the classification will be used only on small molecules, which is not the case. Just think again that we have proteins and other big natural compounds, so we have to consider them and not just say that that is another kind of classification: we need something which can cover all items under chemical compounds. So if you don't want to have proteins among chemical compound subclasses, please provide the classification tree integrating chemical compound and protein with the definition allowing to differentiate proteins from chemical compounds. We shouldn't create different classifications but one classification.
And I won't create any property if we haven't agreed about the need for it. I don't want to start a process if nobody is convinced about its utility. The goal of the discussion is mainly to avoid any useless action by defining a priori what is necessary.
My point is first to start with chemical compound everywhere and then to start the creation of a classification which is not based on the usual functional group structure. For me, a hydroxyl group on a big molecule doesn't allow one to say that the molecule is an alcohol.
Why do you need to add instance of class of chemical compounds (Q47154513)? You can retrieve all these items with a query looking for all subclasses of chemical compound, including subclasses of subclasses. See:

```sparql
SELECT ?compound WHERE {
  ?compound wdt:P279/wdt:P279 wd:Q11173 .
}
```

We are using a database, so instead of creating unnecessary structure, use database properties; in that case we can use queries instead of a useless classification.
So if you don't want to create something new, why do you want to import into WD something which is available and maintained outside of WD? ChEBI is one possible classification and not THE classification. If you want to follow the WP rules correctly, then please do it completely: according to NPOV (neutral point of view) we can't apply a unique point of view from a unique reference like ChEBI. And finally, I am wondering if WP:pl follows your rule of no original research when creating categories in WP. And as classification in WD is similar to categories in WP, I think we can apply the same rules. Your argument is very poor, because you criticize the unclear definition of chemical compound while using the even less clear concept of class of chemical compounds (Q47154513). Are you really coherent? I don't think so. Snipre (talk) 14:48, 22 January 2018 (UTC)
PS: I have nothing against you personally; I am just trying to find something which can convince me about your proposition.
@Snipre: that is simply not true: e.g. in monoethanolamine (Q410387) there shouldn't be two separate classes (alcohols and amines), but only one (hydroxyamines). The whole concept of chemical classification based on structure is to create more specific subclasses when there are enough compounds sharing a specific structure.
From your SPARQL I get a bunch of unrelated items, minerals etc., but there are in fact many classifications of chemical compounds that should be separated: e.g. class of chemical compounds (Q47154513) is about structural classification (maybe there should be 'structural' in the label); there is also functional classification already present in WD (like 'acids', 'bases', 'oxidants' etc.); and there is classification based on the use of a compound (pigments, bla bla). Adding instance of (P31) with a specific item about a class (like class of chemical compounds (Q47154513)) is IMHO the easiest way to retrieve only the items that are classes in a specific classification.
And the structural classification can be used on macromolecules too, but in a different way (as is already done in science): e.g. by indicating amino acid building blocks (not every functional group in every amino acid) or another macrostructural feature (not every functional group that the feature is composed of). But at some point the transition between the classification of small compounds and that of macromolecules, even the smoothest transition, is some kind of boundary between very accurate structural indication for small compounds and something like estimation for macromolecules.
And the unclear definition of concepts like class of chemical compounds (Q47154513) is IMHO an advantage here (until we establish a solid model for classification), because classification based on such items will be correct whether we model chemical compounds as molecular entities or as chemical substances. Wostr (talk) 16:48, 22 January 2018 (UTC)
Now for pyridine (Q210385): properties (mass (P2067), etc.). It is the mass (P2067) of what? 1 (pyridine, пиридин), >1 (pyridines, пиридины), a compound, compounds, a substance, substances?--Fractaler (talk) 17:13, 15 January 2018 (UTC)
@Fractaler: why is pyridine (Q210385) a problem here? The mass of pyridine is the mass of a molecule (if given in daltons [u]) or the mass of a mole of molecules (if given in grams per mole [g/mol]). But I don't understand why you are asking me this. The problem is not how to name pyridine (Q210385) (it will always be pyridine/пиридин because it is about the molecule/compound). The problem here is how to name pyridines (Q47317020) — the item that describes the class of compounds = all compounds that have a pyridine ring in their structure (so this class includes e.g. bromazepam (Q422435), 2,4,6-trimethylpyridine (Q409155), 2,6-pyridinedicarboxylic acid (Q417164), 2,6-lutidine (Q209284), 4-methylpyridine (Q2189778) and many, many others). It's similar to Wikipedia categories: ru:Категория:Пиридины is for every compound from the pyridine class = every compound having a pyridine ring in its structure. Wostr (talk) 19:38, 15 January 2018 (UTC)
pyridines (Q47317020) (pyridines, "class of chemical compounds with pyridine ring"): what does "compounds with pyridine ring" mean here? --Fractaler (talk) 09:10, 16 January 2018 (UTC)
@Fractaler: that these compounds have a pyridine ring (i.e. a pyridine core without one or more hydrogen atoms) as part of their structure. Wostr (talk) 16:02, 16 January 2018 (UTC)
@Wostr: Is the structure of a compound the same as the structure of a molecule? Can a compound have a ring? Or does a molecule of pyrimidine have a pyridine ring?--Fractaler (talk) 18:35, 16 January 2018 (UTC)
@Fractaler: sorry, but I have the impression that either you're not a chemist, or we have too big a language barrier here. The structure of a compound is the same as the structure of a molecule (because in the most popular definition of a compound, it is a substance composed of one kind of molecules, and the terms 'structure of a compound' and 'structure of a molecule' are used interchangeably). And yes, a compound/molecule can have a ring, e.g. toluene has a benzene ring and a methyl group. But no, pyrimidine does not have a pyridine ring – the pyrimidine ring has two heteroatoms in its ring, pyridine only one heteroatom in its ring. Wostr (talk) 18:48, 16 January 2018 (UTC)
@Wostr: substance composed of one kind of molecules. Agree. So, a substance consists of molecules. A molecule consists of atoms. So, we have two levels, and of course objects on these levels cannot have the same structures (they are not fractals). --Fractaler (talk) 19:07, 16 January 2018 (UTC)
@Fractaler: I really don't know what you are getting at. IUPAC uses 'compound' in both meanings, as most chemists do. So e.g. 'alkynes' are a subclass of 'acetylenes' (both are chemical classes) — (1) 'alkynes' (molecules in which there is one C≡C) is a subclass of 'acetylenes' (molecules in which there is one or more C≡C); (2) 'alkynes' (substances composed of molecules in which there is one C≡C) is a subclass of 'acetylenes' (substances composed of molecules in which there is one or more C≡C). The exact meaning depends on the chosen definition and classification tree, but on this level remains the same. So, if we choose to classify all 'chemical compounds' as 'molecules' (cf. the discussion about the definition of chemical compound), then the classification will be based on this. If we choose otherwise ('chemical compounds' are 'substances composed of one kind of molecules'), then the classification will be the same, with the same definitions and the same connections. Wostr (talk) 19:17, 16 January 2018 (UTC)
@Wostr: Does IUPAC make the knowledge base, or do we? IUPAC is just for rules, for notability, for an item's life in the WD-space — to make it legitimate; also as a link for WD, a very good source, and so on for notability. --Fractaler (talk) 19:29, 16 January 2018 (UTC)
@Fractaler: at this point I think that further discussion really makes no sense, sorry. Wostr (talk) 19:31, 16 January 2018 (UTC)
@Fractaler: Wikidata's idea is to be a secondary source. The definitions of other authorities matter a great deal. ChristianKl❫ 14:32, 17 January 2018 (UTC)
Of course. I mean: no Wikidata:Notability (IUPAC, other good sources), no life in WD for an item. Fractaler (talk) 14:39, 17 January 2018 (UTC)
Ok, but the idea that structure of molecule = structure of compound is wrong. We can ask other editors here about this. --Fractaler (talk) 19:44, 16 January 2018 (UTC)
@Wostr: IUPAC chooses not to use the term chemical compound and doesn't have any definition for it. It does have a concept of inclusion compounds, which seems to me to include multiple molecules. ChristianKl❫ 14:32, 17 January 2018 (UTC)
After reading a bit more it seems that neither ChEBI nor IUPAC has a concept of a "chemical compound". Why should we have one? Would it make sense to rename the item into "pure substance"? ChristianKl❫ 15:30, 17 January 2018 (UTC)
/edit conflict/ @ChristianKl: yep, and that's why, when an IUPAC definition contains 'compound' (and IUPAC uses 'compound' very often, yet without defining it), it's a matter of context which version (substance vs molecule) should be used – so in WD the problem is the matter of defining 'chemical compound', or the chosen model of the chemistry top-level items; i.e. aromatic compound (Q19834818) will always be a valid chemical class for e.g. benzene (Q2270) or pyridine (Q210385) – but both items can be modelled either as a substance or as a molecule (the discussion about how we should treat items about chemical compounds – as items about molecules or as items about substances composed of such molecules – is still ongoing: Wikidata talk:WikiProject Chemistry/Proposal:Models). Wostr (talk) 15:38, 17 January 2018 (UTC)
Not rename, as 'pure substance' is not a synonym of 'chemical compound' (pure substance also includes chemical elements), but 'chemical compound' can simply be ignored in modelling chemical items (it is still a notable concept and should have its item in WD) and other terms may be used. Wostr (talk) 15:40, 17 January 2018 (UTC)
If it's a notable concept, why don't ChEBI and IUPAC define it? It seems to me that they don't have a concept with that name in their databases because the term has no clear meaning, is used with different meanings, and they prefer terms with clear meanings.
An "inclusion compound", for example, is not a single molecule or even multiple molecules of the same type but a complex. ChristianKl❫ 18:17, 17 January 2018 (UTC)
@ChristianKl: It is one of the most basic concepts in chemistry, but... it has more than one definition (some widely used, some used only in narrow fields). And I don't advocate using the 'chemical compound' concept as a base for classification – but using 'compound' as part of many terms and definitions is unavoidable.
And you are of course right that IUPAC does not want to define it – where a clear definition is required, IUPAC uses e.g. 'molecular entity', 'chemical substance', 'chemical species' etc. But when this is not required (i.e. when something can relate to both 'molecular entity' and 'chemical substance'), IUPAC uses 'chemical compound' a lot. This is the case for the whole chemical nomenclature, where chemical names are not limited to molecular entities but are also valid for chemical substances composed of such entities.
You are right too that the most widely used definition of a compound ('chemical substance composed of molecules') is not strictly used by IUPAC. And there are many more similar examples: salts and hydrates are not molecules either (but are usually included in something called 'chemical compounds'). Frustrated Lewis pairs are not compounds either, etc.
But I think that has nothing to do with the title problem ;) You can write about this in Wikidata talk:WikiProject Chemistry/Proposal:Models and I would be grateful if you do so, because there are not many users in this WikiProject (that are active in the discussions) and I think it would be easier to come to conclusions with more users.
And, frankly speaking, I do not know why the discussion in this topic came to this point. Fractaler is asking questions which are IMHO not related to the title problem, and I just don't know what is wrong or what his proposals are. I decided not to participate in discussions with him, as I see that he's indefinitely blocked on his home project for disrupting actions, and I really don't have time to solve his enigmatic and philosophical questions. Wostr (talk) 23:44, 17 January 2018 (UTC)
@Wostr: I see that he's indefinitely blocked on his home project for disrupting actions: You see? For disrupting actions? Proof, please. Otherwise I will consider it the distribution of unverified information (real chemists, like other real scientists, do not spread fake information). And how can you discuss something here, if you use terms (plural label, singular label) which have not yet been defined? Mankind has long passed the stage of such philosophical discussions. --Fractaler (talk) 09:30, 18 January 2018 (UTC)
@Fractaler: sorry, I really don't have time to answer questions/comments that (1) are not related to the problem, or where I can't find any relation to the problem – maybe there is some relation, but I have written a few times that I don't understand how your questions relate to anything in this topic; (2) are full of Q's and P's (like your comment on January 11th about grammatical numbers), which makes them really hard to understand; (3) include only enigmatic questions, without any proposals or even indications of what is wrong. Since I'm a volunteer, like all of us here, I also have the right not to answer your questions and comments — a right I intend to use until your questions are substantive and clearly related to the problem. As for the proof you want: your block log on ru.wiki is not a secret. Wostr (talk) 18:48, 18 January 2018 (UTC)
Excuse me, yes, I have such a drawback (leading a person to an idea step by step). Ok, my idea was (I probably should have said this at the beginning): pyridine (Q210385) (pyridine, molecule (Q11369), молекула пиридина [a molecule of pyridine]) is a component (Q1310239) (компонент) of pyridines (the set of all pyridine molecules, пиридины, группа всех пиридиновых молекул [the group of all pyridine molecules]).
The block log said "disrupting actions", and "disrupting actions" as I see it (in my opinion; maybe this is the result of my poor knowledge of English) sounds different (has a different meaning). Fractaler (talk) 07:45, 19 January 2018 (UTC)
@Fractaler: so your idea is something like:
1. existing item pyridine (Q210385) would describe one molecule (molecule (Q11369)) of pyridine
2. there would also be another item (not existing now) that would describe pyridine/pyridines as a portion of matter (chemical substance (Q79529) → a set of molecules of pyridine (Q210385))
Do I get this right? Wostr (talk) 19:55, 19 January 2018 (UTC)
Yes! (again, sorry for using that bad step-by-step method). --Fractaler (talk) 20:04, 19 January 2018 (UTC)
Okay, so there is also a third thing. The 1st is an 'item about one molecule' (this is obviously singular, like pyridine (Q210385)); the 2nd is an 'item about a set of molecules of one type' (the grammatical number here is not so obvious: if it were about a countable number of molecules, then it would certainly be plural, like two pyridines, five pyridines etc., but as it's a set of an uncountable and unspecified number of molecules, the grammatical number may be language-specific, e.g. in Polish it would be singular).
The above two things can be discussed here: Wikidata talk:WikiProject Chemistry/Proposal:Models – I think there was a comment similar to your idea. But it's not clear yet whether we should use only the 1st (molecule items), only the 2nd (substance items), or maybe both.
The 3rd thing is: pyridines (Q47317020). It's neither the 1st nor the 2nd thing. This is for grouping different kinds of molecules (or substances, if we choose not to use molecule items at all) into classes on the basis of their structure – so it's an abstract class, not something that can be realized physically. Let's take an example based on your idea, say α-picoline (Q2216745), 4-ethylpyridine (Q27257452) and triprolidine (Q417654):
But all three items are pyridines (Q47317020) = their molecules have a pyridine ring in their structure (or rather, all three items belong to the chemical class named pyridines (Q47317020)). So here is the problem: should this 3rd abstract class be пиридин [singular] or пиридины [plural]? Wostr (talk) 00:14, 20 January 2018 (UTC)
@Wostr: about grammatical number may be language-specific – it seems it is necessary to point this out in the label: one molecule of X, a small number of molecules of X, many molecules (a large group) of X, all molecules of X (the whole group).
about based on your idea – this is not my idea, it is the idea from chemical elements. There we had the same problem (the problem was due to homonymy (Q21701659)) – an element or elements, an atom or atoms, an isotope or isotopes, etc. As soon as the words are spelled out (the totality of all atoms of a certain kind, an atom from this set, a molecule of such atoms, the totality of such molecules), the problem disappears. So, α-picoline (Q2216745), 4-ethylpyridine (Q27257452), triprolidine (Q417654), ..., X = 1) a molecule of X, 2) all molecules of X, 3) something else (because of homonymy (Q21701659) now).
their molecules have pyridine ring in structure – so, how about "molecule with pyridine ring in structure"/"molecules with pyridine ring in structure"?
So, to continue: is it better to move to the WikiProject Chemistry page, or to continue here and make specific proposals there? Fractaler (talk) 09:37, 20 January 2018 (UTC)
## potassium ferrocyanide
We have potassium ferrocyanide (Q422017) and no label (Q27279279). In the 1st we have all sitelinks and Wikipedia-imported properties about the anhydrous form, plus PubChem-imported properties about the trihydrate; the 2nd is a PubChem-created item for the anhydrous form. Which is better: move all properties, sitelinks, labels etc. about the anhydrous form to no label (Q27279279), or move the PubChem-imported data between these two items? (We also have no label (Q27110378), but this has to be merged once the above problem is fixed.) Wostr (talk) 20:27, 27 January 2018 (UTC)
Usually the correct way is to respect the label: if the label is different from the data, this often means that the data import was not correct, as people don't check what real compound is defined by the item. Snipre (talk) 08:46, 29 January 2018 (UTC)
Okay, I'll fix this that way, thanks. Wostr (talk) 18:57, 30 January 2018 (UTC)
potassium ferrocyanide (Q422017), no label (Q27279279) and no label (Q27110378) merged; data about the trihydrate moved to the new item potassium ferrocyanide trihydrate (Q47520593). It appears that the 1st step was that some bot added a wrong PubChem CID (but a correct CAS#), then another bot changed the data (including the CAS#) based on the PubChem CID.
However, in potassium ferrocyanide (Q422017) there are now doubled ChemSpider IDs, InChIs and SMILES – that is because in databases like PubChem/ChemSpider there are different entries for the same compound, represented in different ways (like [2] and [3] in this case). I don't know whether I should delete one ID or maybe add some exceptions to the unique value constraint? Wostr (talk) 19:29, 30 January 2018 (UTC)
## Help I made a mess!
Hi. I'm not sure if this is the best place to ask, but it has been brought to my attention that a big edit I did on a batch of items for human genes has gone wrong. I was trying to copy the en aliases to cy using QuickStatements, but some aliases have become fragmented, so '9-cis,12-cis,15-cis-octadecatrienoic' has become '9-cis', '12-cis' and '15-cis', for example. At the very least I need help to get these removed so that I can start from scratch. Or if someone knows how to transfer the aliases programmatically, that would be even better. I can provide a list of Q's for affected items. Please can someone help? Best Jason.nlw (talk) 16:34, 30 January 2018 (UTC)
If you can create a list of the bad aliases along with the list of Q's, then a bot could remove those aliases. Or, if you think it's OK, a bot could remove all the cy-language aliases for those items. About how many items were affected? ArthurPSmith (talk) 16:57, 30 January 2018 (UTC)
It affects up to 888 items, and around 9000 aliases (genes have a lot!). I couldn't easily prepare a list of affected aliases, so I think it would be best to remove all cy aliases from those 888 items and I will start again. None of these had aliases before my edit yesterday. Here is a list of the items. Thanks for your help! Jason.nlw (talk) 17:23, 30 January 2018 (UTC)
Hmm, I've been trying to fix these, but so far the bot approaches I've taken don't work. Something special about removing aliases? Both QuickStatements and WikidataIntegrator don't seem to want to do that. I will look at it again, but maybe you should post a request on Wikidata:Bot requests to get it done sooner... ArthurPSmith (talk) 19:57, 31 January 2018 (UTC)
Thanks ArthurPSmith, I will post a bot request as suggested. Thanks again for your help. Jason.nlw (talk) 09:54, 1 February 2018 (UTC)
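For the record, one scripted route to such a cleanup: a hedged sketch (an editor's illustration, not the bot that was eventually requested; it assumes the pywikibot package, and affected.txt is a hypothetical file with one Q-id per line):

```python
# Clear all cy-language aliases from a list of affected items so the
# aliases can be re-imported from scratch.
import pywikibot

site = pywikibot.Site("wikidata", "wikidata")
repo = site.data_repository()

with open("affected.txt") as fh:
    qids = [line.strip() for line in fh if line.strip()]

for qid in qids:
    item = pywikibot.ItemPage(repo, qid)
    item.get()
    if item.aliases.get("cy"):
        # Setting the alias list to empty removes every cy alias.
        item.editAliases({"cy": []}, summary="Remove fragmented cy aliases")
```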
## chemical formula (P274)
It is now set with a single value constraint (Q19474404), but clearly there is more than one way to write a chemical formula (I noticed that when I tried to add an inorganic formula [cation–anion]). Shouldn't this be removed? And shouldn't the Hill formulas be tagged with criterion used (P1013) = Hill system (Q900739) (and maybe other formulas too, but with different items like 'inorganic formula' [I don't know at this moment whether this kind of formula has any official name])? So it would be mandatory qualifier constraint (Q21510856) and one-of constraint (Q21510859) (Hill system (Q900739) and others). Wostr (talk) 21:00, 30 January 2018 (UTC)
@Wostr: We already discussed the problem a little (see Wikidata_talk:WikiProject_Chemistry#New_property_for_composition). First step: define the different kinds of chemical formulas in order to see whether new properties are required or not. Then, if no new property is needed, we should replace the unique-chemical-formula constraint with the constraint of the qualifier criterion used (P1013) with the different possibilities. We can perhaps file a bot request to add criterion used (P1013) = Hill system (Q900739) to all chemical formula (P274) statements having PubChem as reference. Snipre (talk) 08:18, 31 January 2018 (UTC)
I'll check in a few days if there are any official names for different formula writing systems. Wostr (talk) 15:37, 31 January 2018 (UTC)
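For readers unfamiliar with the Hill system mentioned above, here is a small sketch of the ordering rule itself (an editor's illustration, not Wikidata tooling): carbon first, then hydrogen, then the remaining element symbols alphabetically; purely alphabetical when no carbon is present.

```python
# Order element counts according to the Hill system.
from collections import Counter

def hill_formula(counts: Counter) -> str:
    symbols = sorted(counts)
    if "C" in counts:
        symbols = (["C"] + (["H"] if "H" in counts else []) +
                   sorted(s for s in symbols if s not in ("C", "H")))
    return "".join(f"{s}{counts[s] if counts[s] > 1 else ''}" for s in symbols)

print(hill_formula(Counter({"C": 5, "H": 5, "N": 1})))           # C5H5N (pyridine)
print(hill_formula(Counter({"K": 4, "Fe": 1, "C": 6, "N": 6})))  # C6FeK4N6
```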
## New Wikidata-aware <chem/> tags
In wikitext, <chem /> tags represent chemical sum formulae. For instance <chem>H2O</chem>, rendered as
${\displaystyle {\ce {H2O}}}$
represents Q283. However, the rendering mechanism has some issues. Most fundamentally, it is based on mhchem version 2, which is not optimal and cannot be updated to mhchem v3. Maybe @mhchem: can add a link to the details about the incompatibilities.
In Wikidata there is already a property P274 which expresses the sum formula as UTF characters. This information can currently be displayed using either the parser function invoke or the Lua module wikidata.
My goal is to improve the situation by adding a better version of the chem tags and using information from Wikidata in Wikipedia. I would like to find a page where I can interact with the community™ to brainstorm how this could best be done. Is this the correct place? From a technical perspective, I see two main questions:
1. What grammar should be used to encode chemical structures?
2. Where should the data be stored (inside the tag or in wikidata)?
--Physikerwelt (talk) 12:01, 21 February 2018 (UTC)
@Physikerwelt: This is probably the right place to discuss this. What do you see as the problem(s) with chemical formula (P274)? We also have general formula (P1673) which works similarly, and chemical structure (P117) which links to an image. ArthurPSmith (talk) 18:26, 21 February 2018 (UTC)
Physikerwelt, what do you mean by grammar in relation to chemical formulae? Wostr (talk) 19:33, 21 February 2018 (UTC)
I am not entirely sure whether there are problems with the properties mentioned above. I clearly see the SVG image as a disadvantage, since it's hard to change. For instance, adding another element to the example linked in chemical structure (P117) [4] would require the user to download the SVG, change it, upload a new SVG and link that. With mhchem one can express more than just sum formulae, such as "chemical equations" [5] ${\displaystyle {\ce {CO2 + C -> 2 CO}}}$ or even more complex structures
${\displaystyle {\ce {Zn^{2+}<=>[+2OH^{-}][+2H^{+}]{\underset {amphoteric\ hydroxide}{Zn(OH)2\downarrow }}<=>[+2OH^{-}][+2H^{+}]{\underset {hydroxozincate}{[Zn(OH)4]^{2-}}}}}}$
Are there any SPARQL queries using chemical properties? They would probably give a better intuition of what is already good and what could be improved. --Physikerwelt (talk) 16:36, 26 February 2018 (UTC)
And how is it possible to obtain such a structural formula in mhchem? Wostr (talk) 00:01, 10 March 2018 (UTC)
@Physikerwelt: A good thing would be to discuss a new "chemical formula" datatype with the dev team: @Lea Lacroix (WMDE): author talk page 18:41, 21 March 2018 (UTC)
## peramivir hydrates
Are these two (peramivir trihydrate (Q47495829) and peramivir tetrahydrate (Q27158395)) duplicates? Some ext-ids indicate so, but e.g. PubChem pages are for trihydrate and tetrahydrate (but maybe it's a mistake – names in PubChem indicate trihydrate)? Wostr (talk) 21:28, 23 February 2018 (UTC)
Never look at the names in PubChem: they are not under the control of the PubChem team and nobody has checked whether a name is consistent with the formula.
PubChem has 2 different structures (see the InChIKeys), so 2 items are necessary. So the correct way is to delete all redundant IDs and to rename peramivir tetrahydrate (Q27158395) to peramivir tetrahydrate. Snipre (talk) 22:57, 23 February 2018 (UTC)
Okay, thanks for the changes. Wostr (talk) 00:12, 27 February 2018 (UTC)
## Beilstein numbers
I wonder why we have Beilstein Registry Number (P1579) named in a way that suggests it contains numbers from the Beilstein database? As far as I know there are no such numbers right now, and Elsevier's Reaxys uses only the 'Reaxys Registry Number'. What's more, this property is used in items that are not in Beilstein but in other databases available through Reaxys; also, some of these numbers come from sources where it is indicated that the number is a Reaxys Registry Number, not Beilstein. I think we should rename this property accordingly. Wostr (talk) 00:12, 27 February 2018 (UTC)
@Wostr: Do you know if the Reaxys database uses the same numbers as the Beilstein database? I mean, if compound x had Beilstein number YYY in the Beilstein database, does Reaxys reuse that number YYY as its Reaxys number? Snipre (talk) 14:18, 13 March 2018 (UTC)
AFAIK (from the time I had access to Reaxys, which was about half a year ago) there is no such thing as a Beilstein database or Beilstein numbers any more – there is only Reaxys (with all the information from the Beilstein, Gmelin etc. databases included in it, and Reaxys numbers). But to be sure, I will ask a person who has this access. Wostr (talk) 14:45, 13 March 2018 (UTC) PS Also, sometimes there are both ids (Reaxys and Beilstein) in ChEBI and both are the same number, but nevertheless I asked about it. Wostr (talk) 14:51, 13 March 2018 (UTC)
Yep, the Beilstein RN is now the Reaxys RN (or Reaxys ID) and the numbers are the same. Some examples: in ChEBI [6], [7]; a source [8]; funnily, we have some Beilstein RNs imported from sources where they are described as Reaxys RNs, see e.g. sulfuric acid (Q4118). I think we should relabel this property and Beilstein RN should be an alias. That's, however, not the case for Gmelin numbers (Gmelin number (P1578)), as these are different from Reaxys/Beilstein numbers. Wostr (talk) 16:59, 13 March 2018 (UTC)
Yes, Reaxys is a superset of Beilstein now, using the same numbers. Most of these Beilstein numbers I imported from ChEBI. So maybe we should relabel Beilstein to Reaxys. Sebotic (talk) 13:33, 5 April 2018 (UTC)
## GHS data after creation of Property:P4952
As I see that Snipre is making some progress in relation to this property, I have to ask about the proper value in safety classification and labelling (P4952), because the proposition that we should use e.g. safety classification and labelling (P4952) = Regulation (EC) No. 1272/2008 (Q2005334) may cause some problems. I'm placing this in a subsection because I'm planning to compile a list of needed changes and needed new items, which I will place in the next subsections for discussion.
### 1. Value in Property:P4952
If we use safety classification and labelling (P4952) = Regulation (EC) No. 1272/2008 (Q2005334) in items, it can have some implications in the future, because very few people understand which H-phrases one should choose from the source and place in WD. As an example for further discussion, the GHS classification and labelling for 2,2,4-trimethylpentane (Q209130), taken from a relatively up-to-date (2017) Sigma-Aldrich SDS for the European Union [9]:
• classes and categories (classification): Flammable liquids (Category 2); Aspiration hazard (Category 1); Skin irritation (Category 2); Specific target organ toxicity - single exposure (Category 3); Acute aquatic toxicity (Category 1); Chronic aquatic toxicity (Category 1)
• H-phrases (classification): H225, H304, H315, H336, H400, H410
• H-phrases (labelling): H225, H304, H315, H336, H410
• EUH-phrases (labelling): none
• P-phrases (labelling): P210, P261, P273, P301 + P310, P331, P501
• GHS pictograms (labelling): 02, 07, 08, 09
• signal word (labelling): Danger
So, the options I see are:
1. use safety classification and labelling (P4952) = Regulation (EC) No. 1272/2008 (Q2005334)
• it would have to be clearly indicated that GHS hazard statement (obsolete) (P728) is only used for H-phrases (labelling).
• in this option it will not be possible to add both classification and labelling data in one item (so TomT0m's method for classification using subclass of (P279) would have to be adopted).
2. use safety classification and labelling (P4952) = GHS labelling (Q50490754) (and if we agree to add GHS classification using P4952, also safety classification and labelling (P4952) = GHS classification (Q50490688))
3. use safety classification and labelling (P4952) = Qxxx (Qxxx created as a subclass of e.g. Regulation (EC) No. 1272/2008 (Q2005334) and GHS labelling (Q50490754): GHS labelling according to CLP Regulation)
• there will be no need for qualifiers, but we would need a few new items for each document (USA, EU, Japan, etc.)
• if we agree to add GHS classification using P4952, we would have two items for each country, e.g. Qxxx: GHS labelling according to CLP Regulation and Qyyy: GHS classification according to CLP Regulation.
But maybe there is some other way which I don't see? Or maybe some problems can be eliminated in a way I'm not familiar with? Wostr (talk) 19:04, 14 March 2018 (UTC)
@Wostr: Do we need to make the distinction? You never find all the labelling data (signal word, GHS pictograms, H-phrases, P-phrases, EUH-phrases) under classification, so if you have only H-phrases without the other data, this means the editor took the information from the wrong section. And if the editor mixed H-phrases from the classification section with other labelling data from the labelling section, then this is not our fault: if someone doesn't understand the difference between the two sections, we can't teach everyone about everything. I prefer to specify the rules of use on the property page (meaning that P4952 used with Regulation (EC) No. 1272/2008 (Q2005334) implies only labelling data from the labelling section) and that's it. Snipre (talk) 14:23, 21 March 2018 (UTC)
@Snipre: the problem is that I've corrected dozens of GHS data in Wikipedia because someone added wrong H-phrases (because they didn't know there was a difference, etc.), so that's why I am a bit oversensitive about this. And we don't have to make the distinction by safety classification and labelling (P4952) = GHS labelling (Q50490754); we can agree that GHS hazard statement (obsolete) (P728) should be used for labelling H-phrases and add some complex constraints (that would catch situations where there is a probability that classification H-phrases have been added; if it's possible, of course, to make such constraints, e.g. if there is Hxxx and Hyyy then...). That may, however, be kind of confusing if we agree in the future that classification (classes, categories) should be added via safety classification and labelling (P4952) too – then it should be noted somewhere that H-phrases in safety classification and labelling (P4952) are for labelling, and H-phrases for classification have to be taken from the GHS category items by some query. Maybe Wikidata usage instructions (P2559) can be of some use here. Wostr (talk) 17:57, 21 March 2018 (UTC)
### 2. NFPA 704
Do we agree to file a bot request for merging the existing NFPA 704 data into the new property? And, of course, to add a constraint to the NFPA 704 properties that from now on these properties should be used as qualifiers only?
The proposed model (identical to the one in the property's discussion):
Wostr (talk) 19:09, 14 March 2018 (UTC)
• As there is no answer to my bot request (migration of NFPA 704 from the old model to the new), I'll try to do most of these edits myself using QuickStatements (and the rest manually). This will take some time and will result in a situation in which, for a few days, some part of the NFPA 704 data will be present in WD in the old model (every NFPA 704 property separate) and some in the new model (every NFPA 704 property as a qualifier of safety classification and labelling (P4952)). The Wikipedias using NFPA 704 data were notified about a week ago about the change. If anyone has any comments about this, please let me know. Wostr (talk) 09:56, 27 April 2018 (UTC)
• Most of the NFPA 704 data has been changed to the new model. The completed batch included P143-sourced NFPA 704 data only (most of the NFPA 704 data we have): ~150 items with full NFPA 704 labelling (4 properties) and ~1040 items with 3 properties (without NFPA 704 Special/Other). There are over 100 items in which NFPA 704 is incomplete/unsourced/sourced in a way that was not easy to convert using QuickStatements etc. — these I'll try to edit manually (after the update of the constraint violations pages). Wostr (talk) 00:32, 5 May 2018 (UTC)
### Agreement to distinguish between system and document
Do we agree to use legal documents or standards documents instead of classification systems as values for safety classification and labelling (P4952)?
For example:
Globally Harmonized System of Classification and Labelling of Chemicals (Q899146) is a system but can have different applications depending on the country. For the EU, the US and China, at least some differences can appear due to different regulatory application texts. And we can't rely on the source to determine the correct application text. For example, an international company has to issue an MSDS for each country where its chemical is sold, according to the local regulatory text. So for one product sold by one company, we can have at least 4 MSDSs with slight differences (one for the US, one for the EU, one for China and one following the UN documentation). I don't know about other countries, and I hope contributors can help me define which text is relevant for each country.
Then, if we agree on that solution for Globally Harmonized System of Classification and Labelling of Chemicals (Q899146), do we agree to use the same distinction for other safety classification systems like NFPA 704 (Q208273)? NFPA 704 (Q208273) is for the system, and we have to create a new item for the document which describes the NFPA 704 system? Snipre (talk) 14:48, 21 March 2018 (UTC)
That solution would solve two problems; normally we would have to use the system item in safety classification and labelling (P4952) with some qualifier to distinguish between different jurisdictions. I don't know, though, whether for the EU GHS we should distinguish between different ATPs? With NFPA 704 the problem is that the document is NFPA 704 (it's an NFPA standard and 704 is the code for this standard) which introduces the system (AFAIK usually called NFPA 704 too) to determine which categories should be used in the NFPA 704 hazard diamond. So in the case of NFPA 704 I think we already have the document item.
The problem is with GHS, because I really don't know how the GHS for the US and other countries is placed in legal acts – if it's a single document, we can use just one item for a specific country, or maybe there was more than one document at different times. Fortunately, in the Russian Wikipedia there is no GHS in their infoboxes, so there won't be mass uploads of their unsourced data – but nevertheless I'll try to determine how it is done in Russia (AFAIK GHS in Russia will be mandatory from 2020? 2021?). Wostr (talk) 18:13, 21 March 2018 (UTC)
@Wostr: I don't like mixing different types of items as values for safety classification and labelling (P4952):
No mixing of concepts, that's the rule to avoid bad inferring later. Snipre (talk) 20:33, 21 March 2018 (UTC)
Okay, I know what you mean. We should establish some constraint on this property, because we will have an 'NFPA 704' item (about the system), an 'NFPA 704: Standard System for the Identification of the Hazards of Materials for Emergency Response' item (about the standard) and a few 'NFPA 704: Standard System for the Identification of the Hazards of Materials for Emergency Response (version xxxx)' items about editions of this standard. It won't be clear to people which item they should use. And, if I understand this correctly, only the edition items will be correct? However, this will be somewhat inconsistent with using Regulation (EC) No. 1272/2008 (Q2005334) – there were several amendments to this regulation (most of them called ATPs) which introduced some changes to the EU GHS. There are situations where the GHS data according to the CLP Regulation after ATP X is different from the GHS data (for the same substance) after ATP X+1. So, should we make items for the different ATPs and use them in safety classification and labelling (P4952)? Wostr (talk) 23:06, 21 March 2018 (UTC)
@Wostr: You clearly described the problem, and no, we won't use the versions, because there is no way to tell which version was used to define the classification/labelling of a compound. Only the fundamental document is mentioned in the SDS, not the version. If I list the versions, it is just to have an idea about the updates of the fundamental document: if there has been no update for 10–20 years, perhaps a new fundamental document is used. Snipre (talk) 11:03, 22 March 2018 (UTC)
• This and this may be of some help. BTW, I think that – when we agree on all the issues regarding this property – we could establish the full instruction here and just transclude the relevant sections of it to all property discussions (rather than write the instructions one by one). Wostr (talk) 14:16, 22 March 2018 (UTC)
### GHS statements
I've created items for GHS pictograms and H and P statements (see here). I will add items for EUH/AUH statements and for obsolete H/P statements next week. Also, I'll try to convert the old GHS data to the new model so that GHS hazard statement (obsolete) (P728) and GHS precautionary statements (obsolete) (P940) can be deleted. Wostr (talk) 19:47, 17 April 2018 (UTC)
## EC Inventory
The EC Inventory is a database that contains 106,211 unique substances/entries. Has it been (partially/fully) imported? EC ID (P232) is currently used in 20,339 items. --Leyo 12:08, 9 April 2018 (UTC)
@Leyo: No, and I prefer to avoid any large data import before a good curation of the existing items:
- we still have 1122 items sharing the same CAS number and 196 items with 2 different CAS numbers (see report)
- 82 items sharing the same EC number (see "Single_value"_violations report)
- 88 items sharing the same InChIKey and 396 items having 2 different InChIKey (see [10])
Just adding a large amount of data in the current situation will create more mess.
If you really want to work with the above source, you can extract the EC number and the CAS number from WD items having single values for these two properties and check whether both values are the same in the EC Inventory database, then create a list of conflicts and we will curate that list. Snipre (talk) 13:45, 9 April 2018 (UTC)
Items with CAS number issues or having EC numbers already shall not be changed.
Unfortunately, I am not really skilled in doing tasks like the one you proposed efficiently. --Leyo 14:20, 9 April 2018 (UTC)
@Leyo: So you can see what the future need for WD is: dataset comparison and analysis of possible matches. If we have 4 datasets and, for one entry, 3 datasets have the same data, can we conclude that the entry is the same for all datasets? And can we do the same if only 2 datasets have the same data?
But before doing that kind of job we have to clean our reference dataset, WD, and be sure that we don't have 2 items for the same chemical or one item mixing data about 2 chemicals. Snipre (talk) 14:52, 9 April 2018 (UTC)
Just to be clear: I was not suggesting creating any new items, but importing the EC number to existing items lacking an EC number, based on the CAS number in an item. Items with CAS number issues are to be skipped. I don't think that such an import would cause many issues. If so, I will fix them manually. --Leyo 15:00, 9 April 2018 (UTC)
@Leyo: This is not only a question of new items; this is a question of adding the data in the right place. In any case you have to make a choice in the data import process:
• use the CAS numbers in WD as matching parameter and then add the corresponding EC number from the EC inventory database
• use the EC numbers in WD as matching parameter and then add the corresponding CAS number from the EC inventory database
In each case you need to curate the existing items having constraint violations before being able to run that import process. If you have 2 items with the same CAS number, do you want to add the EC number to both items without checking whether the CAS number is correctly used?
If you try to use the name or the chemical formula to match the WD items with the EC Inventory database, in the best case you will find no correlation; in the worst case you will add the data to the wrong item (a typical example: an item with an English label describing an isomer while the item data describe the isomer mixture).
If you want to be convincing about the relevance of your proposition, perhaps you can describe the process you will use to add the data? Just to explain my position: one year ago, more than 1000 constraint violations were reported for CAS numbers. With the help of several contributors, we were able to reduce that number to fewer than 600. I don't want to see that number growing again just because someone wants to add data without caring about the consequences. I am direct because I have spent a lot of time curating data and I am tired of trying to improve WD while others just play with data without any care.
I prefer a little data with few errors to a lot of data with a lot of errors. Snipre (talk) 19:58, 9 April 2018 (UTC)
Most of your questions have already been answered. Didn't I express myself clearly? --Leyo 12:46, 10 April 2018 (UTC)
@Leyo: Sorry, I missed the "Items with CAS number issues are to be skipped". I would propose to do the inverse: use the EC number as the matching parameter and add the CAS number. The CAS number is not a reliable parameter, especially not in WD. Snipre (talk) 11:16, 13 April 2018 (UTC)
Well, I intend to add EC numbers. There are currently 72,137 items with a CAS number, but only 20,336 with an EC number. I wonder how many items contain the latter but not the former. --Leyo 12:14, 13 April 2018 (UTC)
The problem is that CAS numbers are not reliable, mainly because we don't have an official open source for CAS numbers. Snipre (talk) 13:47, 13 April 2018 (UTC)
By the way, can you extract the ECHA InfoCard ID from the ECHA database and add it to the corresponding EC number? Snipre (talk) 11:19, 13 April 2018 (UTC)
A while ago, ECHA InfoCard ID (P2566) was added to items based on the CAS number by a bot. --Leyo 12:14, 13 April 2018 (UTC)
https://developers.arcgis.com/python/api-reference/1.9.0/arcgis.mapping.ogc.html
# arcgis.mapping.ogc module
## CSVLayer
class arcgis.mapping.ogc.CSVLayer(url_or_item, gis=None, **kwargs)
Bases: arcgis.mapping.ogc._base.BaseOpenData
Represents a CSV File Hosted on a Server.
| Argument | Description |
|---|---|
| url_or_item | Required String or Item. The web address or Item of the CSV resource. |
| gis | Optional GIS. The GIS used to reference the service. The arcgis.env.active_gis is used if not specified. |
| copyright | Optional String. Describes limitations and usage of the data. |
| delimiter | Optional String. The separator value: `,` (comma), `" "` (space), `\|` (pipe), `\t` (tab), or `;` (semicolon). |
| fields | Optional List. An array of dictionaries containing the field information. |
| opacity | Optional Float. This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque. |
| scale | Optional Tuple. The min/max scale of the layer, where the positions are (min, max) as float values. |
| sql_expression | Optional String. Optional query string to apply to the layer when displayed on the widget or web map. |
| title | Optional String. The title of the layer used to identify it in places such as the Legend and Layer List widgets. |
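A short usage sketch (an editor's illustration; the CSV URL is a hypothetical placeholder):

```python
# Render a hosted CSV file as a layer with an explicit delimiter and title.
from arcgis.gis import GIS
from arcgis.mapping.ogc import CSVLayer

gis = GIS()  # anonymous connection; arcgis.env.active_gis is used otherwise
layer = CSVLayer(
    url_or_item="https://example.com/stations.csv",  # hypothetical resource
    gis=gis,
    delimiter=",",
    title="Stations",
    opacity=0.8,
)
print(layer.properties)   # PropertyMap of the layer
print(layer.df.head())    # the CSV contents as a Pandas DataFrame
```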
property copyright
property delimiter
Gets/Sets the delimiter for the CSV Layer. The default is ,
| Value | Description |
|---|---|
| `,` | comma |
| `" "` | space |
| `;` | semicolon |
| `\|` | pipe |
| `\t` | tab |
Returns
string
property df
returns the CSV file as a DataFrame
Returns
Pandas’ DataFrame
property fields
Returns the field values for the CSV source.
Returns
list of strings
property latitude
The latitude field name. If not specified, the class will look for the following field names in the CSV source: "lat", "latitude", "y", "ycenter", "latitude83", "latdecdeg", "POINT-Y".
property longitude
The longitude field name. If not specified, the CSVLayer will look for the following field names in the CSV source: "lon", "lng", "long", "longitude", "x", "xcenter", "longitude83", "longdecdeg", "POINT-X".
property opacity
This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque.
Returns
Float
property properties
Returns the properties of the Layer.
Returns
PropertyMap
property renderer
Get/Set the Renderer of the CSV Layer
Returns
InsensitiveDict
property scale
Gets/Sets the Min/Max Scale for the layer
property sql_expression
The SQL where clause used to filter features on the client. Only the features that satisfy the definition expression are displayed in the widget. Setting a definition expression is useful when the dataset is large and you don't want to bring all features to the client for analysis. The sql_expression may be set when a layer is constructed, prior to its loading in the view, or after it has been loaded into the class.
Returns
String
property title
The title of the layer used to identify it in places such as the Legend and LayerList widgets.
Returns
String
## GeoJSONLayer
class arcgis.mapping.ogc.GeoJSONLayer(url=None, data=None, **kwargs)
Bases: arcgis.mapping.ogc._base.BaseOGC
The GeoJSONLayer class is used to create a layer based on GeoJSON. GeoJSON is a format for encoding a variety of geographic data structures. The GeoJSON data must comply with the RFC 7946 specification which states that the coordinates are in spatial reference: WGS84 (wkid 4326).
| Argument | Description |
|---|---|
| url | Optional String. The web location of the GeoJSON file. |
| data | Optional String or Dict. A path to a GeoJSON file, the GeoJSON data as a string, or the GeoJSON data as a dictionary. |
| copyright | Optional String. Describes limitations and usage of the data. |
| opacity | Optional Float. This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque. |
| renderer | Optional Dictionary. A custom set of symbology for the given GeoJSON dataset. |
| scale | Optional Tuple. The min/max scale of the layer, where the positions are (min, max) as float values. |
| title | Optional String. The title of the layer used to identify it in places such as the Legend and Layer List widgets. |
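A brief usage sketch (an editor's illustration; the URL is a hypothetical placeholder):

```python
# Build a layer from a web-hosted GeoJSON file; per RFC 7946 the
# coordinates must be in WGS84 (wkid 4326).
from arcgis.mapping.ogc import GeoJSONLayer

layer = GeoJSONLayer(
    url="https://example.com/boundaries.geojson",  # hypothetical file
    title="Boundaries",
    opacity=0.5,
)
print(layer.properties)
```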
property copyright
property opacity
This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque.
Returns
Float
property properties
Returns the properties of the Layer.
Returns
PropertyMap
property renderer
Gets/Sets the renderer for the layer
property scale
Gets/Sets the Min/Max Scale for the layer
property title
The title of the layer used to identify it in places such as the Legend and LayerList widgets.
Returns
String
property url
Get/Set the data associated with the GeoJSON Layer
Returns
String
class arcgis.mapping.ogc.GeoRSSLayer(url, **kwargs)
Bases: arcgis.mapping.ogc._base.BaseOGC
The GeoRSSLayer class represents a GeoRSS feed as a layer. It exports custom RSS tags as additional attribute fields in the form of simple strings or an array of JSON objects.

| Argument | Description |
|---|---|
| url | Required String. The URL of the GeoRSS service. |
| copyright | Optional String. Describes limitations and usage of the data. |
| line_symbol | Optional Dict. The symbol for the polyline data in the GeoRSS. |
| opacity | Optional Float. This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque. |
| point_symbol | Optional Dict. The symbol for the point data in the GeoRSS. |
| polygon_symbol | Optional Dict. The symbol for the polygon data in the GeoRSS. |
| title | Optional String. The title of the layer used to identify it in places such as the Legend and LayerList widgets. |
| scale | Optional Tuple. The min/max scale of the layer, where the positions are (min, max) as float values. |
property copyright
property line_symbol
Gets/Sets the Line Symbol for Polyline Geometries
Returns
InsensitiveDict
property opacity
This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque.
Returns
Float
property point_symbol
Gets/Sets the Point Symbol for Point Geometries
Returns
InsensitiveDict
property polygon_symbol
Gets/Sets the Polygon Symbol for Polygon Geometries
Returns
InsensitiveDict
property properties
Returns the properties of the Layer.
Returns
PropertyMap
property scale
Gets/Sets the Min/Max Scale for the layer
property title
The title of the layer used to identify it in places such as the Legend and LayerList widgets.
Returns
String
## OGCFeatureService
class arcgis.mapping.ogc.OGCFeatureService(url, gis=None)
Bases: object
Represents the Hosted OGC Feature Server
| Argument | Description |
|---|---|
| url | Required String. The web address endpoint. |
| gis | Optional GIS. The connection object. |
property collections
Yields all the OGC Feature Service Layers within the service.
Returns
Iterator[OGCCollection]
property conformance
Provides the API conformance with the OGC standard.
Returns
Dict[str, Any]
property properties
returns the service properties
## OGCCollection
class arcgis.mapping.ogc.OGCCollection(url: str, gis: arcgis.gis.GIS = None)
Bases: object
Represents a single OGC dataset
| Argument | Description |
|---|---|
| url | Required String. The web address endpoint. |
| gis | Optional GIS. The connection object. |
get(feature_id: int) → Dict[str, Any]
Gets an individual feature on the service
Returns
Dict[str, Any]
property properties
returns the service properties
query(query: str = None, limit: int = 10000, bbox: List[float] = None, bbox_sr: int = None, time_filter: str = None, return_all=False, **kwargs) → Union[Dict[str, Any], pandas.core.frame.DataFrame]
Queries the OGC Feature Service Layer and returns the information as a Spatially Enabled DataFrame.
| Argument | Description |
|---|---|
| query | Optional String. A SQL-based query applied to the service. |
| limit | Optional Integer. The number of records to limit to. The default is 10,000. |
| bbox | Optional List[float]. The bounding box to limit the search to. |
| bbox_sr | Optional Integer. The coordinate reference system as a WKID. |
| time_filter | Optional String. The dates to filter time by. |
Returns
Union[Dict[str, Any], pd.DataFrame]
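Putting the pieces together, a hedged sketch (an editor's illustration; the service URL is a hypothetical placeholder) of walking a service's collections and querying one of them:

```python
# Iterate the collections of a hosted OGC Feature Server and query
# the first one into a Spatially Enabled DataFrame.
from arcgis.gis import GIS
from arcgis.mapping.ogc import OGCFeatureService

service = OGCFeatureService(
    url="https://example.com/ogc/rest/services/Demo/OGCFeatureServer",
    gis=GIS(),
)
for collection in service.collections:
    df = collection.query(limit=100, return_all=False)
    print(collection.properties, len(df))
    break  # inspect only the first collection
```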
## WMSLayer
class arcgis.mapping.ogc.WMSLayer(url, version='1.3.0', gis=None, **kwargs)
Bases: arcgis.mapping.ogc._base.BaseOGC
Represents a Web Map Service, which is an OGC web service endpoint.
| Argument | Description |
|---|---|
| url | Required String. The administration URL for the ArcGIS Server. |
| version | Optional String. The version number of the WMS service. The default is 1.3.0. |
| gis | Optional GIS. The GIS used to reference the service. The arcgis.env.active_gis is used if not specified. |
| copyright | Optional String. Describes limitations and usage of the data. |
| scale | Optional Tuple. The min/max scale of the layer, where the positions are (min, max) as float values. |
| opacity | Optional Float. This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque. |
| title | Optional String. The title of the layer used to identify it in places such as the Legend and Layer List widgets. |
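A short usage sketch (an editor's illustration; the endpoint URL is a hypothetical placeholder):

```python
# Wrap an OGC WMS endpoint and list the sublayers it advertises.
from arcgis.mapping.ogc import WMSLayer

wms = WMSLayer(
    url="https://example.com/geoserver/wms",  # hypothetical WMS endpoint
    version="1.3.0",
    title="Weather",
)
print(wms.layers)  # the layers of the WMS service
```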
property copyright
property layers
returns the layers of the WMS Layer
property opacity
This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque.
Returns
Float
property properties
Returns the properties of the Layer.
Returns
PropertyMap
property scale
Gets/Sets the Min/Max Scale for the layer
property title
The title of the layer used to identify it in places such as the Legend and LayerList widgets.
Returns
String
## WMTSLayer
class arcgis.mapping.ogc.WMTSLayer(url, version='1.0.0', gis=None, **kwargs)
Bases: arcgis.mapping.ogc._base.BaseOGC
Represents a Web Map Tile Service, which is an OGC web service endpoint.
| Argument | Description |
|---|---|
| url | Required String. The web address of the endpoint. |
| version | Optional String. The version number of the WMTS service. The default is 1.0.0. |
| gis | Optional GIS. The GIS used to reference the service. The arcgis.env.active_gis is used if not specified. |
| copyright | Optional String. Describes limitations and usage of the data. |
| opacity | Optional Float. This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque. |
| scale | Optional Tuple. The min/max scale of the layer, where the positions are (min, max) as float values. |
| title | Optional String. The title of the layer used to identify it in places such as the Legend and Layer List widgets. |
property copyright
property opacity
This value can range between 1 and 0, where 0 is 100 percent transparent and 1 is completely opaque.
Returns
Float
property properties
Returns the properties of the Layer.
Returns
PropertyMap
property scale
Gets/Sets the Min/Max Scale for the layer
property title
The title of the layer used to identify it in places such as the Legend and LayerList widgets.
Returns
String
https://en.m.wikipedia.org/wiki/Inclusion_map
# Inclusion map
In mathematics, if ${\displaystyle A}$ is a subset of ${\displaystyle B,}$ then the inclusion map (also inclusion function, insertion,[1] or canonical injection) is the function ${\displaystyle \iota }$ that sends each element ${\displaystyle x}$ of ${\displaystyle A}$ to ${\displaystyle x,}$ treated as an element of ${\displaystyle B:}$
${\displaystyle \iota :A\rightarrow B,\qquad \iota (x)=x.}$
Figure caption: ${\displaystyle A}$ is a subset of ${\displaystyle B,}$ and ${\displaystyle B}$ is a superset of ${\displaystyle A.}$
A "hooked arrow" (U+21AA RIGHTWARDS ARROW WITH HOOK)[2] is sometimes used in place of the function arrow above to denote an inclusion map; thus:
${\displaystyle \iota :A\hookrightarrow B.}$
(However, some authors use this hooked arrow for any embedding.)
This and other analogous injective functions[3] from substructures are sometimes called natural injections.
Given any morphism ${\displaystyle f}$ between objects ${\displaystyle X}$ and ${\displaystyle Y}$, if there is an inclusion map into the domain ${\displaystyle \iota :A\to X,}$ then one can form the restriction ${\displaystyle f\,\iota }$ of ${\displaystyle f.}$ In many instances, one can also construct a canonical inclusion into the codomain ${\displaystyle R\to Y}$ known as the range of ${\displaystyle f.}$
## Applications of inclusion maps
Inclusion maps tend to be homomorphisms of algebraic structures; thus, such inclusion maps are embeddings. More precisely, given a substructure closed under some operations, the inclusion map will be an embedding for tautological reasons. For example, for some binary operation ${\displaystyle \star ,}$ to require that
${\displaystyle \iota (x\star y)=\iota (x)\star \iota (y)}$
is simply to say that ${\displaystyle \star }$ is consistently computed in the sub-structure and the large structure. The case of a unary operation is similar; but one should also look at nullary operations, which pick out a constant element. Here the point is that closure means such constants must already be given in the substructure.
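For instance, the inclusion of the integers into the rationals is a ring embedding for exactly this tautological reason: both operations and the constant ${\displaystyle 1}$ are computed identically on either side,
${\displaystyle \iota :\mathbb {Z} \hookrightarrow \mathbb {Q} ,\qquad \iota (m+n)=m+n=\iota (m)+\iota (n),\qquad \iota (mn)=mn=\iota (m)\,\iota (n),\qquad \iota (1)=1.}$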
Inclusion maps are seen in algebraic topology where if ${\displaystyle A}$ is a strong deformation retract of ${\displaystyle X,}$ the inclusion map yields an isomorphism between all homotopy groups (that is, it is a homotopy equivalence).
Inclusion maps in geometry come in different kinds: for example embeddings of submanifolds. Contravariant objects (which is to say, objects that have pullbacks; these are called covariant in an older and unrelated terminology) such as differential forms restrict to submanifolds, giving a mapping in the other direction. Another example, more sophisticated, is that of affine schemes, for which the inclusions
${\displaystyle \operatorname {Spec} \left(R/I\right)\to \operatorname {Spec} (R)}$
and
${\displaystyle \operatorname {Spec} \left(R/I^{2}\right)\to \operatorname {Spec} (R)}$
may be different morphisms, where ${\displaystyle R}$ is a commutative ring and ${\displaystyle I}$ is an ideal of ${\displaystyle R.}$
https://www.albert.io/ie/algebra/verifying-zeros-of-polynomial-functions-1
?
Free Version
Easy
# Verifying Zeros of Polynomial Functions 1
ALGEBR-MYRBNY
Determine which of the choices below are solutions to the polynomial equation:
$$y=x^3-5x^2-x+5$$
A. $x=-1$
B. $x=1$
C. $x=-5$
D. $x=5$
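For reference, the right-hand side factors by grouping:
$$x^3-5x^2-x+5=x^2(x-5)-(x-5)=(x-5)(x^2-1)=(x-5)(x-1)(x+1)$$
so the zeros are $x=5$, $x=1$ and $x=-1$: choices A, B and D are solutions, while C is not.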
# garage.envs.metaworld_set_task_env¶
Environment that wraps a MetaWorld benchmark in the set_task interface.
class MetaWorldSetTaskEnv(benchmark=None, kind=None, wrapper=None, add_env_onehot=False)
Environment form of a MetaWorld benchmark.
This class is generally less efficient than using a TaskSampler, if that can be used instead, since each instance of this class internally caches a copy of each environment in the benchmark.
In order to sample tasks from this environment, a benchmark must be passed at construction time.
Parameters
• benchmark (metaworld.Benchmark or None) – The benchmark to wrap.
• kind (str or None) – Whether to use test or train tasks.
• wrapper (Callable[garage.Env, garage.Env] or None) – Wrapper to apply to env instances.
• add_env_onehot (bool) – If true, a one-hot representing the current environment name will be added to the environments. Should only be used with multi-task benchmarks.
Raises
ValueError – If kind is not ‘train’, ‘test’, or None. Also raised if add_env_onehot is used on a metaworld meta-learning (not multi-task) benchmark.
property num_tasks(self)
Part of the set_task environment protocol.
sample_tasks(self, n_tasks)
Part of the set_task environment protocol. To call this method, a benchmark must have been passed in at environment construction.
Parameters
Returns
Return type
set_task(self, task)
Part of the set_task environment protocol.
Parameters
property action_space(self)
akro.Space: The action space specification.
property observation_space(self)
akro.Space: The observation space specification.
property spec(self)
EnvSpec: The environment specification.
property render_modes(self)
list: A list of string representing the supported render modes.
step(self, action)
Step the wrapped env.
Parameters
action (np.ndarray) – An action provided by the agent.
Returns
The environment step resulting from the action.
Return type
EnvStep
reset(self)
Reset the wrapped env.
Returns
The first observation conforming to observation_space.
dict: The episode-level information.
Note that this is not part of env_info provided in step(). It contains information of the entire episode, which could be needed to determine the first action (e.g. in the case of goal-conditioned or MTRL).
Return type
numpy.ndarray
render(self, mode)
Render the wrapped environment.
Parameters
mode (str) – the mode to render with. The string must be present in self.render_modes.
Returns
the return value for render, depending on each env.
Return type
object
visualize(self)
Creates a visualization of the wrapped environment.
close(self)
Close the wrapped env.
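For orientation, a minimal usage sketch based on the signatures above (assumptions: the metaworld package is installed, garage.envs re-exports the class, and the benchmark name 'push-v1' is purely illustrative):
import metaworld
from garage.envs import MetaWorldSetTaskEnv

benchmark = metaworld.ML1('push-v1')            # a meta-learning benchmark
env = MetaWorldSetTaskEnv(benchmark=benchmark, kind='train')

tasks = env.sample_tasks(5)                     # set_task protocol: draw 5 tasks
env.set_task(tasks[0])                          # activate the first task
first_obs, episode_info = env.reset()           # observation + episode-level info dict
env_step = env.step(env.action_space.sample())  # returns an EnvStep
env.close()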
Cone of non-linear dynamical system and group preserving schemes. (English) Zbl 1243.65084
Summary: The first step in investigating the dynamics of a continuous time system described by a set of ordinary differential equations is to integrate to obtain trajectories. In this paper, we convert the non-linear dynamical system $\dot{x}=F(x,t)$, $x\in\mathbb{R}^{n}$, into an augmented dynamical system of Lie type $\dot{X}=A(X,t)X$, $X\in\mathbb{M}^{n+1}$, $A\in so(n,1)$ locally. In doing so, the inherent symmetry group and the (null) cone structure of the non-linear dynamical system are brought out; then the Cayley transformation and the Padé approximants are utilized to develop group preserving schemes in the augmented space. The schemes are capable of updating the augmented state point to locate automatically on the cone at the end of each time increment. By projection we thus obtain the numerical schemes on the state space $x$, which have a form similar to the Euler scheme but with adaptive stepsize. Furthermore, the schemes are shown to have the same asymptotic behavior as the original continuous system and do not induce spurious solutions or ghost fixed points. Some examples are used to test the performance of the schemes. Because the numerical implementations are easy and parsimonious and also have high computational efficiency and accuracy, these schemes are recommended for use in physical calculations.
MSC:
• 65L06 Multistep, Runge-Kutta, and extrapolation methods
• 70-08 Computational methods (mechanics of particles and systems)
• 37D45 Strange attractors, chaotic dynamics
• 37M99 Approximation methods and numerical treatment of dynamical systems
• 70K55 Transition to stochasticity (chaotic behavior)
# Solve the following :
Question:
Suppose the friction coefficient between the ground and the ladder of the previous problem is $0.540 .$ Find the maximum weight of a mechanic who could go up and do the work from the same position of the ladder.
Solution:
Here, $\mu=0.54$. (The data of the previous problem carry over: the ladder is 10 m long and weighs 16 kg, with its weight acting at the midpoint, and the mechanic stands 8 m up the ladder.)
Translational equilibrium gives
$N_{1}=mg+16g$ — (i)
$N_{2}=f=\mu N_{1}$ — (ii)
Rotational equilibrium about the foot of the ladder gives
$N_{2}\left(10 \sin 53^{\circ}\right)=mg\left(8 \cos 53^{\circ}\right)+16g\left(5 \cos 53^{\circ}\right)$ — (iii)
Substituting (i) and (ii) into (iii), with $\sin 53^{\circ}=0.8$ and $\cos 53^{\circ}=0.6$: $4.32(m+16)g=4.8mg+48g$, so $0.48m=21.12$ and
$m=44 \mathrm{~kg}$
# Which of the following has the strongest bond?A.HFB.HClC.HBrD.HI
Hint: For the bond to be stronger, the bond enthalpy should be more and acidic strength should be low. When we go down in the periodic table, the atoms get larger and acidic strength gets stronger which means the bond strength decreases.
Complete step by step answer:
The bond strength depends upon the following factors:
When the bond order is large, the bond length is small and the bond is stronger.
When the bond angle is small, the bond length is larger and the bond is weaker.
The stability of acids decreases due to decrease in bond dissociation enthalpy of $H - X$ bond from $HF$ to $HI$. Therefore, the acidic strength increases from $HF$ to $HI$.
The order of acidic character of halogens is $H - I < H - Br < H - Cl < H - F$
Since $H - I$ is the strongest acid, its bond enthalpy is the lowest, and hence $H - I$ is the weakest bond.
Conversely, $H - F$ is the weakest acid, its bond enthalpy is the highest, and so $H - F$ is the strongest bond.
Therefore, option (A) is correct.
Note: The bond strength depends upon the bond enthalpy. Bond strength can be described, in chemistry, as the strength with which a chemical bond holds two atoms together. It is conventionally measured in terms of the amount of energy, in kilocalories per mole, required to break the bond.
## Unimodular Row Reduction: Systems of Linear Diff. Eq. with Constant Coefficients
The letter D is used to denote differentiation of a function of t.
x and y are both functions of t.
Using unimodular row reduction, I want to solve the system:
(D² – 1)x + (D² – D)y = –2 sin(t)
(D² + D)x + D²y = 0
I have already reduced the system to:
(D+1)x + Dy = 2sin(t)
0x + 0y = cos(t)
I notice that from the second equation, the system is consistent only if cos(t) = 0 in which case y will be a free variable, but how do I proceed from there to determine the solution to the system?
# the topology of power series ring
Hi, everyone.
Let $A$ be a complete DVR with uniformizer $t$, $R:=A[[X]]$. What is the natural topology of $R$ ?
The ring $R$ is complete both for the $\langle X \rangle$-adic topology and for the $\langle t, X \rangle$-adic topology. So it depends on what you want to study about $R$. – Leo Alonso Apr 29 '13 at 14:20
I'm sure an answer can be very quickly found at math.stackexchange.com – Olivier Apr 29 '13 at 14:21
The question is subtler than it appears, depending on the topology you choose its formal spectrum is completely different. – Leo Alonso Apr 29 '13 at 14:27
I think there is no "natural" topology, as Leo says. – Filippo Alberto Edoardo Apr 29 '13 at 15:39
discrete valuation ring I suppose, en.wikipedia.org/wiki/Discrete_valuation_ring – Pietro Majer Apr 29 '13 at 19:52
# Exctrating data information from a file.txt
I really did previous long search in the site and I couldn't find any answer that might help me. So I am asking it.
I have the following data set, https://onedrive.live.com/redir?resid=4BEAF2395C9CC198%215058 (this is only a very small part, as a sample, OK?), so I tried the following code, which I copied from the forum:
$1 = OpenRead["C:\\Users\\decicco\\SkyDrive\\Documentos\\ProjetoFinal\\Simbad\\Teste_dataMining.txt"];
$2 = ReadList[$1, String];
Close[$1];

Flatten@StringCases[$2,
  "Coordinates(Gal,ep=J2000,eq=2000): " ~~ (x : NumberString ...) ~~ (y : ___ ~~ NumberString ...) ->
   ToExpression@{x, y}]
Out[64]= {"307.0804", " +06.8343 ", "307.7283", " 10.4014 "}

Parallaxes, error and quality:
In[125]:= Flatten@StringCases[$2,
  "Parallax: " ~~ (x : ___ ~~ NumberString ...) ~~
   "[" ~~ (y : ___ ~~ NumberString ...) ~~ "]" ~~ (z : _?LetterQ ...) ->
   ToExpression@{x, y, z}]
Out[125]= {"0.65 ", "0.44", "", "0.95 ", "0.36", ""}

Spectral type:
Flatten@StringCases[$2, "Spectral type: " ~~ x : ___ -> x]
Out[108]= {"B0.5Ia C ~ ", "B2.5Ib C ~ "}

Catalog identifiers:
Flatten@StringCases[$2,
  RegularExpression["(?m)^Identifiers "] ~~ "(" ~~ DigitCharacter ~~ ") :" ~~ __ ~~
   RegularExpression["(?m)^Notes "]]
Out[146]= {}
As you can see from the outputs, they are not quite right.
What I need:
As you can see in the link, each star is marked like this: Object HR 5027 ---, and the designation HR 5027 is the star for which I need the information: galactic coordinates, parallax, spectral type and identifiers. How can I build a list or table like the following, or similar:
{{Star 1 , Galactic Coordinate ->......, ->.........;Paralaxes-> ...., error-> ...., quality-> (a letter can be A, B or C);Identifiers-> ....,.....,......, etc. },{Star 2 , Galactic Coordinate ->......, ->.........;Paralaxes-> ...., error-> ...., quality-> (a letter can be A, B or C);Identifiers-> ....,.....,......, etc. }]
I really do not know about associations (<|...->...|>), and I do not know if that is the case here. Is there any good tutorial from which I can learn about associations (as I always have to work with large databases with many sub-items)?
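(For what it's worth, an association is just a key–value map; a minimal sketch with hypothetical star values:
star = <|"Name" -> "HR 5027", "Parallax" -> 0.65, "Quality" -> "A"|>;
star["Parallax"] (* gives 0.65 *)
so one association per star, collected in a list, would match the structure described above.)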
26/12/2014:
I made some progress.

Taking the coordinates, two for each star:
Flatten@StringCases[$2,
  "Coordinates(Gal,ep=J2000,eq=2000): " ~~ (x : NumberString ...) ~~ (y : ___ ~~ NumberString ...) ->
   ToExpression@{x, y}]
{"307.0804", " +06.8343 ", "307.7283", " 10.4014 "}

Taking parallax, error and quality for each star:
Flatten@StringCases[$2,
  "Parallax: " ~~ (x : ___ ~~ NumberString ...) ~~
   "[" ~~ (y : ___ ~~ NumberString ...) ~~ "]" ~~
   RegularExpression["\\s"] ~~ (z : WordCharacter ...) ->
   ToExpression@{x, y, z}]
{"0.65 ", "0.44", "A", "0.95 ", "0.36", "A"}

Taking the spectral types:
Flatten@StringCases[$2, "Spectral type: " ~~ x : Except["~"] ... -> x]
Out[21]= {"B0.5Ia C ", "B2.5Ib C "}

But my tries at getting the identifiers for each star did not work:
Flatten@StringCases[$2,
  "Identifiers" ~~ RegularExpression["\\s+"] ~~ "(" ~~
   DigitCharacter ... ~~ "):" ~~ WordBoundary ~~
   x : LetterCharacter ... ~~ WordBoundary ~~ "Notes" :> x]
Out[121]= {}
• I am not sure if I understood your problem right: seeking a solution to extract certain information stored in a text file. I.e. a kind of pattern search in the text file? Dec 26 '14 at 17:38
• Yes, exactly. I have a big .txt containing information about stars, and I need to get the coordinates, parallax (including the error measurements), spectral types and finally identifiers Dec 26 '14 at 19:37
O.K., I'll give it a try. I have saved your data in a file "stars.txt" on my computer. Now I do the following (yes, a brute-force attack ;-) ). Here is a step-by-step approach...
in = Import["stars.txt", "Words"]; (* import the text file *)
tempi = StringPosition[in, "Identifiers"]; (* occurrences of "Identifiers" *)
pi = Position[(Length /@ tempi), 1] // Flatten (* get the positions of "Identifiers" *)
tempn = StringPosition[in, "Notes"]; (* same for "Notes" *)
pn = Position[(Length /@ tempn), 1] // Flatten;
take = {pi, pn}\[Transpose]; (* build the "cut-off" coordinates *)
cut = in[[#[[1]] ;; #[[2]]]] & /@ take; (* cut out the desired areas of the list of words *)
cut = Drop[#, 2] & /@ cut; (* drop "Identifiers, (30):" and so on *)
Drop[#, -1] & /@ cut (* drop the word "Notes" *)
Now one has the area between "Identifiers" and "Notes" as a list of words and can process these. I'm still not sure whether this might help you (and I'm not familiar with star positions and so on). But now you can build pairs from this list and so get the desired information, and put it in a Dataset or whatever else.
• Thanks for the help. But this code separates the letters and numbers, so it will not work. Well, the identifiers are organized in columns, so is there a way to get each element from the row of each column? Dec 27 '14 at 12:02
• You can get the same organization of the data, quite easily. Name the last output (the Drop... Line) data and then: data = Partition[#,2]& /@ data and finally Map[neat,data,{2}] where neat[{x_,y_}]:=StringJoin[x," ",y] and the identifiers are as in the text file. Dec 27 '14 at 13:32
• Yes, mgmamer, it worked fine!! Thanks!! Dec 27 '14 at 13:52
One approach is to import the data and then parse it using Mathematica's string commands. To get you started, here is the import command and a search for the locations where the word "Object" appears:
q = Import["file.txt"];
StringPosition[q, "Object"]
You can replace "Object" with whatever word you want (like "Coordinates") and find where in the input string the various words are. There are many other commands you might find useful for parsing text, including 'StringSplit' and 'StringReplace'. You can find a list of the commands that start with String using
?String*
• Hi, I know the commands; I think your approach is not adequate, as the parameters repeat for each star in the data.txt. What I need is code to get the coordinates, parallaxes, spectral type and identifiers for each star and consolidate them into one big dataset or association, then do the same operations for other catalogues, and finally join the big catalogue datasets into a single dataset that contains the various catalogues with their own stars + parameters. The sample that I put here is from one specific catalogue, but there are many more. Dec 26 '14 at 20:32
• Then you need to find something that defines where the data is in the input: I was suggesting that you might calibrate it with respect to words. You could also try position (how many characters/words in succession). There must be some structure in the input; this is what you must exploit. Dec 26 '14 at 21:28
In the mathematical topic of wavelet theory, the cascade algorithm is a numerical method for calculating function values of the basic scaling and wavelet functions of a discrete wavelet transform using an iterative algorithm. It starts from values on a coarse sequence of sampling points and produces values for successively more densely spaced sequences of sampling points. Because it applies the same operation over and over to the output of the previous application, it is known as the cascade algorithm.
## Successive approximation
The iterative algorithm generates successive approximations to ψ(t) or φ(t) from {h} and {g} filter coefficients. If the algorithm converges to a fixed point, then that fixed point is the basic scaling function or wavelet.
The iterations are defined by
${\displaystyle \varphi ^{(k+1)}(t)=\sum _{n=0}^{N-1}h[n]{\sqrt {2}}\varphi ^{(k)}(2t-n)}$
for the kth iteration, where an initial φ(0)(t) must be given.
The frequency-domain estimate of the basic scaling function is given by
${\displaystyle \Phi ^{(k+1)}(\omega )={\frac {1}{\sqrt {2}}}H\left({\frac {\omega }{2}}\right)\Phi ^{(k)}\left({\frac {\omega }{2}}\right)}$
and the limit can be viewed as an infinite product in the form
${\displaystyle \Phi ^{(\infty )}(\omega )=\prod _{k=1}^{\infty }{\frac {1}{\sqrt {2}}}H\left({\frac {\omega }{2^{k}}}\right)\Phi ^{(\infty )}(0).}$
If such a limit exists, the spectrum of the scaling function is
${\displaystyle \Phi (\omega )=\prod _{k=1}^{\infty }{\frac {1}{\sqrt {2}}}H\left({\frac {\omega }{2^{k}}}\right)\Phi ^{(\infty )}(0)}$
The limit does not depend on the initial shape assumed for φ(0)(t). This algorithm converges reliably to φ(t), even if it is discontinuous.
From this scaling function, the wavelet can be generated from
${\displaystyle \psi (t)=\sum _{n=-\infty }^{\infty }g[n]{\sqrt {2}}\varphi (2t-n).}$
Successive approximation can also be derived in the frequency domain.
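To make the recursion concrete, here is a minimal numerical sketch of the cascade iteration on a dyadic grid in Python. The Daubechies-4 filter is used as an example {h[n]}; this is illustrative only, and any filter with ∑h[n] = √2 and a convergent cascade works the same way.
import numpy as np

# Daubechies-4 low-pass filter, normalized so that sum(h) = sqrt(2)
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))
N = len(h)

J = 10                              # number of cascade iterations
M = 2 ** J                          # grid points per unit interval
t = np.arange((N - 1) * M + 1) / M  # grid covering the support [0, N-1]

phi = np.where(t < 1.0, 1.0, 0.0)   # phi^(0): box function on [0, 1)

for _ in range(J):
    # phi^(k+1)(t) = sum_n h[n] * sqrt(2) * phi^(k)(2t - n);
    # 2t - n lands back on grid points, so np.interp just looks values up
    phi = np.sqrt(2) * sum(h[n] * np.interp(2 * t - n, t, phi, left=0.0, right=0.0)
                           for n in range(N))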
# Deciding whether two metrics are topologically equivalent in the space $C^1([0,1])$
Consider the space $C^1([0,1])$ and the function $d:C^1([0,1])\times C^1([0,1]) \to \mathbb R$ defined as $d(f,g)=|f(0)-g(0)|+\sup_{x \in [0,1]}|f'(x)-g'(x)|$. Decide whether the metrics $d$ and $d_{\infty}$ are topologically equivalent in $C^1([0,1])$ (where $d_{\infty}(f,g)=\sup_{x \in [0,1]}|f(x)-g(x)|$).
My attempt at a solution:
If two metrics are topologically equivalent, then they have the same convergent sequences. Honestly, I couldn't do anything. I am trying to define a sequence of functions $\{f_n\}_{n \in \mathbb N}$ such that $f_n \to f$ in, for instance, $(C^1([0,1]),d)$ but $f_n \not \to f$ in $(C^1([0,1]),d_{\infty})$. Could it be that these two metrics are topologically equivalent? If this is the case, how could I prove it? If not, I would appreciate any hint to find an adequate sequence of functions that works for what I am trying to prove.
How about trying to figure out the open sets in each of the topologies? If there is some set that is open in one topology but not the other, then the topologies are different. – user99680 Dec 6 '13 at 4:01
Hmm, I am not sure if this set would work, but the set $U=\{f \in C^1[0,1]: f(x) \neq 0 \space \forall \space x \in [0,1]\}$ is open with $d_{\infty}$. Up to now, I couldn't prove/disprove is open with $d$. – user100106 Dec 6 '13 at 4:12
Let me check if it works. – user99680 Dec 6 '13 at 4:23
Set $f_n(x) = \frac{1}{n} \sin(nx)$. It converges to zero in $(C^1([0,1]),d_{\infty})$, since $d_{\infty}(f_n,0)\le 1/n$, but it does not converge to zero in $(C^1([0,1]),d)$: here $f_n'(x) = \cos(nx)$, so $d(f_n,0)=|f_n(0)|+\sup_{x}|\cos(nx)|=1$ for every $n$, because there is always a point $x$ such that $f'_n(x) = 1$.
## Towards Robust Measurement of Pelvic Parameters from AP Radiographs using Articulated 3D Models
Please always quote using this URN: urn:nbn:de:0297-zib-53707
• Patient-specific parameters such as the orientation of the acetabulum or pelvic tilt are useful for custom planning for total hip arthroplasty (THA) and for evaluating the outcome of surgical interventions. The gold standard in obtaining pelvic parameters is from three-dimensional (3D) computed tomography (CT) imaging. However, this adds time and cost, exposes the patient to a substantial radiation dose, and does not allow for imaging under load (e.g. while the patient is standing). If pelvic parameters could be reliably derived from the standard anteroposterior (AP) radiograph, preoperative planning would be more widespread, and research analyses could be applied to retrospective data, after a postoperative issue is discovered. The goal of this work is to enable robust measurement of two surgical parameters of interest: the tilt of the anterior pelvic plane (APP) and the orientation of the natural acetabulum. We present a computer-aided reconstruction method to determine the APP and natural acetabular orientation from a single, preoperative X-ray. It can easily be extended to obtain other important preoperative and postoperative parameters solely based on a single AP radiograph.
# The exact definition of an orthonormal basis?
In the area of bilinear forms, my lecture notes say that there is a basis $\{e_i\}$ of $V$ with respect to which $\tau (e_i, e_j) = \delta_{ij}$ where $$\delta_{ij} = \begin{cases} 1 & i=j \\ 0 & i \neq j \end{cases}$$ and that a basis of a Euclidean space $V$ with this property is called an orthonormal basis of $V$. Does this mean that there is only one 'orthonormal basis' which is $(1,0,\dots,0), (0,1,\dots,0),\dots$ etc?
No. For example, $\{(1/\sqrt2, 1/\sqrt 2)(1/\sqrt2, -1/\sqrt 2)\}$ is an orthonormal basis of $\Bbb R^2$. – David Mitra Apr 6 '12 at 15:02
Note the condition $\tau(e_i,e_j)=\delta_{ij}$ describes how the basis elements act on each other; it is not describing what the basis elements look like exactly. – David Mitra Apr 6 '12 at 15:08
en.wikipedia.org/wiki/Orthonormal_basis – user2468 Apr 6 '12 at 15:18
The general feeling is, that an orthonormal basis consists of vectors that are orthogonal to one another and have length $1$. The standard basis is one example, but you can get any number of orthonormal bases by applying an isometric operation to this basis: For instance, the comment of David Mitra follows by applying the matrix $$M := \frac{1}{\sqrt{2}} \cdot \begin{pmatrix} 1 & \hphantom{-} 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} \cos(\pi/4) & \sin(\pi/4) \\ \sin(\pi/4) & -\cos(\pi/4) \end{pmatrix}$$ to the standard basis. Observe that $M$ is just a reflection along the line with angle $\pi/8$. You can also obtain other orthonormal bases by applying rotations or, more general, by applying orthogonal transformations to the standard basis.
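One can check directly that $M$ is orthogonal, and hence maps orthonormal bases to orthonormal bases: $$M M^{\mathsf{T}} = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$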
Research
1. Short time large deviations of the KPZ equation (with Li-Cheng Tsai), [arXiv:2009.10787] Accepted by Communications in Mathematical Physics (2021)
Abstract We establish the Freidlin–Wentzell Large Deviation Principle (LDP) for the Stochastic Heat Equation with multiplicative noise in one spatial dimension. Analyzing this variational problem under the narrow wedge initial data, we prove a quadratic law for the near-center tail and a 5/2 law for the deep lower tail for the KPZ equation under the short time scaling.
2. Lyapunov exponents of the half-line SHE, 2020, [arXiv:2007.10212] Submitted
Abstract We prove an upper tail large deviation principle for the half-line KPZ equation, with rate function $\tfrac{2}{3}s^{3/2}$.
3. Lyapunov exponents of the SHE for general initial data (with Promit Ghosal), 2020 [arxiv:2007.06505] Submitted
Abstract We consider the (1+1)-dimensional stochastic heat equation (SHE) with multiplicative white noise and the Cole-Hopf solution of the Kardar-Parisi-Zhang (KPZ) equation. We show an exact way of computing the Lyapunov exponents of the SHE for a large class of initial data which includes any bounded deterministic positive initial data and the stationary initial data. As a consequence, we derive exact formulas for the upper tail large deviation rate functions of the KPZ equation for general initial data.
4. The stochastic telegraph equation limit of the stochastic higher spin six vertex model [arXiv:2005.00620] [journal version] Electronic Journal of Probability (2020), Vol. 25, no. 148, 1-30.
Abstract In this paper, we prove that the stochastic telegraph equation arises as a scaling limit of the stochastic higher spin six vertex (SHS6V) model with general spin I/2,J/2. This extends results of Borodin and Gorin which focused on the I=J=1 six vertex case and demonstrates the universality of the stochastic telegraph equation in this context. We also provide a functional extension of the central limit theorem obtained in [Borodin and Gorin 2019, Theorem 6.1].
5. KPZ equation limit of stochastic higher spin six vertex model. Mathematical Physics, Analysis and Geometry (2020), Vol 23, no. 1, 1-118 [arXiv:1905.11155] [journal version]
Abstract We consider the stochastic higher spin six vertex (SHS6V) model introduced in [Corwin-Petrov, 2016] with general integer spin parameters $I,J$. Starting from near stationary initial condition, we prove that the SHS6V model converges to the KPZ equation under weakly asymmetric scaling.
6. Markov duality for stochastic six vertex model Electronic Communications in Probability (2019), Vol 24, no. 67, 1-17 [arXiv:1901.00764] [journal version]
Abstract We prove that Schütz’s ASEP Markov duality functional is also a Markov duality functional for the stochastic six vertex model. We introduce a new method that uses induction on the number of particles to prove the Markov duality.
7. Second order behavior of the block counting process of beta coalescents (with Bastien Mallein) Electronic Communications in Probability (2017), Vol 22, no. 61, 1-8 [arXiv:1606.06998] [journal version]
Abstract The Beta coalescents are stochastic processes modeling the genealogy of a population. They appear as the rescaled limits of the genealogical trees of numerous stochastic population models. In this article, we take interest in the number of blocks at small times in the Beta coalescent. Berestycki, Berestycki and Schweinsberg proved a law of large numbers for this quantity. Recently, Limic and Talarczyk proved that a functional central limit theorem holds as well. We give here a simple proof for a one-dimensional version of this result, using a coupling between Beta coalescents and continuous-time branching processes.
Project deliverable Open Access
# MP0 for each case study
Mikkelsen, Nina
### Citation Style Language JSON Export
{
"publisher": "Zenodo",
"DOI": "10.5281/zenodo.1445563",
"language": "eng",
"title": "MP0 for each case study",
"issued": {
"date-parts": [
[
2018,
2,
28
]
]
},
"abstract": "<p>This report is the detailed description of the current state of affairs in each FarFish case study (CS). It also adresses potential improvements by suggesting case study specific objectives. The MP0 will be a significant chapter in the management plan invitation to be sent to the operators, after the dialogue process including authorities and operators. MP0 describes the current status in the fishery in question and is the background for the development of the MP1 (the tailor-made good practice recommendation). MP0s focus on the current state of affairs, the main problems faced and form the basis for the suggested case study (CS) objectives. In advance of the project meeting in November 2017, a common template was designed to collect and compile data from the different CSs. Most of the CS leaders were present at the meeting and provided useful information. After some minor revisions of the template, the MP0s were prepared in collaboration with the CS leaders and FarFish partners. The MP0sinclude suggestionsfor Responsive Fisheries Management System (RFMS) agencies (authorities, operators) and comprehensive contact information for relevant stakeholders. Further, the MP0 compile the available information on the current state of the fisheries, geographical and biological boundaries, management, assessment, preliminary value chain information, the identified challenges, the suggested CS objectives and potential improvements made by FarFish. The potential for improvement using new or existing approaches/tools are suggested for all CSs, although preliminary as a thorough examination of data availability and quality is required. In two CSs with sustainable fisheries partnership agreements (SFPA), where several species are targeted by different fleets, the development of a CS specific MP0 covering all the target species was considered unattainable. Consequently the CS leaders asked to prioritize which fishery to address in the MP0 based on their challenges. Hence, the MP0s focus on the following fisheries in the CS; mixed fishery in the South East Atlantic (FAO 47), mixed fishery in the South West Atlantic (FAO 41), the tuna fishery in Cape Verde (SFPA), the black hake fishery in Senegal (SFPA), the shrimp fishery in Mauritania (SFPA) and the tuna fishery in Seychelles (SFPA). The identified challenges in these fisheries and the suggested CS objectives are relevant for the upcoming identification of indicators and outcome targets (OT), which is the next step in the RFMS process.</p>",
"author": [
{
"family": "Mikkelsen, Nina"
}
],
"note": "This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 727891.",
"version": "01",
"type": "report",
"id": "1445563"
}
##### Johan Svensson, March 29, 2020
Numerous R packages have built-in datasets. For example, in the survival package you will find both the lung dataset (lung cancer study) and the colon dataset (colon cancer study). In fact, the default installation of R comes with a package called datasets which is automatically loaded when you start R. You can view all the datasets available in the datasets package by typing the following command:
data()
which displays a listing of all the available datasets.
Built-in datasets are meant to be used as practice data, e.g. to learn plotting data, modelling data, creating functions etc. If you wish to use any of the datasets in the datasets package you need to load it using the data() function. For example, this will load the AirPassengers dataset:
data(AirPassengers)
This will load the dataset but it will remain inactive until you actually use it. You will notice the note <Promise> on the right-hand side of the AirPassengers dataset in the environment pane.
Once you use the dataset it will be loaded and active. Don't worry about the <Promise> note, just go ahead and use the dataset as desired.
If you wish to use a dataset located in a package, you need to load the package first. For example, the lung dataset is located in the survival package. These commands will load the survival package, then load the lung dataset and finally print a summary of the variables in the dataset:
library(survival)
data(lung)
summary(lung)
## inst time status age
## Min. : 1.00 Min. : 5.0 Min. :1.000 Min. :39.00
## 1st Qu.: 3.00 1st Qu.: 166.8 1st Qu.:1.000 1st Qu.:56.00
## Median :11.00 Median : 255.5 Median :2.000 Median :63.00
## Mean :11.09 Mean : 305.2 Mean :1.724 Mean :62.45
## 3rd Qu.:16.00 3rd Qu.: 396.5 3rd Qu.:2.000 3rd Qu.:69.00
## Max. :33.00 Max. :1022.0 Max. :2.000 Max. :82.00
## NA's :1
## sex ph.ecog ph.karno pat.karno
## Min. :1.000 Min. :0.0000 Min. : 50.00 Min. : 30.00
## 1st Qu.:1.000 1st Qu.:0.0000 1st Qu.: 75.00 1st Qu.: 70.00
## Median :1.000 Median :1.0000 Median : 80.00 Median : 80.00
## Mean :1.395 Mean :0.9515 Mean : 81.94 Mean : 79.96
## 3rd Qu.:2.000 3rd Qu.:1.0000 3rd Qu.: 90.00 3rd Qu.: 90.00
## Max. :2.000 Max. :3.0000 Max. :100.00 Max. :100.00
## NA's :1 NA's :1 NA's :3
## meal.cal wt.loss
## Min. : 96.0 Min. :-24.000
## 1st Qu.: 635.0 1st Qu.: 0.000
## Median : 975.0 Median : 7.000
## Mean : 928.8 Mean : 9.832
## 3rd Qu.:1150.0 3rd Qu.: 15.750
## Max. :2600.0 Max. : 68.000
## NA's :47 NA's :14
For each variable the summary() function provides seven parameters. This is informative but not that tidy and neat that we would wish for a publication. Don’t worry, there is a package for that. You can create a descriptive table in just two lines of code, as follows:
#install.packages("table1")
library(table1)
##
## Attaching package: 'table1'
## The following objects are masked from 'package:base':
##
## units, units<-
# Show descriptive data for age, status and time, according to sex
table1(~ age + status + time | sex, data=lung)
|                   | 1 (n=138)         | 2 (n=90)          | Overall (n=228)   |
| ----------------- | ----------------- | ----------------- | ----------------- |
| **age**           |                   |                   |                   |
| Mean (SD)         | 63.3 (9.14)       | 61.1 (8.85)       | 62.4 (9.07)       |
| Median [Min, Max] | 64.0 [39.0, 82.0] | 61.0 [41.0, 77.0] | 63.0 [39.0, 82.0] |
| **status**        |                   |                   |                   |
| Mean (SD)         | 1.81 (0.392)      | 1.59 (0.495)      | 1.72 (0.448)      |
| Median [Min, Max] | 2.00 [1.00, 2.00] | 2.00 [1.00, 2.00] | 2.00 [1.00, 2.00] |
| **time**          |                   |                   |                   |
| Mean (SD)         | 283 (213)         | 339 (203)         | 305 (211)         |
| Median [Min, Max] | 224 [11.0, 1020]  | 293 [5.00, 965]   | 256 [5.00, 1020]  |
However, this chapter is not about generating descriptive data, so let's get back to importing data. Although built-in datasets are useful, you will need to import your own data at some point. There are several useful packages for importing data into R. Base R (the default installation) provides several functions for importing data. In addition, the readr package and readxl package are also very useful and they are both included in the tidyverse.
## Prerequisites
### File formats
There are many file formats for rectangular data (data with columns and rows). Such data may also be called flat files since they are two-dimensional (rows and columns). Some software, such as SPSS, SAS, STATA, Excel etc., use specific file formats which cannot be used by other programs. This can become an issue when transferring data between software. The best method for avoiding such issues is to use a universal file format, which is a file format where values are separated by a delimiter. Such files are referred to as delimiter files or delimited text files. Although it may sound complicated, it's extremely simple. A delimited text file uses a delimiter to separate the values in each row. In other words, it is simply a text file which uses a delimiter to organize the data. Each line in a delimited text file corresponds to one row in a two-dimensional data table. Any character may be used to separate the values, but the most common delimiters are the comma, tab and semicolon:
#### CSV files (Comma Separated Values)
In a comma separated values (CSV) file the data items are separated using commas as a delimiter, while in a tab-separated values (TSV) file, the data items are separated using tabs as a delimiter. Column headers (i.e variable names) can be included as the first line, and each subsequent line is a row of data. The lines are separated by newlines.
For example, the following fields in each record are delimited by commas, and each record by new lines.
date, patient, diagnosis
25 May, Uma, Rheumatoid arthritis
15 July, Robin, Heart disease
Note that the first row represents the column names (i.e variable names).
Virtually any statistical software can import and export CSV files. Hence, CSV is a pure, simple and safe way of keeping and transferring data. One drawback of the CSV format is that it does not keep information on the variables themselves; e.g. in Excel you can define characteristics of a specific column (you can specify the variable as numeric, integer, date time etc.). This information cannot be stored in a CSV file and is therefore lost when exporting to CSV. Moreover, you need to re-specify such variable definitions after importing the CSV file.
The above CSV file would be imported to the following table in R:
| date    | patient | diagnosis            |
| ------- | ------- | -------------------- |
| 25 May  | Uma     | Rheumatoid arthritis |
| 15 July | Robin   | Heart disease        |
You should always prefer to obtain data in CSV format and export data to CSV when you transfer data. The CSV format is one of the most common forms of data storage. There are, however, packages that enable you to import other file formats, such as Excel files, SAS files etc.
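For example, writing a data frame out to CSV takes a single line in base R (a sketch; mydata is a placeholder for your own data frame):
mydata <- data.frame(patient = c("Uma", "Robin"), age = c(54, 61))
write.csv(mydata, "mydata.csv", row.names = FALSE) # row.names = FALSE drops the row-index column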
Importing data to R requires that you know where the data is located. R can import data from your computer or from online data bases. Most users will just import data located on their computers. This is where your working directory comes in.
What is a working directory? For the vast majority of data analysts, the working directory is simply the folder where all files related to the project are located. This could for example be a folder on your desktop. It is recommended that you create a folder for your project and put all files related to the project in that folder. That folder could include the data files, a log file (where you can keep a diary of what you're doing) and other files related to the project. Keeping all your files in one place will save you time and headaches going forward.
Start by creating a folder called MyProject (e.g. on your desktop). Then we will tell R where that folder (i.e. MyProject) is located. This is done using the function setwd(), which is short for set working directory, as follows:
setwd("/Users/Desktop/MyProject")
The file path may look different on your computer, depending on your system. To make sure that R now uses the correct working directory, you use the getwd() function, which returns the path to the current working directory:
getwd()
#> "/Users/Desktop/MyProject"
If you want to change your working directory to a folder located within MyProject, you simply specify that in the path. For example, let’s say you create a new folder within the MyProject folder and that new folder is named Data files, which you want to use as your working directory. To do this you simply write the following command:
setwd("/Users/Desktop/MyProject/Data files")
### Must you use a working directory?
No, but it is recommended that you do so. You can, however, read files from anywhere on your computer by specifying the full path to that file. Hence, you can set a working directory but still read files from other folders on your computer.
## Getting started
In this chapter, you’ll learn how to import rectangular data files in R with the readr package and readxl package, which are part of the core tidyverse. If you haven’t already, start by installing and loading tidyverse.
#install.packages("tidyverse")
library(tidyverse)
readr imports flat files and converts them to a special type of data frame called a tibble. A tibble is a data frame with a structure suitable for use with other tidyverse packages. For most purposes a tibble is simply a data frame. readr includes the following main parsers (functions for importing data):
• read_csv() for reading csv files that use comma (,) as the separator. These files use period (.) as the decimal place for numeric variables.
• read_csv2() for reading csv files that use semicolon (;) as the separator. These files use comma (,) as the decimal place for numeric variables.
• read_tsv() for reading tabs separated files
• read_fwf() for reading fixed-width files
• read_log() for reading web log files
• read_delim() reads in files with any delimiter, but the delimiter must be explicitly specified.
These functions all have similar syntax: once you’ve learned one, you can use the others with ease.
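For instance, a semicolon-delimited file could equally be read with read_delim() by naming the delimiter explicitly (a sketch; the file name data.txt is hypothetical):
mydata <- read_delim("data.txt", delim = ";")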
The following example shows a CSV file suitable for import using read_csv():
name, bloodvalue
Adam, 90.5
Janet, 80.1
Adam’s blood value is 90.5 and Janet’s is 80.1. Columns are separated using the comma (,) symbol and decimals use period (.). The next example shows a CSV file suitable for import using read_csv2():
name; bloodvalue
Adam; 90,5
Janet; 80,1
Decimal places for Adam’s and Janet’s blood values are denoted using the comma symbol (,).
Henceforth, we will focus on read_csv() since knowing this function is mandatory and will also enable you to use other importing functions (parsers). To get help with the read_csv() function you simply run the command ?read_csv(). The question mark (?) tells R that you want to view the instructions for the function. Here is the full specification of the read_csv() function. In the Help pane you’ll see the following:
Note the Usage heading, which shows you all the arguments that can be specified:
read_csv(file, col_names = TRUE, col_types = NULL,
locale = default_locale(), na = c("", "NA"), quoted_na = TRUE,
quote = "\"", comment = "", trim_ws = TRUE, skip = 0,
n_max = Inf, guess_max = min(1000, n_max),
progress = show_progress(), skip_empty_rows = TRUE)
The first argument is file, which is simply the file path. If the file you wish to import is located in your current working directory, then you just type the file name. If the file is not located in your current working directory, then you must specify the full path to the file.
Example 1: import the file data.csv, which is located in the current working directory
heights <- read_csv(file="data.csv")
Example 2: import the file data.csv, which is located in a subfolder called myfiles, which is located in the current working directory:
# Data is located in the subfolder "myfiles" of the working directory
heights <- read_csv(file="myfiles/data.csv")
Example 3: import the file data.csv, which is located in a folder on my desktop (outside the working directory), so the full path is required:
# Data is located on the desktop; adjust the path for your system
heights <- read_csv(file="/Users/Desktop/data.csv")
In other words, if your specification in the argument file does not contain an absolute path, the file name is relative to the current working directory.
There are several things to note.
1. When you use R functions, you have to make sure that you have specified all arguments that must be specified. In most cases, only some of the arguments are mandatory. Those that are not mandatory will be set automatically to their default values by R. You can see what the defaults are by viewing the function specification under the heading Usage. Referring to the Usage instructions, you’ll see that col_names is set to TRUE as the default, which means that R will use the first row of the input data as the column names; that row will not be included in the data frame itself.
2. In the examples above, we are only supplying one input to the function, and that is the input to the file argument. We could have written read_csv("data.csv") instead of read_csv(file="data.csv"). Since we have only supplied one input, R will automatically assign it to the first argument, which is file, and therefore it would still have been correct. The two following examples would also produce identical results:
heights <- read_csv(file="data.csv", col_names = TRUE)
heights <- read_csv("data.csv", TRUE)
If you want to skip naming the arguments, then you must supply their input in the exact order that they appear in the Usage instructions.
When you run read_csv() it prints out a column specification that gives the name and type of each column. It is important that you check that readr has interpreted the variables correctly.
In the following example, we will actually give read_csv() a CSV file directly (inline CSV), which is useful for testing purposes.
read_csv("name, bloodvalue, sex, education
Janet, 80.1, female, 2")
## # A tibble: 2 x 4
## name bloodvalue sex education
## <chr> <dbl> <chr> <int>
## 1 Adam 90.5 male 1
## 2 Janet 80.1 female 2
Note the following:
• read_csv() used the first line of the data for the column names.
• the columns name and sex are classified as <chr> which means character.
• the column bloodvalue is classified as <dbl>, which means double. Numeric variables with decimals are referred to as double.
• the column education is classified as <int>, which means integer. Numeric variables without decimals are integers.
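If readr's guesses are not what you want, you can state the column types yourself via the col_types argument. A small sketch on the same inline data (forcing education to be read as double instead of integer):
read_csv("name, bloodvalue, sex, education
Adam, 90.5, male, 1
Janet, 80.1, female, 2",
  col_types = cols(
    name = col_character(),
    bloodvalue = col_double(),
    sex = col_character(),
    education = col_double()
  ))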
### When the first row is not column names
The data might not have column names. You can use col_names = FALSE to tell read_csv() not to treat the first row as headings, and instead label them sequentially from X1 to Xn:
read_csv("Adam, 90.5, male, 1
Janet, 80.1, female, 2", col_names=FALSE)
## # A tibble: 2 x 4
## X1 X2 X3 X4
## <chr> <dbl> <chr> <int>
## 1 Adam 90.5 male 1
## 2 Janet 80.1 female 2
You can also pass col_names a character vector which will be used as the column names:
read_csv("Adam, 90.5, male, 1
Janet, 80.1, female, 2", col_names=c("name", "bloodvalue", "sex", "education"))
## # A tibble: 2 x 4
## name bloodvalue sex education
## <chr> <dbl> <chr> <int>
## 1 Adam 90.5 male 1
## 2 Janet 80.1 female 2
### Specifying missing values
Most software for data analysis uses some symbol for missing data. Missing data must always be explicitly classified as missing. R uses the symbol NA (Not Available) to indicate that a value is missing. Let’s read in the same data again but set Adam’s blood value to missing:
read_csv("name, bloodvalue, sex, education
Janet, 80.1, female, 2")
## # A tibble: 2 x 4
## name bloodvalue sex education
## <chr> <dbl> <chr> <int>
## 1 Adam NA male 1
## 2 Janet 80.1 female 2
The function detected that the blood value on the first row (Adam) was missing and set it to NA. If the data you’re importing has already specified missing data with a symbol or character, then you can pass that to the na argument of the read_csv() function. In the following example, the character string “MISSING” represents missing data and we will tell read_csv() that this is the case:
read_csv("name, bloodvalue, sex, education
Janet, 80.1, female, 2", na="MISSING")
## # A tibble: 2 x 4
## name bloodvalue sex education
## <chr> <dbl> <chr> <int>
## 1 Adam NA male 1
## 2 Janet 80.1 female 2
This is all you need to know to read most CSV files. You can also easily apply what you’ve learned to the other functions (e.g. read_tsv()), as the sketch below shows.
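For instance, read_tsv() works exactly like read_csv(), only with the tab character as separator. A quick inline sketch (\t denotes a tab inside an R string):
read_tsv("name\tbloodvalue
Adam\t90.5
Janet\t80.1")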
### Specifying the separator
CSV files may use other separators, such as the semicolon. In that case you can use read_csv2(), or the more general read_delim() function, where you specify the separator yourself, which in the following case is a semicolon (;):
read_delim("name; bloodvalue; sex; education
Janet; 80.1; female; 2", delim=";")
## # A tibble: 2 x 4
## name bloodvalue sex education
## <chr> <chr> <chr> <chr>
## 1 Adam " 90.5" " male" " 1"
## 2 Janet " 80.1" " female" " 2"
Note that the values keep their leading spaces and every column is read as character: unlike read_csv(), read_delim() does not trim whitespace by default (trim_ws = FALSE). Pass trim_ws = TRUE to get the expected column types.
### readr compared to base R
There are actually built-in functions to read CSV files; the base function is read.csv(). The reasons why we do not use read.csv() are the following:
• The built-in functions are slower than readr, and readr provides a progress bar for large files so that you can see the import progress.
• The readr functions produce tibbles, and they don’t convert character vectors to factors, use row names, or munge the column names. These are common sources of frustration with the base R functions.
If speed is the most important aspect, see the fread() function in the data.table package. See also the short comparison sketch below.
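As a quick illustration (a sketch; the file name is a placeholder), the two calls look almost identical but return different structures:
df1 <- read.csv("data.csv")    # base R: a plain data.frame
df2 <- read_csv("data.csv")    # readr: a tibble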
## Other types of data
To get other types of data into R, we recommend the packages listed below:
• haven reads SPSS, Stata, and SAS files.
• readxl reads Excel files (both .xls and .xlsx).
• DBI, along with a database-specific backend (e.g. RMySQL, RSQLite, RPostgreSQL etc.) allows you to run SQL queries against a database and return a data frame.
For hierarchical data: jsonlite for json, and xml2 for XML. Examples at https://jennybc.github.io/purrr-tutorial/.
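A minimal sketch of two of these packages (the file names are placeholders; both packages are installed with the tidyverse but must be loaded explicitly):
library(readxl)
survey <- read_excel("survey.xlsx", sheet = 1)   # first sheet of an Excel workbook

library(haven)
spss_data <- read_sav("study.sav")               # an SPSS data file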
https://soilphysics.ucmerced.edu/bibliography/2018-Dijkema-et-al.html
## Water Distribution in an Arid Zone Soil: Numerical Analysis of Data from a Large Weighing Lysimeter
#### Dijkema, J. and Koonce, J.E. and Shillito, R.M. and Ghezzehei, T.A. and Berli, M. and van der Ploeg, M.J. and van Genuchten, M.Th.
Vadose Zone Journal, volume 17(1) , 2018.
### Abstract
Although desert soils cover approximately one third of the Earth’s land surface, surprisingly little is known about their physical properties and how those properties affect the ecology and hydrology of arid environments. The main goal of this study was to advance our understanding of desert soil hydrodynamics. For this purpose, we developed a process-based component within HYDRUS-1D to describe the moisture dynamics of an arid zone soil as a function of water fluxes through the soil surface. A modified van Genuchten model for the dry end of the soil water retention curve was developed to better capture the basic flow processes for very dry conditions. A scaling method was further used to account for variabilities in water retention because of changes in the bulk density vs. depth. The model was calibrated and validated using hourly soil moisture, temperature, and mass data from a 3-m-deep weighing lysimeter of the Scaling Environmental Processes in Heterogeneous Arid Soils facility at the Desert Research Institute (Las Vegas, NV). Measurements and simulations during a 1-yr period agreed better under precipitation (wetting) than under evaporation (drying) conditions. Evaporation was better simulated for wet than for dry soil surface conditions. This was probably caused by vapor-phase exchange processes with the atmosphere, which were unaccounted for and need to be further explored. Overall, the model provides a promising first step toward developing a more realistic numerical tool to quantify the moisture dynamics of arid ecosystems and their role in climate change, plant growth, erosion, and recharge patterns.
## Citations
#### Cite as:
Dijkema, J. and Koonce, J.E. and Shillito, R.M. and Ghezzehei, T.A. and Berli, M. and van der Ploeg, M.J. and van Genuchten, M.Th., Water Distribution in an Arid Zone Soil: Numerical Analysis of Data from a Large Weighing Lysimeter, Vadose Zone Journal, 17(1) 2018.
#### BibTex
@article{2018-Dijkema-et-al,
author = {Dijkema, J. and Koonce, J.E. and Shillito, R.M. and Ghezzehei, T.A. and Berli, M. and van der Ploeg, M.J. and van Genuchten, M.Th.},
title = {Water Distribution in an Arid Zone Soil: Numerical Analysis of Data from a Large Weighing Lysimeter},
journal = {Vadose Zone Journal},
volume = {17},
number = {1},
year = {2018},
doi = {10.2136/vzj2017.01.0035},
date-modified = {2018-11-14 13:15:15 -0800}
}
http://hartramsey.media/future-larsa-kzs/relative-atomic-mass-e06b7e
Mass of an atomic particle is called the atomic mass. An atom consists of a small, positively charged nucleus surrounded by electrons; the nucleus contains protons and neutrons, and its diameter is about 100,000 times smaller than that of the atom. The number of protons an atom has determines what element it is, and an atom of an element with a certain number of neutrons is called an isotope.[3]

A relative atomic mass (also called atomic weight; symbol: Ar) is a measure of how heavy atoms are. It is the ratio of the average mass per atom of an element from a given sample to 1/12 the mass of a carbon-12 atom. In other words, a relative atomic mass tells you the number of times an average atom of an element from a given sample is heavier than one-twelfth of an atom of carbon-12.[1][2] Relative atomic mass values are ratios and therefore dimensionless quantities;[3] the word relative refers to this scaling relative to carbon-12. Formally, for an average mass per atom $m$, the relative atomic mass is defined as $A_r = \frac{m}{m_u}$, where $m_u$ is the unified atomic mass unit, defined as 1/12 of the mass of one atom of carbon-12, so that 1 u = $1.66 \times 10^{-27}$ kg.

At first, chemists used the hydrogen atom as the standard atom because it is the lightest, and the mass of one hydrogen atom was assigned 1 unit. Since 1961 the scale has been based on an isotope of carbon: the carbon-12 atom, $_{6}^{12}\textrm{C}$, is given the value of exactly 12 and is the standard atom against which the masses of other atoms are compared. Carbon is given an $A_r$ value of 12; atoms with an $A_r$ of less than this have a smaller mass than a carbon atom, and atoms with an $A_r$ of more than this have a larger mass than a carbon atom.

Most elements in nature consist of atoms with different numbers of neutrons. Each isotope has its own mass, called its isotopic mass; a relative isotopic mass is the mass of an isotope relative to 1/12 the mass of a carbon-12 atom and, like relative atomic mass values, relative isotopic mass values are ratios with no units. The relative isotopic mass of an isotope is roughly the same as its mass number, which is the sum of the number of protons and neutrons in the atom's nucleus (electrons contribute so little mass that they aren't counted). The relative atomic mass of an element is then the abundance-weighted average of the masses of the naturally occurring isotopes of the element.[4] For example, the element thallium has two common isotopes, thallium-203 and thallium-205. Both isotopes have 81 protons, but thallium-205 has 124 neutrons, 2 more than thallium-203, which has 122; a sample of thallium could be made up of 30% thallium-203 and 70% thallium-205.[3]

Worked example: chlorine naturally exists as two isotopes, $_{17}^{35}\textrm{Cl}$ (chlorine-35) and $_{17}^{37}\textrm{Cl}$ (chlorine-37). The abundance of chlorine-35 is 75% and the abundance of chlorine-37 is 25%. In other words, in every 100 chlorine atoms, 75 atoms have a mass number of 35 and 25 atoms have a mass number of 37. The relative atomic mass of chlorine is then

$A_{r} = \frac{total~mass~of~atoms}{total~number~of~atoms} = \frac{(75 \times 35)+(25 \times 37)}{(75+25)} = \frac{2625+925}{100} = \frac{3550}{100} = 35.5$

Notice that the answer is closer to 35 than to 37. This is because the chlorine-35 isotope is much more abundant than the chlorine-37 isotope.

Worked example: naturally occurring copper consists of two isotopes, with mass numbers 63 (69% abundance) and 65 (31% abundance):

$A_{r} = \frac{(69 \times 63)+(31 \times 65)}{(69+31)} = \frac{4347+2015}{100} = \frac{6362}{100} = 63.62$

Two samples of an element that consists of more than one isotope, collected from two widely spaced sources on Earth, are expected to have slightly different relative atomic masses, because the proportions of each isotope are slightly different at different locations; a sample from another planet could even have a relative atomic mass very different from the standard Earth-based value.[1] Relative atomic mass is therefore a less specific term that refers to individual samples, while the standard atomic weight is the mean value of relative atomic masses of a number of normal samples of the element. Standard atomic weight values are published at regular intervals by the Commission on Isotopic Abundances and Atomic Weights of the International Union of Pure and Applied Chemistry (IUPAC), and the standard atomic weight for each element is shown on the periodic table. Relative atomic mass is the same as atomic weight, which is the older term. Note that the average atomic weight is a dimensionless quantity while atomic mass has the dimension of the unified mass unit (u), but both have the same numerical value: for carbon, 12.011 u ÷ 1 u = 12.011.
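As the calculation is just an abundance-weighted mean, it is easy to reproduce programmatically; a small sketch in R reproducing the two worked examples above:
# Relative atomic mass as an abundance-weighted mean of isotopic masses
relative_atomic_mass <- function(masses, abundances) {
  sum(masses * abundances) / sum(abundances)
}
relative_atomic_mass(c(35, 37), c(75, 25))   # chlorine: 35.5
relative_atomic_mass(c(63, 65), c(69, 31))   # copper: 63.62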
https://www.msperlin.com/blog/post/2017-12-30-looking-back-2017/
# Looking back in 2017 and plans for 2018
As we come close to the end of 2017, it’s time to look back. This has been a great year for me in many ways. This blog started as a way to write short pieces about using R for finance and to promote my book in an organic way. Today, I’m very happy with my decision. Discovering and trying new writing styles keeps my interest very much alive. Academic research is very strict about what you can write and publish. It is satisfying to see that I can promote my work and have an impact in different ways, not only through the publication of academic papers.
My blog is built using a Jekyll template, meaning the whole site, including individual posts, is built and controlled with editable text files and Github. All files related to posts follow the same structure, meaning I can easily gather the textual data and organize it in a nice tibble. Let’s first have a look at all post files:
post.folder <- '~/GitRepo/msperlin.github.io/_posts/'
my.f.posts <- list.files(post.folder, full.names = TRUE)
my.f.posts
## character(0)
I posted 0 posts during 2017. Notice how all dates are in the beginning of the file name. I can easily convert that to a Date object using as.Date. Let’s organize it all in a nice tibble.
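This works because as.Date() parses a leading YYYY-MM-DD and ignores any trailing characters, so the file name can be fed to it directly. A quick sketch with a made-up file name:
as.Date(basename("2017-12-30-looking-back-2017.md"))
#> [1] "2017-12-30"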
library(tidyverse)
## ── Attaching packages ────────────────────────────────── tidyverse 1.2.1 ──
## ✔ ggplot2 3.2.0 ✔ purrr 0.3.2
## ✔ tibble 2.1.3 ✔ dplyr 0.8.1
## ✔ tidyr 0.8.3 ✔ stringr 1.4.0
## ✔ readr 1.3.1 ✔ forcats 0.4.0
## ── Conflicts ───────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::lag() masks stats::lag()
df.posts <- tibble(ref.date = as.Date(basename(my.f.posts)),
ref.month = format(ref.date, '%m'),
content = sapply(my.f.posts, function(x) paste0(readLines(x), collapse = '\n') ),
char.length = nchar(content)) %>% # includes output code in length calculation..
filter(ref.date >= as.Date('2017-01-01') & ref.date < as.Date('2018-01-01')) # keep only 2017 posts; not really necessary now but useful in future years
glimpse(df.posts)
## Observations: 0
## Variables: 4
## $ ref.date    <date>
## $ ref.month   <chr>
## $ content     <chr>
## $ char.length <int>
First, let’s look at the frequency of posts by month:
print( ggplot(df.posts, aes(x = ref.month)) + geom_histogram(stat='count'))
## Warning: Ignoring unknown parameters: binwidth, bins, pad
It is not accidental that January was the month with the highest number of posts. This is when I had material reserved for the book. June and July (0!) were the worst months, as I traveled a lot. In June I attended R and Finance in Chicago and SER in Rio de Janeiro, and in July I was visiting Goethe University in Germany for the whole month. On average, I created 0 posts per month overall, which feels quite alright. I hope I can keep that pace for the upcoming years.
As for the length of posts, below we can see a nice pattern for its distribution conditional on the months of the year.
print(ggplot(df.posts, aes(x=ref.month, y = char.length)) + geom_boxplot())
I was not very productive from May to August, writing few and short posts compared to other months. This was probably due to my travels.
# Plans for 2018
Despite the usual effort in research and teaching, my plans for 2018 are:
• Work on the second edition of the Portuguese book. It significantly lags the English version in content and this needs to be fixed. I already have some ideas laid out for new chapters and new packages to cover. I’ll write more about this update as soon as I have it figured out.
• Start a portal for financial data in Brazil. I want to make it easy for people to visualize and download organized financial data, especially those without programming experience. It will include the usual datasets such as prices in equity/bond/derivative markets for various frequencies, historical yield curves, financial statements of companies, and so on. The idea is to offer the datasets in various file formats, facilitating their use in research.
That’s it. If you got this far, happy new year! Enjoy your family and the holidays!
http://spmaddmaths.onlinetuition.com.my/2014/01/spm-practice-2-geometric-progression_6257.html
SPM Practice 2 (Geometric Progression) : Question 9
Question 9
In a geometric progression, the first term is 18 and the common ratio is r.
Given that the sum to infinity of this progression is 21.6, find the value of r.
[ Remember the formula for the sum to infinity: $S_{\infty}=\frac{a}{1-r}$, valid for $-1 < r < 1$ ]
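Substituting the given values into the formula, the value of r follows in one step:

$21.6 = \frac{18}{1-r} \quad\Rightarrow\quad 1-r = \frac{18}{21.6} = \frac{5}{6} \quad\Rightarrow\quad r = \frac{1}{6}$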
https://www.antiderivativecalculator.net/triple-integral-calculator
# Triple Integral Calculator
Input a three-variable function, choose the respective variables, and click calculate to find the definite or indefinite triple integral.
When entering limits, use inf for ∞, -inf for -∞, and pi for π.
## Triple integral calculator with steps
The triple integral calculator integrates a given multivariable function with respect to its three integration variables. This triple integration calculator evaluates the given function in three dimensions.
## How does this triple integral calculator work?
To integrate the three-variable function, follow the below steps.
• Select the type of integral i.e., definite or indefinite.
• Input the three-variable function.
• Use the keypad icon to enter math keys.
• Enter the upper and lower limits of all the integrating variables in case of the definite integral.
• Choose the integrating variable’s order.
• Press the load example button to use the sample examples.
• Press the calculate button.
• Hit the clear button to calculate another three-variable function.
## What is triple integral?
Triple integrals are the three-dimensional counterpart of double integrals. A triple integral is a way of adding up infinitely many infinitesimal quantities associated with points in a 3-D region. Triple integrals are widely used, for example, to find the mass of a body that has a variable density.
The calculator can evaluate a three-variable function using two methods:
(i) Definite integral
(ii) Indefinite integral
The general equation for integrating three-variable function by using definite integrals is:
$\int \int _B\int f\left(x,y,z\right)dV=\int _{z_1}^{z_2}\int _{y_1}^{y_2}\:\int _{x_1}^{x_2}f\left(x,y,z\right)dxdydz$
The general equation for integrating three-variable function by using indefinite integrals is:
$\int \int \int f\left(x,y,z\right)dxdydz$
• $f\left(x,y,z\right)$ is the given three-variable function.
• $x_1\&x_2$ are the upper and lower limits of x, $y_1\&y_2$ are the upper and lower limits of y, and $z_1\&z_2$ are the upper and lower limits of z.
• dx, dy, & dz are the integrating variables of the three-variables function.
## How to calculate triple integrals?
Let’s take some examples to learn how to integrate the three-variable function.
Example
Integrate 3x+y+z with respect to x, y, & z having limits 0 to 1, 1 to 2, 2 to 3 respectively.
Solution
Step 1: Write the definite integral notation with the given function.
$\int _2^3\int _1^2\int _0^1\left(3x+y+z\:\right)dxdydz$
Step 2: Now integrate the definite integral with respect to x.
$\int _2^3\int _1^2\left(\int _0^1\left(3x+y+z\:\right)dx\right)dydz$
$\int _2^3\int _1^2\left(\int _0^13x\:dx+\int _0^1y\:dx+\int _0^1z\:dx\right)dydz$
$\int _2^3\int _1^2\left(3\left[\frac{x^{1+1}}{1+1}\right]^1_0+y\left[x\right]^1_0+z\:\left[x\right]^1_0\right)dydz$
$\int _2^3\int _1^2\left(\frac{3}{2}\left[x^2\right]^1_0+y\left[x\right]^1_0+z\:\left[x\right]^1_0\right)dydz$
$\int _2^3\int _1^2\left(\frac{3}{2}\left[1^2-0^2\right]+y\left[1-0\right]+z\:\left[1-0\right]\right)dydz$
$\int _2^3\int _1^2\left(\frac{3}{2}+y+z\:\right)dydz$
Step 3: Now integrate the definite integral with respect to y.
$\int _2^3\left(\int _1^2\left(\frac{3}{2}+y+z\:\right)dy\right)dz$
$\int _2^3\left(\int _1^2\frac{3}{2}dy+\int _1^2y\:dy+\int _1^2z\:dy\right)dz$
$\int _2^3\left(\frac{3}{2}\left[y\right]^2_1+\left[\frac{y^{1+1}}{1+1}\right]^2_1+z\left[y\right]^2_1\right)dz$
$\int _2^3\left(\frac{3}{2}\left[2-1\right]+\left[\frac{y^2}{2}\right]^2_1+z\left[2-1\right]\right)dz$
$\int _2^3\left(\frac{3}{2}+\left[\frac{2^2-1^2}{2}\right]+z\right)dz$
$\int _2^3\left(\frac{3}{2}+\frac{3}{2}+z\right)dz$
$\int _2^3\left(3+z\right)dz$
Step 4: Now integrate the definite integral with respect to z.
$\int _2^3 3\,dz+\int _2^3 z\,dz$
$3\left[z\right]_2^3+\left[\frac{z^{1+1}}{1+1}\right]_2^3$
$3\left[3-2\right]+\left[\frac{z^2}{2}\right]_2^3$
$3\left[1\right]+\left[\frac{3^2-2^2}{2}\right]$
$3+\left[\frac{9-4}{2}\right]$
$3+\frac{5}{2}$
$5.5$
Step 5: Now write the given function with the result.
$\int _2^3\int _1^2\int _0^1\left(3x+y+z\:\right)dxdydz=5.5$
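As a sanity check, the same result can be reproduced numerically, for instance in R with the cubature package (a sketch, assuming the package is installed):
library(cubature)
f <- function(v) 3*v[1] + v[2] + v[3]   # x = v[1], y = v[2], z = v[3]
adaptIntegrate(f, lowerLimit = c(0, 1, 2), upperLimit = c(1, 2, 3))$integral
#> [1] 5.5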
http://jiangtanghu.com/blog/2012/09/16/statistical-notes-4-dragonrsquos-teeth-and-fleas-hypothesis-testing-in-plain-english/
# Statistical Notes (4): Dragon’s Teeth and Fleas: Hypothesis Testing in Plain English
I was asked on several different occasions to explain hypothesis testing to non-technical people in plain English. Now I think I have a pretty neat way to do it, while the honors belong to three great Germans: Friedrich Engels, Karl Marx, and Heinrich Heine. Engels wrote that Marx once quoted a saying from Heine, “I have sown dragon’s teeth and harvested fleas” (but I didn’t find the original source):
In terms of hypothesis testing, say, if you plant dragon’s teeth, you are not supposed to harvest the fleas; but if you do get the fleas, then most likely they are not dragon’s teeth at all!
To make the story as simple as possible while keeping most of the key messages, let’s take a one-sample t-test example from the SAS TTEST Procedure User Guide, where the investigated data is the Degree of Reading Power (DRP, a measurement of children’s reading skill) from 44 third-grade children. The goal is to test if the mean score of DRP is equal to 30:
Null Hypothesis (H0): μ = 30
Alternative Hypothesis (H1): μ ≠ 30
For the following discussion, three statistical measures needed (number of cases, mean and standard error):
proc means data=read maxdec=4 n mean stderr;
var score;
freq count;
run;
and the output:
Here we go:
1. Suppose H0 is true that the mean score is equal to 30 (population mean μ = 30, we assume they are dragon’s teeth!).
2. We know the sample mean is the best estimate of the population mean, here it is 34.8636.
3. This sample mean is derived from a sample (namely, the 44 children). We’d like to know if this sample (with mean of 34.8636) really come from the population (with mean of 30).
4. To get to know this, we can repeat this survey (or trial, experiment), for example, take another 99 samples (also with 44 children in each sample) and calculate their sample means (we then have 100 different means from the whole 100 samples).
5. It is known that these means (obtained from such a mechanism), after the necessary mathematical transformation, follow a t-distribution with 43 degrees of freedom (df = 44 – 1):
The denominator as a whole is the standard error.
6. Now we can check our interested sample (with mean of 34.8636) in this t-distribution. There is a corresponding t-value of this sample in the distribution, (34.8636 – 30) / 1.6930 = 2.8728:
7. In this distribution, only 0.63% (0.0063) of t-values are either above 2.8728 or below -2.8728 (by the symmetry of the t-distribution). What does this mean? Obviously, our interested sample (with mean of 34.8636 and t-value of 2.8728) is not in the mainstream of the whole distribution. We can even think of it as very “extreme”, because only 0.63% of the elements in the whole distribution are at least as extreme as it.
8. We call this number, 0.63% (0.0063) the P-value. According to wikipedia:
the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.
Here the test statistic that was actually observed is 2.8728, the t-value from our interested sample. And we got from this t-distribution that P{|t|>2.8728} = 0.0063.
9. A question is, how extreme is extreme? There is no exact definition, but we can set a threshold: the significance level α (usually 0.05). As a rule of thumb, if the p-value is less than the predefined significance level α, then we consider the event associated with this p-value pretty extreme.
10. To retell our story: first we assume the population mean is 30, just as the null hypothesis claims (we assume they are dragon’s teeth), and under this assumption we built a t-distribution; we then found that our interested sample is an extreme case (it is a flea!). It is so extreme (much less than 0.05) that we can even question whether our assumption (the null hypothesis) is true at all (most likely they are not dragon’s teeth!). We reject the null hypothesis and take the alternative that the mean score is different from 30.
11. Formally, it just follows a simple logic (taken from Glenn Walker and Jack Shostak, P.18):
In our story, P is the null hypothesis, while Q is the extreme case: if H0 is true, most likely such extreme case should not happen; now we observe the extreme case (a t-value of 2.8728 in a t-distribution with degree of freedom of 43), then it is reasonable to question the null hypothesis itself.
12. To end this analysis, we can take a look at the output of SAS PROC TTEST:
proc ttest data=read h0=30;
var score;
freq count;
run;
The same result (P-value 0.0063 < 0.05, reject H0):
# SAS Notes
1. In this example, the input data takes the form of cell count data, not case-record raw data, so in both PROC MEANS and PROC TTEST a “FREQ” statement is added.
2. You can compute the P-value from step 6 with SAS using the ProbT function:
data;
t = 2.8727938571;
df = 43;
tail = 2;
P = tail*(1-probt(t,df));
put P = ;
run;
According to the symmetry of the t-distribution, you should multiply by 2 to get the two-sided probability:
3. Long long ago, a so-called “critical value” was calculated to support the decision. That’s the calculation method using the TINV function:
data;
alpha = 0.05;
tail = 2;
df = 43;
tCritic = tinv(1-alpha/tail,df);
put tCritic=;
run;
We get this critical value of 2.0166921992:
These two shaded zones are called “rejection zones”. The rule is, if the t-value we observed lies within the rejection zones, we should then reject the null hypothesis and take the alternative. In step-6, the t-value we calculated is 2.8728, which is bigger than this critical value, so we should reject H0.
The critical value approach is equivalent to the p-value approach we discussed before, but slightly old-fashioned. The p-value is pretty intuitive and can be easily calculated by computer. In the days when computing power was not widely available, people just took the t-distribution table to look up the critical values.
Most statistical packages, including SAS software, don’t report the critical value (but it is still alive in statistics textbooks).
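For readers who prefer R, both quantities are one-liners there as well; a quick sketch mirroring the SAS steps above:
# Observed t-statistic, two-sided p-value, and critical value in R
t    <- (34.8636 - 30) / 1.6930   # about 2.8728
df   <- 43
p    <- 2 * (1 - pt(t, df))       # about 0.0063
crit <- qt(1 - 0.05/2, df)        # about 2.0167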
# Reference
Common Statistical Methods for Clinical Research with SAS Examples by Glenn Walker and Jack Shostak
_Hypothesis Testing: A Primer_ (in Chinese)
Two-sided t-distribution calculator
One-sided t-distribution calculator
Online Latex Equation Editor
https://math.stackexchange.com/questions/2469145/probability-that-randomly-chosen-n-points-on-a-circle-are-covered-by-semicircle/2469488
# Probability that randomly chosen n points on a circle are covered by semicircle with order statistics
I am aware of many forms of the solutions posted on SE. I am only interested in an approach using order statistics. The correct probability is $\frac{n}{2^{n-1}}$.
So n points are chosen randomly on a circle and I am interested in the probability that there exists a semicircle that covers all the points. We can model it as a circle of unit circumference, and the sample of $n$ points can then be modeled as $n$ iid standard uniform random variables, $U_1, ..., U_n$. Also, let the two extreme order statistics of this sample be $U_\textrm{max}$ and $U_\textrm{min}$.
Here is my logic. In order for $n$ points to fall in a semicircle, $U_\textrm{max} - U_\textrm{min} \le 0.5$.
According to the First course in probability by Ross (sec. 6.6), $$\mathbb{P} \left(U_\textrm{max} - U_\textrm{min} \le x \right) = n (1 - x) x^{n-1} + x^n.$$
But using $x = 0.5$, I get probability of $\frac{n+1}{2^{n}}$, which is not the correct answer. In fact using the same distribution for order statistics of $U_1, ..., U_{n-1}$ leads to a correct answer according to this link, which says that I need to condition on $U_\textrm{max}$. But I don't get this idea. Why do we need to condition on the maximum value?
• I do not agree with your logic. If e.g. $U_{min}=0.1$, $U_{max}=0.9$ and all other points take a value in $[0.1,0.2]$ then also a semicircle exists that contains all points. This in spite of $U_{max}-U_{min}=0.8>0.5$. – drhab Oct 12 '17 at 14:17
• Oh, I see. But I am still not clear how conditioning on $U_\textrm{max}$ provides a correct logic. I think the link I provided says we can condition on $U_\textrm{max} = 1$. In this case, if the remaining points take values in [0.4, 0.7]. Among these $n-1$ points, let's say one takes 0.4 and another takes 0.7. In this case, $U_\textrm{max} - U_\textrm{min}\le 0.5$ but we can't find a semicircle. – zcadqe Oct 12 '17 at 14:51
• Are the U(i) you're generating the distance of these points from the center of the circle? If so, that's not a uniform-area distribution. However, I think I'm not understanding what you're doing. – barrycarter Oct 12 '17 at 15:35
• $U_i$'s are the positions in the circumference. Since I am assuming the circumference is 1, the position is distributed Uniform(0, 1). – zcadqe Oct 12 '17 at 15:38
If we let one of the n points be at 0.5, then a covering semicircle can never cross the zero point, because any semicircle crossing the zero point would not contain 0.5. Furthermore, any semicircle not crossing the zero point surely contains 0.5. Hence:
The n points lie in a semicircle
= the other n-1 points lie in a semicircle which contains 0.5
= the other n-1 points lie in a semicircle which does not cross zero
= the other n-1 points satisfy $U_\textrm{max} - U_\textrm{min} \le 0.5$
• So I sample $n$ random points. Define the coordinate of the circle such that any one of the $n$ points to be at 0.5. Then remaining $n-1$ points have to satisfy $U_\textrm{max} - U_\textrm{min} \le 0.5$. Is this what you mean? But this depends on which of the $n$ points is chosen to be placed at 0.5. – zcadqe Oct 12 '17 at 21:20
• It seems that I should condition on $U_\textrm{max} = 1$ such that the remaining $n-1$ order statistics are distributed the same as the order statistics of a sample of $n-1$ iid standard uniform variables. Then I can just plug in $n-1$ in place of $n$ in $\mathbb{P} \left(U_\textrm{max} - U_\textrm{min} \le x \right) = n (1 - x) x^{n-1} + x^n$ – zcadqe Oct 12 '17 at 21:22
• We can randomly choose one point to be 0.5, it is not a condition. For example, we choose the first point. After the first point generated, we define the coordinate such that the first point is 0.5. Now the semi-circle requirement is equivalent to the following n-1 points satisfying Umax−Umin≤0.5. Making U1 = 0.5 will not affect the distribution of U2 to Un, they are still IID uniform between 0 and 1. Making Umax = 1 will change the distribution of the other points since it implies they are not the largest one. – Jeffrey Chen Oct 13 '17 at 1:11
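The accepted value $\frac{n}{2^{n-1}}$ is easy to check empirically. Below is a small Monte Carlo sketch of my own (not from the thread), using the equivalent condition that all points fit in a semicircle iff some cyclic gap between consecutive points is at least half the circumference:

```python
import random

def all_in_semicircle(points):
    # All points lie in some semicircle of a unit-circumference circle
    # iff some gap between cyclically consecutive points is >= 0.5.
    pts = sorted(points)
    n = len(pts)
    gaps = [pts[(i + 1) % n] - pts[i] for i in range(n)]
    gaps[-1] += 1  # wrap-around gap from the largest point back to the smallest
    return max(gaps) >= 0.5

def estimate(n, trials=200_000):
    hits = sum(all_in_semicircle([random.random() for _ in range(n)])
               for _ in range(trials))
    return hits / trials

for n in (3, 4, 5):
    print(n, estimate(n), n / 2 ** (n - 1))  # estimate vs. n/2^(n-1)
```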
https://curriculum.illustrativemathematics.org/MS/students/1/3/13/index.html
# Lesson 13
Benchmark Percentages
Let’s contrast percentages and fractions.
### 13.1: What Percentage Is Shaded?
What percentage of each diagram is shaded?
### 13.2: Liters, Meters, and Hours
1. How much is 50% of 10 liters of milk?
2. How far is 50% of a 2,000-kilometer trip?
3. How long is 50% of a 24-hour day?
4. How can you find 50% of any number?
1. How far is 10% of a 2,000-kilometer trip?
2. How much is 10% of 10 liters of milk?
3. How long is 10% of a 24-hour day?
4. How can you find 10% of any number?
1. How long is 75% of a 24-hour day?
2. How far is 75% of a 2,000-kilometer trip?
3. How much is 75% of 10 liters of milk?
4. How can you find 75% of any number?
### 13.3: Nine is . . .
Explain how you can calculate each value mentally.
1. 9 is 50% of what number?
2. 9 is 25% of what number?
3. 9 is 10% of what number?
4. 9 is 75% of what number?
5. 9 is 150% of what number?
### 13.4: Matching the Percentage
Match the percentage that describes the relationship between each pair of numbers. One percentage will be left over. Be prepared to explain your reasoning.
1. 7 is what percentage of 14?
2. 5 is what percentage of 20?
3. 3 is what percentage of 30?
4. 6 is what percentage of 8?
5. 20 is what percentage of 5?
• 4%
• 10%
• 25%
• 50%
• 75%
• 400%
1. What percentage of the world’s current population is under the age of 14?
2. How many people is that?
3. How many people are 14 or older?
### Summary
Certain percentages are easy to think about in terms of fractions.
• 25% of a number is always $$\frac14$$ of that number.
For example, 25% of 40 liters is $$\frac14 \boldcdot 40$$ or 10 liters.
• 50% of a number is always $$\frac12$$ of that number.
For example, 50% of 82 kilometers is $$\frac12 \boldcdot 82$$ or 41 kilometers.
• 75% of a number is always $$\frac34$$ of that number.
For example, 75% of 1 pound is $$\frac34$$ pound.
• 10% of a number is always $$\frac{1}{10}$$ of that number.
For example, 10% of 95 meters is 9.5 meters.
• We can also find multiples of 10% using tenths.
For example, 70% of a number is always $$\frac{7}{10}$$ of that number, so 70% of 30 days is $$\frac{7}{10} \boldcdot 30$$ or 21 days.
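The benchmark fractions in this summary translate directly into a short computation; here is a minimal sketch in Python (the helper name `percent_of` is my own, purely for illustration):

```python
from fractions import Fraction

def percent_of(p, number):
    # p% of a number is the fraction p/100 of that number.
    return Fraction(p, 100) * number

print(percent_of(25, 40))  # 10   (25% of 40 liters)
print(percent_of(50, 82))  # 41   (50% of 82 kilometers)
print(percent_of(10, 95))  # 19/2, i.e. 9.5 (10% of 95 meters)
print(percent_of(70, 30))  # 21   (70% of 30 days)
```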
### Glossary Entries
• percent
The word percent means “for each 100.” The symbol for percent is %.
For example, a quarter is worth 25 cents, and a dollar is worth 100 cents. We can say that a quarter is worth 25% of a dollar.
• percentage
A percentage is a rate per 100.
For example, a fish tank can hold 36 liters. Right now there are 27 liters of water in the tank. The percentage of the tank that is full is 75%.
https://math.stackexchange.com/questions/2864325/true-false-linear-algebra-exam-spectrum-of-composition-of-linear-function
True/false linear algebra exam: spectrum of a composition of linear maps
I found a true/false exam. Here is the exercise: (other questions are here)
1) Let $X=\{x_1,x_2,x_3,x_4\}$ be a set. Let $S:Mat(3,2,\mathbb R)\to \mathcal F(X,\mathbb R)$ and $T:\mathcal F(X,\mathbb R)\to Mat(3,2,\mathbb R)$ be linear maps, where $\mathcal F(X,\mathbb R)=\{f:X\to \mathbb R\mid f\text{ is a function}\}$. It is possible to have $Spec(T\circ S)=\{-5,-2,3\}$, where $Spec(T)=\{\text{eigenvalues of }T\}$.
2) Let $V$ be a $\mathbb C$-vector space and let $S,T\in \mathcal L(V)$. If $Spec(S)=\{-1,\frac{-1}{4},\frac{3}{4}\}$ and $Spec(T)=\{\frac{1}{2},1\}$, then there is $\lambda \in Spec(T\circ S)$ s.t. $0<|\lambda |<1$.
1) Unfortunately I have no idea.
2) Unfortunately, no idea either.
Any help would be appreciated.
• (1) Look at the dimensions. $\operatorname{Mat}(3,2,\mathbb{R})$ has dimension $6$, while $\mathcal{F}(X,\mathbb{R})$ has dimension $4$. Then $S$ must send some non-zero vectors to $0$. $T$ has no choice but to send zero to zero. Therefore, $T\circ S$ sends some non-zero vectors to $0$. – user578878 Jul 27 '18 at 11:43
I assume that $M(3,2,\mathbb{R})$ are the $3\times 2$ real matrices?
Notice that $\mathcal{F}(X,\mathbb{R})\cong \mathbb{R}^4$. Hence $T\circ S$ is an endomorphism of a $6$-dimensional space factoring through a $4$-dimensional space, hence $\dim\ker(T\circ S)\geq 2$. It follows that $0$ must be an eigenvalue of $T\circ S$, so $Spec(T\circ S)=\{-5,-2,3\}$ is impossible and statement 1 is false.
For the second one, recall that if $A$ is a square matrix, then $\det(A)=\prod_i\lambda_i$ where the $\lambda_i$ are the eigenvalues of $A$ (with multiplicity). Now assuming that $S,T$ are linear operators on a finite-dimensional space $V$, we have that $\det(T\circ S)=\det(T)\det(S)$. Hence $|\det(T\circ S)|<1$. On the other hand, this should be equal to the product of all eigenvalues of $T\circ S$. You can easily proceed by contradiction.
• $dim(V)<\infty$ is not given. – user578878 Jul 27 '18 at 11:55
• For 2), we need multiplicity to compute the determinant, i.e. $\det\begin{pmatrix}2&0\\0&2\end{pmatrix}=4\neq 2$. – MSE Jul 27 '18 at 12:06
• @MSE There is no problem in finite dimensions. No matter the multiplicity, what is important is that the product has absolute values $<1$. Sure, those equal signs that they wrote are not exactly true, but not a big deal. The question for you is, is $dim(V)<\infty$ given? – user578878 Jul 27 '18 at 12:08
• @nextpuzzle: yes we suppose $\dim(V)<\infty$. – MSE Jul 27 '18 at 12:14
• Yes, sorry I forgot the multiplicity there, but it doesn't matter. I'm not sure whether it's true for infinite dimensional spaces. I don't see an immediate reason why the point spectrum of $T\circ S$ couldn't be empty for example. – Mathematician 42 Jul 27 '18 at 12:23
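A quick numerical illustration of the dimension argument in statement 1 (my own sketch; random matrices stand in for arbitrary $S$ and $T$ in coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Mat(3,2,R) is isomorphic to R^6 and F(X,R) to R^4 for |X| = 4,
# so in coordinates S is a 4x6 matrix and T is a 6x4 matrix.
S = rng.standard_normal((4, 6))
T = rng.standard_normal((6, 4))

# T∘S is 6x6 of rank at most 4, so at least two eigenvalues vanish.
eigvals = np.linalg.eigvals(T @ S)
print(np.sort(np.abs(eigvals)))  # the two smallest moduli are ~0
```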
https://www.esaral.com/q/mark-the-correct-alternative-in-the-following-17211
# Mark the correct alternative in the following:
Question:
In the interval $(1,2)$, the function $f(x)=2|x-1|+3|x-2|$ is
A. monotonically increasing
B. monotonically decreasing
C. not monotonic
D. constant
Solution:
Formula: if a differentiable function $f$ defined on $(a, b)$ satisfies $f^{\prime}(x)<0$ for all $x \in(a, b)$, then $f$ is strictly decreasing on $(a, b)$.
Given: on $(1,2)$ we have $|x-1|=x-1$ and $|x-2|=2-x$, so
$f(x)=2(x-1)+3(2-x)$
$f(x)=-x+4$
$f^{\prime}(x)=-1$
Therefore $f^{\prime}(x)<0$ for all $x \in(1,2)$.
Hence $f$ is monotonically decreasing on $(1,2)$; the correct alternative is B.
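The algebra can also be double-checked symbolically; a minimal sketch with sympy (my addition, using the fact that on $(1,2)$ the absolute values resolve as above):

```python
import sympy as sp

x = sp.symbols('x')

# On (1, 2): |x - 1| = x - 1 and |x - 2| = 2 - x
f = 2*(x - 1) + 3*(2 - x)

print(sp.expand(f))   # 4 - x
print(sp.diff(f, x))  # -1, which is negative on all of (1, 2)
```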
https://chemistry.stackexchange.com/questions/113076/synthetic-steps-for-the-interconversion-between-aldehydes-and-ketones
# Synthetic steps for the interconversion between aldehydes and ketones
I am learning about carbonyl chemistry at the moment. So far, we have been able to make aldehydes and ketones from alcohols (via oxidizing agents). I can't help but wonder if there is possibly any way to make aldehydes from ketones or vice-versa through a series of synthetic steps. I would appreciate it if the synthetic steps are in the level of an undergraduate organic chemistry course.
Question: Synthetic steps for the interconversion between aldehydes and ketones?
• You could reduce an aldehyde to a primary alcohol, dehydrate the alcohol to an alkene, hydrate it again with the hydroxyl on the other side (secondary alcohol), and then oxidize to get the ketone. Apr 21 '19 at 3:42
• @KarstenTheis yes, I was able to figure this out with the given answers, but I am still having trouble going in the reverse direction. Ketones to aldehydes. Apr 21 '19 at 3:55
• Check out hydroboration vs. oxymercuration, e.g. by looking at this cheat sheet. Once you have the hydroxyl group on the carbon where you want it, you just have to make sure your oxidation does not go all the way to a carboxylic acid. Apr 21 '19 at 4:02
• @JamesBond For converting ketones to aldehydes, what you can do is perform a Baeyer–Villiger oxidation on the ketone to get an ester. The ester you obtain can then be reduced with DIBAL-H (diisobutylaluminium hydride) to get an aldehyde; the other part will leave as an alcohol, which you can again oxidise using PCC. Apr 21 '19 at 4:02
• @SoumikDas Thanks! Makes perfect sense since I already know about the ester-to-aldehyde route via DIBAL-H. The Baeyer–Villiger oxidation was the key to getting to the ester. Thanks a lot. If you want you can post your answer so I can accept it; I like this approach more. Apr 21 '19 at 4:19
## 2 Answers
You can easily convert aldehydes to ketones and vice versa using simple, well-known synthetic steps.
Converting aldehydes to ketones
For converting aldehydes to ketones there are numerous pathways. Here I am mentioning the simplest two ways, one of which is by using Grignard reagent and other by using thioacetal intermediates.
You can react aldehydes with Grignard reagents ($$\ce{R^2 -MgBr}$$) and perform an acidic workup to generate secondary alcohols. Then you can oxidise the alcohol to a ketone with commonly used oxidising agents like PCC (pyridinium chlorochromate).
The other way is to react the aldehyde with propane-1,3-dithiol to generate a cyclic thioacetal. Now, interestingly, the proton attached to that carbon can be removed by a strong base like n-butyllithium to generate a carbanion, which can perform an $$S_N2$$ reaction with an alkyl halide to generate a cyclic thioketal. After that, the thioketal can be hydrolysed using $$\ce{HgCl2/CdCO3/H2O}$$ to generate your desired ketone.
Converting ketones to aldehydes
The easiest route for this conversion is to perform a Baeyer–Villiger oxidation on the ketone to get an ester. That ester can then be reduced by diisobutylaluminium hydride (DIBAL-H), and an aqueous workup will generate the desired aldehyde (along with an alcohol). (If you wish, you can further oxidise that alcohol as well, in case you want the aldehyde of that part.)
There are two distinct ways to convert aldehydes into ketones and vice versa
An aldehyde is a carbonyl group on a primary carbon while a ketone is a carbonyl group on a secondary carbon. To interconvert the two, you can either keep the functional group on a given carbon and make/break carbon-carbon bonds. Or you can move the functional group along the existing carbon backbone without breaking carbon-carbon bonds.
Moving the carbonyl functional group
Enzymatic conversion from aldehyde to ketone and back is common in the biochemistry of carbohydrates. In glycolysis, for example, phosphorylated dihydroxyacetone is converted to glyceraldehyde. Sugars have a hydroxyl group alpha to the carbonyl group. In the enzymatic reaction, isomerization proceeds via an enediol intermediate.
A similar acid/base mechanism is used in the interconversion of glucose to fructose. This is a commercially relevant reaction for making high fructose corn syrup.
Turn a primary carbon into a secondary carbon and vice versa
See Soumik Das' answer.
https://socratic.org/questions/how-do-you-find-the-derivative-of-x-log-base5-x
# How do you find the derivative of x^(log(base5)(x))?
Sep 18, 2016
$2x^{\log_5(x)-1}\log_5(x)$
#### Explanation:
$y = x^{\log_5(x)}$
Take the natural logarithm of both sides. (This is known as logarithmic differentiation.)
$\ln(y) = \ln\left(x^{\log_5(x)}\right)$
Use the rule $\log(a^b) = b\cdot\log(a)$ to rewrite this function:
$\ln(y) = \log_5(x)\cdot\ln(x)$
Now, rewrite $\log_5(x)$ in logarithms with base $e$ using the change of base formula $\log_a(b) = \frac{\log_c(b)}{\log_c(a)}$:
$\ln(y) = \frac{\ln(x)}{\ln(5)}\cdot\ln(x)$
$\ln(y) = \frac{(\ln(x))^2}{\ln(5)}$
Differentiate both sides. The chain rule will be needed on both sides of the equation:
$\frac{1}{y}\cdot\frac{dy}{dx} = \frac{2\ln(x)}{\ln(5)}\cdot\frac{d}{dx}\ln(x)$
Here we use that the derivative of $\ln(x)$ is $\frac{1}{x}$. Also, rewrite $y$ as $x^{\log_5(x)}$:
$\frac{1}{x^{\log_5(x)}}\cdot\frac{dy}{dx} = \frac{2\ln(x)}{x\ln(5)}$
Note that $\frac{\ln(x)}{\ln(5)} = \log_5(x)$ through the change of base formula in reverse:
$\frac{1}{x^{\log_5(x)}}\cdot\frac{dy}{dx} = \frac{2\log_5(x)}{x}$
Multiplying both sides by $x^{\log_5(x)}$:
$\frac{dy}{dx} = \frac{2x^{\log_5(x)}\log_5(x)}{x}$
Note that $\frac{x^{\log_5(x)}}{x} = x^{\log_5(x)-1}$ through the rule $\frac{x^a}{x^b} = x^{a-b}$:
$\frac{dy}{dx} = 2x^{\log_5(x)-1}\log_5(x)$
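As a sanity check, the claimed derivative can be compared against a central finite difference at a test point; a small sketch in Python (my own verification, not part of the original answer):

```python
import math

def f(x):
    return x ** math.log(x, 5)  # x^(log_5 x)

def claimed(x):
    return 2 * x ** (math.log(x, 5) - 1) * math.log(x, 5)

x0, h = 3.0, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference
print(numeric, claimed(x0))  # the two values agree closely
```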
https://www.dreamwings.cn/poj2484/3744.html
# POJ 2484:A Funny Game
A Funny Game
Time Limit: 1000MS Memory Limit: 65536K Total Submissions: 5099 Accepted: 3180
Description
Alice and Bob decide to play a funny game. At the beginning of the game they pick n (1 <= n <= 10^6) coins in a circle, as Figure 1 shows. A move consists of removing one or two adjacent coins, leaving all other coins untouched. At least one coin must be removed. Players alternate moves, with Alice starting. The player that removes the last coin wins. (The last player to move wins. If you can’t move, you lose.)
Figure 1
Note: For n > 3, we use c1, c2, …, cn to denote the coins clockwise, and if Alice removes c2, then c1 and c3 are NOT adjacent! (Because there is an empty place between c1 and c3.) Suppose that both Alice and Bob play optimally.
You are to write a program to determine who will finally win the game.
Input
There are several test cases. Each test case has only one line, which contains a positive integer n (1 <= n <= 10^6). There are no blank lines between cases. A line with a single 0 terminates the input.
Output
For each test case, if Alice wins the game, output “Alice”; otherwise output “Bob”.
Sample Input
1
2
3
0
Sample Output
Alice
Alice
Bob
Problem summary: n coins are arranged in a circle; each player may take one coin, or two adjacent coins, and the vacated spot stays empty. Alice moves first. Who ultimately wins?
When n == 1 or n == 2, the first player obviously wins.
When n == 3, the first player obviously loses.
Since each move takes 1 or 2 coins, and 2 coins can be taken only if they are adjacent,
we deduce: for n > 3, if n is even, then whatever the first player takes, the second player can take the same number of coins at the symmetric position, so the first player loses.
If n is odd and the first player takes 1 coin, the second player can take 2 coins at the symmetric position; after that, whatever the first player takes, the second player can always take the same number at the symmetric position, so the first player loses.
If instead the first player starts by taking 2 coins, the second player can take 1 coin at the symmetric position, leaving an even number of coins, and the argument above again shows the first player loses. Hence, for n > 3 the first player always loses.
### Accepted code:
#include <iostream>
using namespace std;

int main()
{
    long long int n;
    while (cin >> n && n)
    {
        // Alice (the first player) wins only for n = 1 or n = 2;
        // for n >= 3 the mirroring strategy lets Bob win.
        if (n <= 2) cout << "Alice" << endl;
        else cout << "Bob" << endl;
    }
    return 0;
}
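The mirroring argument can also be confirmed by exhaustive search for small n. Below is a brute-force sketch of my own (not part of the original submission); two coins may be taken together only if they are adjacent on the original circle and both still present, which matches the adjacency note in the problem statement:

```python
from functools import lru_cache

def first_player_wins(n):
    @lru_cache(maxsize=None)
    def win(state):
        # state: frozenset of positions still holding a coin
        moves = [state - {i} for i in state]  # take one coin
        for i in state:                       # take two adjacent coins
            j = (i + 1) % n
            if j != i and j in state:
                moves.append(state - {i, j})
        # The player to move wins iff some move leaves a losing position.
        return any(not win(m) for m in moves)
    return win(frozenset(range(n)))

for n in range(1, 11):
    print(n, "Alice" if first_player_wins(n) else "Bob")
```

For n = 1 and n = 2 the search prints “Alice”, and for every n from 3 to 10 it prints “Bob”, in line with the argument and the accepted code above.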
https://math.stackexchange.com/questions/373860/simple-integration-by-substitution
# Simple Integration by Substitution
I am just exercising integration by substitution and would like to solve the following integral:
$$\int (x^2+1)^3 dx$$
I substitute $x^2$ with y and hence $\frac{dy}{dx} = 2x$ and $dx = dy*2x$. Thus I get:
$$\int (y+1)^3 dy*2x$$
how do I continue from this point on, i.e. how do I get rid of the 2x?
• Put 2x under the integral, it doesn't get a free ride. – Loki Clock Apr 26 '13 at 21:10
Here, the substitution isn't appropriate, because, as you see, there's no factor of $x$ in the original integrand. It would have been a better choice to substitute $y = (x^2 + 1)$ if the integral had been $$\int x(x^2 + 1)^3\,dx$$ because then we'd have $\;dy = 2x\, dx\implies x\,dx = \frac 12 dy,\;$ giving us a very nice integral to work with: $$\frac12 \int y^3\, dy$$
But, alas! We don't have that integral to work with. And there's not a really handy substitution to use that will simplify our work.
Instead, for this integral, try expanding the binomial (easy to do in this case), and use the power rule to integrate each term:
$$\int (x^2+1)^3 dx \quad = \quad \int (x^6 + 3x^4 + 3x^2 + 1) \,dx \quad = \quad\dfrac{x^7}{7} + \frac{3x^5}{5} + x^3 + x + C$$
• See: binomial expansion. Makes it easier to expand binomial raised to 3+. – Justin Apr 26 '13 at 21:14
• thank you, I know the binomial expansion works fine in this example, was wondering though how to use substitution. – user66280 Apr 26 '13 at 21:18
• Substitution works great...when it works and simplifies our work. But in this case, substitution really isn't appropriate. – Namaste Apr 26 '13 at 21:23
• Note also, even if you had the integral, using your substitution: $$\int \frac{(y + 1)^3}{\sqrt y} \,dy$$, you'd still have to expand the binomial to integrate, account for the division, and integrate the same number of terms that you'd have if you just proceeded, from the start, with expanding the binomial, no substitution. Plus, you'd need to back-substitute if you use substitution. Hence, no reason to substitute; it makes more work, not less. – Namaste Apr 26 '13 at 21:36
• @amWhy: Nice work! +1 – Amzoti Apr 27 '13 at 0:39
$\frac{dy}{dx} = 2x$ implies $dx = \frac{dy}{2x}$.
To get rid of the $x$, recognize that $x^2 = y$ implies $x = \pm \sqrt{y}$.
• Note that this substitution probably won't make your integral any easier. – Emily Apr 26 '13 at 21:08
You should not perform substitution here because the derivative $2x$ was not there to begin with, making for a difficult process. I suggest actually expanding out $(x^2+1)^3$ into a polynomial, which then is very easy to integrate.
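The expand-then-integrate route is easy to verify symbolically; a minimal sketch with sympy (my own check, not from the thread):

```python
import sympy as sp

x = sp.symbols('x')

print(sp.expand((x**2 + 1)**3))        # x**6 + 3*x**4 + 3*x**2 + 1
print(sp.integrate((x**2 + 1)**3, x))  # x**7/7 + 3*x**5/5 + x**3 + x
```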
https://www.nature.com/articles/s41467-019-13544-0
# Genetic correlations of psychiatric traits with body composition and glycemic traits are sex- and age-dependent
## Abstract
Body composition is often altered in psychiatric disorders. Using genome-wide common genetic variation data, we calculate sex-specific genetic correlations amongst body fat %, fat mass, fat-free mass, physical activity, glycemic traits and 17 psychiatric traits (up to N = 217,568). Two patterns emerge: (1) anorexia nervosa, schizophrenia, obsessive-compulsive disorder, and education years are negatively genetically correlated with body fat % and fat-free mass, whereas (2) attention-deficit/hyperactivity disorder (ADHD), alcohol dependence, insomnia, and heavy smoking are positively correlated. Anorexia nervosa shows a stronger genetic correlation with body fat % in females, whereas education years is more strongly correlated with fat mass in males. Education years and ADHD show genetic overlap with childhood obesity. Mendelian randomization identifies schizophrenia, anorexia nervosa, and higher education as causal for decreased fat mass, with higher body fat % possibly being a causal risk factor for ADHD and heavy smoking. These results suggest new possibilities for targeted preventive strategies.
## Introduction
Psychiatric disorders are complex traits influenced by thousands of genetic variants that act in concert with environmental factors1,2. Genome-wide association studies (GWASs) of psychiatric disorders have identified more than 300 independent genomic loci3,4, informed biological follow-up studies5 and may deliver promising targets for drug discovery and repurposing6,7,8. Genome-wide summary statistics generated by GWASs can be used in several different ways9, including estimating single-nucleotide polymorphism-based heritability ($$h_{\mathrm{SNP}}^2$$), which is the phenotypic variance explained by common genomic variants. Values of $$h_{\mathrm{SNP}}^2$$ range from 10 to 30% for psychiatric disorders and typically capture around a third of the heritability estimated by twin studies10. Additionally, genetic correlations can be calculated using GWAS summary statistics via bivariate linkage disequilibrium score regression (LDSC), which estimates the genetic overlap (i.e. the shared genetic effects) between two traits11,12. Such GWAS-based genetic correlation analyses have shown substantial genetic overlap among psychiatric disorders13, providing evidence for an underlying “p factor” representing general liability for psychiatric illness14,15. For instance, genomic structural equation modelling16 of GWAS summary statistics for schizophrenia, bipolar disorder, major depressive disorder, post-traumatic stress disorder and anxiety showed that they load onto one shared latent factor with loading estimates between 0.29 and 0.8616. However, marked differences in the clinical presentation of psychiatric disorders exist for psychotic experiences or dysfunctional reward systems, suggesting the existence of additional disorder-specific genetic effects13,14,16.
Clinically, many psychiatric disorders are accompanied by disturbances in appetite regulation, eating behaviour and altered physical activity. These disturbances can alter body composition and result in comorbid overweight or underweight17, most prominently observed in eating disorders, such as binge-eating disorder and anorexia nervosa18. Such severe weight dysregulation typically reduces patients’ quality of life and is associated with excess morbidity and mortality19. Body composition traits, including body fat % (BF%) and fat-free mass (FFM), are also complex, with substantial twin heritabilities of ~70%20,21. Body mass index (BMI) is the most commonly studied body composition phenotype and its associated genetic variants have been found to be significantly overrepresented in genes and genomic regions active in brain cell types22, suggesting it may be a partially behavioural trait. Several studies have also shown negative genetic correlations of BMI with anorexia nervosa and schizophrenia12,23,24,25 and positive genetic correlations of BMI with attention-deficit/hyperactivity disorder (ADHD) and major depressive disorder26,27. These observations suggest that an in-depth investigation of the shared genomics between psychiatric and body composition traits is needed.
In addition, both extreme overweight and extreme underweight show a clear sex difference: females are not only disproportionately affected by anorexia nervosa (with ratios up to 15:1) but also by obesity (≥30 kg/m2)28,29,30. Sex differences are not limited to body composition: major depressive disorder31 and anxiety32 are more common in females, whereas ADHD33 and autism spectrum disorder34 occur more often in males. Sex differences in body composition, psychiatry and their interplay are not fully understood. Hormones and sex chromosomes have clearly been demonstrated to play a role35, but are insufficient to fully explain the sex differences36.
In this study, our primary aim is to identify pairs of traits with shared genetic factors by calculating sex-specific genetic correlations. To do so, we calculate sex-specific genetic correlations for GWASs of 12 psychiatric disorders mostly supplied by the Psychiatric Genomics Consortium (URLs) and five behavioural traits with sex-specific GWASs of body composition traits derived from a healthy and medication-free subsample of the UK Biobank (URLs; Supplementary Tables 1, 2). These include BMI, BF%, absolute fat mass (FM) and FFM, as well as body composition-related traits, such as objectively measured physical activity from the UK Biobank (URLs) and glycaemic traits from MAGIC (URLs; Supplementary Data 1). We apply trait-specific illness and medication filtering to obtain genomic variants that are associated with body composition traits independent of the confounding effects of somatic diseases, such as diabetes or endocrine illnesses and addiction-related behaviours, including smoking and alcohol consumption, as well as psychiatric disorders. Where possible, putative causality is examined using generalized summary data-based Mendelian randomization (GSMR)37 in females and males separately. As a secondary aim, we use GWASs of BMI and FFM from different stages of life, including childhood, adolescence, young adulthood and late adulthood, to identify the developmental stages in which the sharing of body composition genomic factors with genetic liability for psychiatric disorders occurs.
Here, we show that the genomic overlap between body composition traits and psychiatric disorders is evident only in later adulthood, whereas childhood and young adulthood GWASs of BMI do not correlate significantly with psychiatric traits. Accelerometer-measured physical activity shows genetic correlations with obsessive compulsive disorder (OCD) and anorexia nervosa, but with no other psychiatric disorder. In addition, glycaemic traits show significant genetic correlations only with anorexia nervosa and years of education, which positions anorexia nervosa as unique among the psychiatric disorders we investigate. These findings encourage a deeper investigation of metabolic pathways that may be implicated in psychiatric disorders to identify potential targets for preventive strategies.
## Results
### Genetic overlap between the sexes
Body composition and physical activity showed substantial heritability explained by common genetic variation, ranging from 28% to 51% (standard error (s.e.) = 0.4–0.8%, LDSC; Supplementary Table 3), and sex-dependent sets of genomic variation at pBonferroni = 0.05/28 = 0.002. We detected a genetic correlation between males and females in BF% that was significantly different from 1 (rg = 0.89, s.e. = 0.03; p≠1 = 4.7 × 10−4, LDSC). Sensitivity analyses using Haseman–Elston regression38 confirmed these results (Supplementary Table 3) and suggest that specific sets of genomic variation associated with BF% may be differentially active in females and males. The genetic correlations between females and males for the remaining traits are presented in Supplementary Table 4. Detailed results for the body composition and physical activity GWASs, including significant hits and Manhattan plots, are presented on Functional Mapping and Annotation (FUMA; URLs) entries 20–22 and 38–41.
### Genetic overlap of psychiatric and body composition traits
In the genetic correlations of the psychiatric disorders and behavioural traits with body composition and physical activity, distinct patterns emerged resulting in two groups (Table 1). In the first group, anorexia nervosa, education years, OCD and schizophrenia were significantly negatively associated with BF%, while anorexia nervosa and schizophrenia were also significantly negatively associated with FFM (see Fig. 1 and Supplementary Data 2 for full results). By contrast, in the second group, ADHD, heavy smoking, alcohol dependence and insomnia were significantly positively associated with BF%, while only ADHD and heavy smoking were also significantly positively associated with FFM (Table 1). The p value threshold for the genetic correlations with body composition traits was pBonferroni = 0.05/190 = 2.6 × 10−4 using matrix decomposition of the genetic correlation matrix to identify the number of independent tests to adjust the threshold using Bonferroni correction39.
### Sex differences in genetic correlations
The genetic correlation of anorexia nervosa with BF% in females (rg = −0.44, s.e. = 0.04, LDSC) was stronger than with BF% in males (rg = −0.26, s.e. = 0.04, LDSC) with a significant difference of δrg = −0.17 (p = 4.2 × 10−5, LDSC jackknife). Conversely, education years showed a stronger genetic correlation with FM in males than in females (δrg = 0.10, p = 1.3 × 10−4, LDSC jackknife), which was also seen with FFM (δrg = 0.09, p = 1.7 × 10−4, LDSC jackknife). No other sex differences were observed (Supplementary Data 3).
### Putative causal relationships
GSMR revealed evidence consistent with putative causal relationships between psychiatric traits and body composition. The effects on continuous traits are expressed as β coefficients (Fig. 2a–c, e, Supplementary Fig. 1), whereas the effects on binary traits are presented as odds ratios (ORs; Fig. 2d). Estimates with binary exposures were converted to the liability scale40. The Bonferroni-corrected p value was 0.05/190 = 2.6 × 10−4 for the GSMR analyses (Supplementary Data 4, 5). In the first group, GSMR showed evidence for a 1.8 kg decrease in FM per standard deviation of liability to anorexia nervosa (p = 2.3 × 10−8, GSMR) that was more pronounced in females (βAN→FM = −2.14, p = 1.9 × 10−5, GSMR) than in males (βAN→FM = −1.3, p = 4.9 × 10−4, GSMR). This mirrored the observed genetic correlations. Additionally, GSMR showed evidence for a 3.7 kg decrease in FM per year of education (p = 5.1 × 10−38, GSMR). Furthermore, GSMR showed a 0.88 kg decrease in FM (p = 3.3 × 10−13, GSMR) and a 0.58 kg decrease in FFM (p = 4.5 × 10−13, GSMR) per standard deviation of liability to schizophrenia (Supplementary Data 4). GSMR results for the second group showed no evidence for an influence of ADHD on fat mass (p = 0.23, GSMR). However, GSMR showed evidence in the reverse direction with a 1.05-fold increase in risk for ADHD per kg FM (p = 1.3 × 10−12, GSMR) as well as a 1.03-fold increase in risk for ADHD per kg FFM (p = 2.0 × 10−5, GSMR) and a 1.04-fold increase in heavy smoking per kg FM (p = 6.7 × 10−8, GSMR; Supplementary Data 5).
### Genetic correlations with physical activity
In the first group, OCD (rg = 0.28, s.e. = 0.07, LDSC) and anorexia nervosa (rg = 0.17, s.e. = 0.05, LDSC) correlated positively with objectively measured physical activity, whereas education years showed a significant correlation with physical activity only in females (rg = 0.17, s.e. = 0.04, LDSC; Supplementary Data 2). However, when formally tested the genetic correlation was not significantly different from the correlation observed in males (Supplementary Data 3). Neither ADHD (p = 0.20, LDSC) nor any other trait in the second group correlated with physical activity (Supplementary Data 2).
### Genetic correlations with glycaemic traits
Our investigation into whether the relationships of the psychiatric traits with body composition are mirrored in their relationships with glycaemic traits (Fig. 3) showed that anorexia nervosa (rg = −0.28; p = 1.8 × 10−7, LDSC) and education years (rg = −0.28, p = 1.0 × 10−12, LDSC) correlated genetically negatively with fasting insulin concentrations. Accordingly, anorexia nervosa (rg = −0.29, p = 2.8 × 10−5, LDSC) and education years (rg = −0.33, p = 9.2 × 10−6, LDSC) also showed negative genetic correlations with insulin resistance. In addition, education years showed a negative genetic correlation with fasting glucose concentrations (rg = −0.14; p = 2.1 × 10−5, LDSC), whereas heavy smoking showed a positive genetic correlation with fasting glucose concentrations (rg = 0.22; p = 2.0 × 10−4, LDSC; Supplementary Data 6). No other psychiatric traits showed a genetic correlation with glycaemic traits passing our significance threshold.
Sensitivity analyses with female-only and male-only GWASs of the psychiatric and behavioural traits resulted in similar results, indicating that the patterns and results are consistent and largely independent of female-to-male ratios in the sex-combined GWAS (Supplementary Data 2, 6 and Supplementary Figs. 2a3b). Sensitivity analyses not adjusting the body composition GWASs for alcohol consumption or smoking yielded the same results (Supplementary Data 7).
## Discussion
Symptomatically, psychiatric disorders are often accompanied by alterations in energy intake, energy expenditure and body composition. Recent genetic analyses of BMI found an important role for genes expressed in the brain and specific brain cell types22, suggesting that BMI may be a metabo-behavioural trait. This spurred our in-depth investigation of the shared genetics of psychiatric traits and body composition. We were able to show that five psychiatric disorders—anorexia nervosa, OCD, schizophrenia, ADHD and alcohol dependence—as well as three behavioural traits—education years, insomnia and heavy smoking—show significant genetic correlations (i.e. shared genetics) with body composition in two distinct patterns.
The first group of psychiatric disorders and behavioural traits included anorexia nervosa, OCD, schizophrenia and education years, and was characterized by genetic correlations with genomic variants predisposing to lower BF% and FFM. The second group comprised ADHD, alcohol dependence, heavy smoking and insomnia, and had genetic correlations with genomic variants predisposing to higher BF% and FFM. Our Mendelian randomization analyses used significant genetic variants as instrumental variables and found that anorexia nervosa, schizophrenia and education years showed evidence consistent with a negative causal effect on FM and, in the reverse direction, higher BF% appeared to be a risk factor for both ADHD and heavy smoking. Our results also suggested that the overweight seen in individuals with schizophrenia in epidemiological studies42 is likely to represent medication effects43 given our observations of a putative causal effect of schizophrenia on lower FM and FFM. This finding reiterates the pressing need for the development of new antipsychotic medications with more favourable weight-related side effect profiles.
In our analysis, anorexia nervosa showed a stronger correlation with BF% in females than in males. This phenomenon was not observed for other traits genetically associated with anorexia nervosa, such as neuroticism, anxiety, major depressive disorder, OCD or schizophrenia41. These findings suggest that anorexia nervosa and BF% may share a sex-dependent set of genomic variants potentially contributing to the marked sex bias in the prevalence of anorexia nervosa. Education years showed a stronger genetic correlation with FM in males than in females. However, the GSMR analysis showed a more pronounced protective effect of education years on FM in females than in males in line with a large epidemiological study44. This suggests that the stronger genetic association between education years and FM in males may be driven by a set of pleiotropic variants.
From a developmental perspective, it is striking that GWASs of body composition across ages do not genetically correlate perfectly with each other. These varying genetic effects across the lifespan41,45 have been termed “genetic innovation”46 and represent the effects of partially different, age-dependent sets of genomic variants on body composition regulation at certain periods of life41,45. Some of the psychiatric disorders, such as ADHD and anorexia nervosa, typically have their onset in childhood or adolescence with preceding symptoms or behaviours that implicate neurodevelopmental components. We used the available life-stage GWASs of body composition and did not find genetic overlap between childhood or adolescence/young adulthood BMI with psychiatric disorders, but instead found significant genetic correlations of psychiatric disorders with later adult BMI and BF%. Our analyses also show that genetic variants associated with obesity before the age of ten were positively correlated only with ADHD and negatively only with education years. The relatively specific positive genetic correlation of childhood obesity with ADHD recapitulates a large body of clinical evidence of high phenotypic comorbidity47, also shown in family studies48. Overweight may represent a difficult but potentially intervenable risk factor at a young age.
Our finding of a genetic overlap between ADHD and obesity in childhood may implicate shared biological pathways between both traits. Given our other results, it appears that this shared component is unlikely to be related to physical activity or glycaemic traits. Instead, speculatively, a central nervous system pathway that is dysregulated by increased body mass in childhood may increase the liability to develop ADHD.
We also investigated body composition-related traits, including physical activity, fasting insulin and fasting glucose concentrations. Physical activity showed a positive genetic correlation with anorexia nervosa and OCD, which themselves were negatively genetically correlated with BF%. Carrying genetic variants that predispose to higher physical activity may be associated with the relationship between lower BF% and psychiatric traits. Higher physical activity, therefore, should be carefully assessed in the treatment of patients with compulsive psychiatric disorders like anorexia nervosa and OCD as it may be a genetically mediated behaviour, as indicated by our analysis.
Contrary to our expectations, ADHD did not show a genetic correlation with physical activity. This suggests that hyperactivity in ADHD may not originate from biological liability for higher accelerometer-measured physical activity49 and is likely to have an alternative cause, such as insufficient inhibitory control as observed in paediatric clinical samples with ADHD50, healthy adult population samples51, and in a large longitudinal developmental cohort study52.
Our analyses showed that anorexia nervosa and education years have a negative genetic correlation with fasting insulin concentrations and insulin resistance, positioning anorexia nervosa as a special case within the psychiatric disorders and potentially differentiating it from OCD. These negative correlations with fasting insulin concentrations mirrored the negative genetic correlations between anorexia nervosa, education years and BF%. The potential involvement of metabolic hormones like insulin in anorexia nervosa underscores the relationship of brain and body and their reciprocal regulation53, opening an avenue for deeper investigation of metabolic components in psychiatric disorders. The genetic correlations of ADHD with glycaemic traits were not significant, implying that these traits play a smaller role in ADHD than in anorexia nervosa, given the comparable sample size of the GWASs on both psychiatric disorders25,26. Genetic associations of physical activity and glycaemic traits with body composition and psychiatric traits in plausible directions render them interesting candidates for formal mediation analyses as they may be actionable targets54.
Our study represents the largest investigation of sex- and age-dependent effects in the genomic overlap of body composition and psychiatric traits. Although our analyses drew on the largest available GWASs, some phenotypes still had relatively small sample sizes for genomic investigations of common variants in complex traits, especially for our sex-specific analyses. These should be repeated when sample sizes have increased, especially for OCD as its currently available GWAS sample size is particularly modest. All Mendelian randomization analyses, using GSMR37, with body composition or glycaemic traits, ADHD, education years, schizophrenia or heavy smoking as exposure were sufficiently powered; however, the analyses with anorexia nervosa, insomnia or OCD as exposures should be regarded as exploratory in nature because p value thresholds were lowered to include at least 10 single-nucleotide polymorphisms (SNPs) in the instrument variable.
Finally, the age-dependent genetic influences we observed between psychiatric traits and body composition suggests that future research could focus on a developmental approach to GWAS analyses of body composition, to capture age- and sex-dependent differences. These differences have already been suggested by larger twin studies55,56 and two molecular genetic studies41,45, which enabled our examination of their relationship with psychiatric traits. Most importantly, shared biological pathways and common environmental factors influencing both body composition and behavioural traits should be studied as potential targets for interventions.
## Methods
### UK Biobank subsample
We performed GWASs on an unrelated (KING relatedness metric >0.044, equivalent to a relatedness value of 0.088; nrelated = 7765) European subsample (defined by 4-means clustering of the genetic principal components)57 of the genotyped UK Biobank participants (n = 155,961, 45% female, 32% of the genotyped participants, Supplementary Table 1)58,59. The UK Biobank (URLs) is a prospective cohort sampled from the general population between 2006 and 2010. All participants were between 40 and 69 years old, were registered with a general practitioner through the United Kingdom’s National Health Service, and lived within travelling distance of one of the assessment centres.
### Ethics
The UK Biobank is approved by the North West Multi-centre Research Ethics Committee. All procedures performed in studies involving human participants were in accordance with the ethical standards of the North West Multi-centre Research Ethics Committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. All participants provided written informed consent to participate in the study. This study has been completed under UK Biobank approved study application 27546.
### Power calculations of the GWASs
We conducted power calculations for the female and male GWASs using the Genetic Power Calculator60. A minimum of 39,580 individuals is required to detect a SNP that accounts for 0.1% of trait variance at 80% power at a genome-wide significance threshold of p ≤ 5 × 10−8 and a minor allele frequency of 0.20. According to these results, the female and the male GWASs were sufficiently powered to detect genome-wide significant loci with 70,700 females and 85,261 males. With these parameters, the female GWAS had a power of 99.8% and the male GWAS of 99.9%.
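For illustration, this power calculation can be reproduced with the standard noncentral chi-square approximation for a 1-degree-of-freedom association test. The sketch below in Python/scipy is a re-creation under the stated assumptions (0.1% of trait variance explained, genome-wide threshold of 5 × 10−8); the published numbers themselves come from the Genetic Power Calculator:

```python
from scipy import stats

alpha, r2 = 5e-8, 0.001  # genome-wide threshold; variance explained by the SNP

def gwas_power(n):
    ncp = n * r2 / (1 - r2)                 # noncentrality parameter
    crit = stats.chi2.ppf(1 - alpha, df=1)  # chi-square significance threshold
    return stats.ncx2.sf(crit, df=1, nc=ncp)

print(gwas_power(39_580))  # ~0.80, matching the stated minimum sample size
print(gwas_power(70_700))  # female GWAS (reported as 99.8%)
print(gwas_power(85_261))  # male GWAS (reported as 99.9%)
```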
### GWASs on body composition traits in the UK Biobank
The continuous body composition traits—BF%, FM, FFM and BMI—were measured using the validated bioelectrical impedance analyser Tanita BC-418 MA (Tanita Corporation, Arlington Height, IL) at every assessment centre61,62 for every participant across the UK. We applied trait-specific medication and illness filtering to exclude participants with compromised hydration status and medications or illnesses known to affect body composition to identify genetic variation associated with body composition phenotypes that is not confounded by illnesses and their downstream effects or metabolism-changing medication. We applied stringent exclusion criteria and covaried for addictive behaviour-related phenotypes, including smoking and alcohol consumption (for exclusion criteria, see Supplementary Table 2). We regressed the body composition traits on factors related to assessment centre, genotyping batch, smoking status, alcohol consumption, menopause and continuous measures of age, and socioeconomic status (SES) measured by the Townsend Deprivation Index63 as independent variables. We took the residuals from these regressions as our phenotypes for the GWASs. We included 7,794,483 SNPs and insertion–deletion variants (hereafter referred to as SNPs) with a minor allele frequency >1%, imputation quality scores >0.8, and that were genotyped, or present in the HRC reference panel64 and used an additive model on the imputed dosage data provided by UK Biobank, using BGENIE v1.265. We accounted for underlying population stratification by including the first six principal components, calculated on the genotypes of our European subsample using FlashPCA266. We performed GWASs including incremental numbers of principal components and checked each GWAS for inflation by calculating its LDSC intercept. We identified six principal components as the optimal number to adjust for population stratification within the European subsample and to not overcorrect the analysis retaining the greatest signal. Additionally, we included assessment centre as a covariate to adjust for population stratification. We then meta-analysed the sex-specific GWASs using METAL67 (URLs) applying an inverse variance-weighted model with a fixed effect, to obtain sex-combined results.
### Clumping and genome-wide significant loci
Significantly associated SNPs (p < 5 × 10−8) were considered as potential index SNPs. SNPs in LD (r2 > 0.2) with a more strongly associated SNP within 3000 kb were assigned to the same locus using FUMA (URLs)68. Overlapping clumps were merged with a second clumping procedure in FUMA, merging all lead SNPs with r2 = 0.1 to genomic loci. After clumping, independent genome-wide significant loci (5 × 10−8) were compared with entries in the NHGRI-EBI GWAS catalogue69, using FUMA68.
### Heritability estimation and investigation of sex differences
To ensure the robustness of our results, we applied multiple approaches to calculate heritability estimates and genetic correlations. We used BOLT-LMM70, LDSC11 and GREML71 implemented in GCTA72 to calculate common variant $$h_{\mathrm{SNP}}^2$$ (URLs). Additionally, we calculated the genetic correlation between females and males using LDSC11 and Haseman–Elston regression38 implemented in GCTA72 to estimate sex differences in the genetic architecture of the body composition, glycaemic traits and physical activity. Haseman–Elston regression uses the cross-product of phenotypes for pairwise individuals and a genetic relatedness matrix to calculate heritability and genetic correlations73. All other statistics were calculated in R 3.4.1 if not otherwise stated (URLs).
### GWASs of psychiatric disorders and behavioural traits
All of the following traits were used for the sex-specific and age-dependent analyses (Supplementary Data 1). The sex-specific summary statistics for the psychiatric disorders were provided as follows: for major depressive disorder27, schizophrenia3, anorexia nervosa25, bipolar disorder74,75, ADHD26,76, alcohol dependence77, autism spectrum disorder78 and PTSD79 by the PGC (URLs); for OCD80,81 by the International Obsessive Compulsive Disorder Foundation Genetics Collaborative (IOCDF-GC) and OCD Collaborative Genetics Association Studies (OCGAS); for borderline personality disorder82 by the German Borderline Genomics Consortium; for cannabis use by the International Cannabis Consortium83; for anxiety84 by our own group; for insomnia85 by the Complex Trait Genetics group at VU University Amsterdam (URLs); for heavy smoking86 by the University of Leicester, available from UK Biobank (URLs); for the behavioural trait years of education87 by the Social Science Genetic Association Consortium (SSGAC) (URLs); for neuroticism41 by our own group (Supplementary Data 1); and for migraine88,89 by the International Headache Genetics Consortium (IHGC). Summary statistics for the glycaemic traits90 were provided by the Meta-Analyses of Glucose and Insulin-related traits Consortium (MAGIC), whereas childhood obesity91 results were provided by the Early Growth Genetics (EGG, URLs) Consortium, BMI in young adulthood by Graff et al.92 and physical activity by our group41.
### Genetic correlations
Using an analytic extension of LDSC11, we calculated SNP-based bivariate genetic correlations (rg) to examine the genetic overlap of body composition and glycaemic traits with psychiatric and behavioural traits and disorders in a sex-specific manner. Differences in genetic correlations and their s.e.'s were calculated using a block jackknife approach, as previously described41.
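A minimal sketch of the delete-one block-jackknife standard error used for such differences is given below; the leave-one-block-out estimates of the female–male rg difference are hypothetical inputs:

```python
import numpy as np

def jackknife_se(loo_estimates: np.ndarray) -> float:
    """Delete-one jackknife s.e. from m leave-one-block-out estimates."""
    m = len(loo_estimates)
    d_bar = loo_estimates.mean()
    return float(np.sqrt((m - 1) / m * np.sum((loo_estimates - d_bar) ** 2)))
```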
### Generalized summary data-based Mendelian randomization
We investigated putative causal bidirectional relationships between these traits using GSMR37. Mendelian randomization is a method that uses genetic variants as instrumental variables, which are expected to be independent of confounding factors, to test for causative associations between an exposure and an outcome93. Mendelian randomization can be used to infer credible causal associations when randomized controlled trials are not feasible or are unethical93. GSMR performs a multi-SNP Mendelian randomization analysis using summary statistics. Let z be a genetic variant (e.g. SNP), x be the exposure (e.g. psychiatric disorder) and y be the outcome (e.g. body composition trait). First, GSMR is based on the premise that several nearly independent SNPs (z) are associated with the exposure (x). Second, it assumes that the exposure (x) has a causal effect on y. If both assumptions hold true, the SNPs that are associated with the exposure (x) will exert an effect on the outcome (y) via the exposure (x). If no pleiotropy is present, the estimate (bxy) at any of the SNPs that are associated with the exposure (x) should be highly similar, because the effect of each SNP on the outcome (y) will be mediated through the exposure (x). With the help of a generalized least squares (GLS) model, the estimates of bxy of each SNP that is associated with the exposure (x) can be combined, resulting in higher statistical power37,94. The GSMR method essentially implements summary data-based Mendelian randomization analysis for each SNP instrument individually, and integrates the bxy estimates of all the SNP instruments by GLS, accounting for the sampling variance in both bzx and bzy for each SNP and the LD among SNPs. We used individual-level genotype data from a subsample of the anorexia nervosa GWAS to approximate the underlying LD structure, to account for LD between the variants in the multi-SNP instrument. Pleiotropy is an important potential confounding factor that could bias the estimate and often results in an inflated test statistic in Mendelian randomization analysis. We therefore removed potentially pleiotropic SNPs (i.e. SNPs that have effects on both risk factor and outcome) from this analysis using the heterogeneity in dependent instruments outlier method37,95, which detects pleiotropic SNPs at which the estimates of bxy are significantly different from those expected under a causal model. The power to detect a pleiotropic SNP depends on the sample sizes of the GWAS data sets and the deviation of bxy estimated at the pleiotropic SNP from the causal model. The overall bxy can then be estimated from all the remaining instruments using a GLS approach that takes the LD between the variants and the correlations between their effect sizes into account by modelling them in a covariance matrix. Additionally, GSMR uses the intercept of the bivariate LD score regression to account for potential sample overlap between the GWASs used as instruments for the exposure or outcome12. Estimates with binary exposures were converted to the liability scale40. Some of these analyses are exploratory because a few of the utilised GWASs were underpowered (i.e. did not detect ≥10 genome-wide significant independent loci at a p value level of 5 × 10−8); for these, we lowered the p value threshold for inclusion in order to include at least 10 independent SNP instruments, as previously recommended37.
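To make the GLS combination step concrete, here is a minimal numerical sketch of combining per-SNP estimates of bxy. It assumes independent instruments with a diagonal covariance matrix, so it omits the LD modelling and HEIDI-outlier filtering that GSMR itself performs; all inputs are hypothetical:

```python
import numpy as np

def combine_bxy(b_zx, se_zx, b_zy, se_zy):
    """Combine per-SNP Wald ratios b_xy = b_zy / b_zx by inverse-variance GLS."""
    b_xy = b_zy / b_zx
    # Delta-method approximation to the variance of each Wald ratio
    var_xy = se_zy**2 / b_zx**2 + (b_zy**2 * se_zx**2) / b_zx**4
    w = 1.0 / var_xy                           # inverse-variance weights
    b_hat = np.sum(w * b_xy) / np.sum(w)       # GLS estimate (diagonal covariance)
    se_hat = np.sqrt(1.0 / np.sum(w))
    return b_hat, se_hat
```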
### Correction for multiple testing
We calculated the number of independent traits by matrix decomposition (i.e. number of principal components accounting for 99.5% of variance explained) and adjusted our p value threshold accordingly. The first matrix of the main analysis contained all 17 psychiatric traits, all four body composition traits, physical activity and childhood obesity (Supplementary Data 2). All sex-specific correlations were entered when available. The second matrix comprised all 17 psychiatric traits and all glycaemic traits listed in Supplementary Data 6, including their sex-specific correlations. The family-wise Bonferroni-corrected p value threshold for the main analysis, including the genetic correlations with body composition traits and physical activity, was pBonferroni = 0.05/190 = 2.6 × 10−4 and the family-wise p value threshold for the genetic correlations with glycaemic traits was pBonferroni = 0.05/231 = 2.2 × 10−4.
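A short sketch of this matrix-decomposition correction is shown below, with a hypothetical trait-correlation matrix `R`; in the paper the resulting numbers of tests were 190 and 231:

```python
import numpy as np

def bonferroni_threshold(R: np.ndarray, alpha=0.05, var_explained=0.995) -> float:
    """alpha divided by the number of principal components needed to explain
    `var_explained` of the variance of the trait correlation matrix."""
    eigvals = np.linalg.eigvalsh(R)[::-1]            # eigenvalues, descending
    cum = np.cumsum(eigvals) / eigvals.sum()
    n_independent = int(np.searchsorted(cum, var_explained) + 1)
    return alpha / n_independent
```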
### URLs
For METAL, see http://csg.sph.umich.edu/abecasis/metal/; for FUMA, see http://fuma.ctglab.nl/; for SSGAC, see https://www.thessgac.org/; for Complex Traits Genetics lab, see https://ctg.cncr.nl; for International Headache Genetics Consortium, see http://www.headachegenetics.org/; for the MAGIC, see https://www.magicinvestigators.org/; for UK Biobank, see https://www.ukbiobank.ac.uk/; for the PTSD working group of the Psychiatric Genomics Consortium, see https://pgc-ptsd.com/; for the Psychiatric Genomics Consortium, see http://www.med.unc.edu/pgc; for the R project, see https://www.r-project.org/; for the EGG Consortium, see https://egg-consortium.org/.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
Supplementary Data 1 contains all information on data availability, including download links for summary statistics. Summary statistics for the body composition GWASs are available at www.topherhuebel.com/GWAS and the GWAS catalogue (www.ebi.ac.uk/gwas/). All sex-combined summary statistics for the psychiatric disorders are available at www.med.unc.edu/pgc/results-and-downloads/ and for glycaemic traits at https://www.magicinvestigators.org/. Sex-specific summary statistics of the psychiatric disorders can be requested from each working group of the Psychiatric Genomics Consortium by submitting a secondary analysis proposal. The data that support the findings of this study are available from UK Biobank (www.ukbiobank.ac.uk). Restrictions apply to the availability of these data, which were used under license for the current study (Project ID: 27546). Data are available for bona fide researchers upon application to the UK Biobank.
## Code availability
Analysis code can be accessed at github.com/topherhuebel/ukbgwas. Software can be accessed for BGENIE at https://jmarchini.org/bgenie/; for BOLT-LMM v2.3.2 at https://data.broadinstitute.org/alkesgroup/BOLT-LMM/; for LDSC v1 at https://github.com/bulik/ldsc; for METAL at http://csg.sph.umich.edu/abecasis/metal/; for R 3.4 at https://www.r-project.org/; and for GSMR at https://cnsgenomics.com/software/gcta/.
## References
1. Geschwind, D. H. Evolving views of human genetic variation and its relationship to neurologic and psychiatric disease. Handb. Clin. Neurol. 147, 37–42 (2018).
2. Polderman, T. J. C. et al. Meta-analysis of the heritability of human traits based on fifty years of twin studies. Nat. Genet. 47, 702–709 (2015).
3. Schizophrenia Working Group of the Psychiatric Genomics Consortium. Biological insights from 108 schizophrenia-associated genetic loci. Nature 511, 421–427 (2014).
4. Howard, D. M. et al. Genome-wide meta-analysis of depression identifies 102 independent variants and highlights the importance of the prefrontal brain regions. Nat. Neurosci. https://doi.org/10.1038/s41593-018-0326-7 (2019).
5. Visscher, P. M. et al. 10 years of GWAS discovery: biology, function, and translation. Am. J. Hum. Genet. 101, 5–22 (2017).
6. Breen, G. et al. Translating genome-wide association findings into new therapeutics for psychiatry. Nat. Neurosci. 19, 1392–1396 (2016).
7. Gaspar, H. A. & Breen, G. Drug enrichment and discovery from schizophrenia genome-wide association results: an analysis and visualisation approach. Sci. Rep. 7, 12460 (2017).
8. Gaspar, H. A. et al. Using genetic drug-target networks to develop new drug hypotheses for major depressive disorder. Transl. Psychiatry 9, 117 (2019).
9. Maier, R. M., Visscher, P. M., Robinson, M. R. & Wray, N. R. Embracing polygenicity: a review of methods and tools for psychiatric genetics research. Psychol. Med. 48, 1–19 (2017).
10. Sullivan, P. F. et al. Psychiatric genomics: an update and an agenda. Am. J. Psychiatry 175, 15–27 (2018).
11. Bulik-Sullivan, B. K. et al. LD Score regression distinguishes confounding from polygenicity in genome-wide association studies. Nat. Genet. 47, 291–295 (2015).
12. Bulik-Sullivan, B. K. et al. An atlas of genetic correlations across human diseases and traits. Nat. Genet. 47, 1236–1241 (2015).
13. Brainstorm Consortium et al. Analysis of shared heritability in common disorders of the brain. Science 360, https://doi.org/10.1126/science.aap8757 (2018).
14. Selzam, S., Coleman, J. R. I., Caspi, A., Moffitt, T. E. & Plomin, R. A polygenic p factor for major psychiatric disorders. Transl. Psychiatry 8, 205 (2018).
15. Pettersson, E., Larsson, H. & Lichtenstein, P. Common psychiatric disorders share the same genetic origin: a multivariate sibling study of the Swedish population. Mol. Psychiatry 21, 717–721 (2016).
16. Grotzinger, A. D. et al. Genomic structural equation modelling provides insights into the multivariate genetic architecture of complex traits. Nat. Hum. Behav. 1, 513–525 (2019).
17. Kahl, K. G., Deuschle, M., Stubbs, B. & Schweiger, U. Visceral adipose tissue in patients with severe mental illness. Horm. Mol. Biol. Clin. Investig. 33, 1–7 (2018).
18. Schaumberg, K. et al. The science behind the academy for eating disorders' nine truths about eating disorders. Eur. Eat. Disord. Rev. 25, 432–450 (2017).
19. Correll, C. U. et al. Prevalence, incidence and mortality from cardiovascular disease in patients with pooled and specific severe mental illness: a large-scale meta-analysis of 3,211,768 patients and 113,383,368 controls. World Psychiatry 16, 163–180 (2017).
20. Tarnoki, A. D. et al. Bioimpedance analysis of body composition in an international twin cohort. Obes. Res. Clin. Pract. 8, e201–98 (2014).
21. Schousboe, K. et al. Twin study of genetic and environmental influences on adult body size, shape, and composition. Int. J. Obes. Relat. Metab. Disord. 28, 39–48 (2004).
22. Finucane, H. K. et al. Heritability enrichment of specifically expressed genes identifies disease-relevant tissues and cell types. Nat. Genet. 50, 621–629 (2018).
23. Ikeda, M. et al. Re-evaluating classical body type theories: genetic correlation between psychiatric disorders and body mass index. Psychol. Med. 48, 1745–1748 (2018).
24. Duncan, L. et al. Significant locus and metabolic genetic correlations revealed in genome-wide association study of anorexia nervosa. Am. J. Psychiatry 174, 850–858 (2017).
25. Watson, H. J. et al. Anorexia nervosa genome-wide association study identifies eight loci and implicates metabo-psychiatric origins. Nat. Genet. 51, 1207–1214 (2019).
26. Demontis, D. et al. Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nat. Genet. 51, 63–75 (2019).
27. Wray, N. R. et al. Genome-wide association analyses identify 44 risk variants and refine the genetic architecture of major depression. Nat. Genet. 50, 668–681 (2018).
28. Kelly, T., Yang, W., Chen, C.-S., Reynolds, K. & He, J. Global burden of obesity in 2005 and projections to 2030. Int. J. Obes. 32, 1431–1437 (2008).
29. Mauvais-Jarvis, F. Sex differences in metabolic homeostasis, diabetes, and obesity. Biol. Sex. Differ. 6, 14 (2015).
30. Yao, S. et al. Familial liability for eating disorders and suicide attempts: evidence from a population registry in Sweden. JAMA Psychiatry 73, 284–291 (2016).
31. Fernandez-Pujals, A. M. et al. Epidemiology and heritability of major depressive disorder, stratified by age of onset, sex, and illness course in Generation Scotland: Scottish Family Health Study (GS:SFHS). PLoS ONE 10, e0142197 (2015).
32. Bandelow, B. & Michaelis, S. Epidemiology of anxiety disorders in the 21st century. Dial. Clin. Neurosci. 17, 327–335 (2015).
33. Fayyad, J. et al. The descriptive epidemiology of DSM-IV adult ADHD in the World Health Organization World Mental Health Surveys. Atten. Defic. Hyperact. Disord. 9, 47–65 (2017).
34. Loomes, R., Hull, L. & Mandy, W. P. L. What is the male-to-female ratio in autism spectrum disorder? A systematic review and meta-analysis. J. Am. Acad. Child Adolesc. Psychiatry 56, 466–474 (2017).
35. Papathanasiou, A., Nolen-Doerr, E., Farr, O. & Mantzoros, C. S. Geoffrey Harris Prize 2018: novel pathways regulating neuroendocrine function, energy homeostasis and metabolism in humans. Eur. J. Endocrinol. https://doi.org/10.1530/EJE-18-0847 (2018).
36. McCarthy, M. M., Nugent, B. M. & Lenz, K. M. Neuroimmunology and neuroepigenetics in the establishment of sex differences in the brain. Nat. Rev. Neurosci. https://doi.org/10.1038/nrn.2017.61 (2017).
37. Zhu, Z. et al. Causal associations between risk factors and common diseases inferred from GWAS summary data. Nat. Commun. 9, 224 (2018).
38. Yang, J., Zeng, J., Goddard, M. E., Wray, N. R. & Visscher, P. M. Concepts, estimation and interpretation of SNP-based heritability. Nat. Genet. 49, 1304–1310 (2017).
39. Nyholt, D. R. A simple correction for multiple testing for single-nucleotide polymorphisms in linkage disequilibrium with each other. Am. J. Hum. Genet. 74, 765–769 (2004).
40. Byrne, E. M. et al. Conditional GWAS analysis identifies putative disorder-specific SNPs for psychiatric disorders. bioRxiv https://doi.org/10.1101/592899 (2019).
41. Hübel, C. et al. Genomics of body fat percentage may contribute to sex bias in anorexia nervosa. Am. J. Med. Genet. B https://doi.org/10.1002/ajmg.b.32709 (2018).
42. Manu, P. et al. Weight gain and obesity in schizophrenia: epidemiology, pathobiology, and management. Acta Psychiatr. Scand. 132, 97–108 (2015).
43. Raben, A. T. et al. The complex relationship between antipsychotic-induced weight gain and therapeutic benefits: a systematic review and implications for treatment. Front. Neurosci. 11, 741 (2017).
44. Hermann, S. et al. The association of education with body mass index and waist circumference in the EPIC-PANACEA study. BMC Public Health 11, 169 (2011).
45. Helgeland, Ø. et al. Genome-wide association study reveals a dynamic role of common genetic variation in infant and early childhood growth. bioRxiv https://doi.org/10.1101/478255 (2018).
46. Kendler, K. S., Gardner, C. O. & Lichtenstein, P. A developmental twin study of symptoms of anxiety and depression: evidence for genetic innovation and attenuation. Psychol. Med. 38, 1567–1575 (2008).
47. Cortese, S. et al. Association between ADHD and obesity: a systematic review and meta-analysis. Am. J. Psychiatry 173, 34–43 (2016).
48. Chen, Q. et al. Shared familial risk factors between attention-deficit/hyperactivity disorder and overweight/obesity—a population-based familial coaggregation study in Sweden. J. Child Psychol. Psychiatry 58, 711–718 (2017).
49. Quesada, D., Ahmed, N. U., Fennie, K. P., Gollub, E. L. & Ibrahimou, B. A review: associations between attention-deficit/hyperactivity disorder, physical activity, medication use, eating behaviors and obesity in children and adolescents. Arch. Psychiatr. Nurs. 32, 495–504 (2018).
50. Graziano, P. A. et al. Co-occurring weight problems among children with attention deficit/hyperactivity disorder: the role of executive functioning. Int. J. Obes. 36, 567 (2011).
51. Cournot, M. et al. Relation between body mass index and cognitive function in healthy middle-aged men and women. Neurology 67, 1208–1214 (2006).
52. Khalife, N. et al. Childhood attention-deficit/hyperactivity disorder symptoms are risk factors for obesity and physical inactivity in adolescence. J. Am. Acad. Child Adolesc. Psychiatry 53, 425–436 (2014).
53. Kleinridders, A., Ferris, H. A., Cai, W. & Kahn, C. R. Insulin action in brain regulates systemic metabolism and brain function. Diabetes 63, 2232–2243 (2014).
54. Schuch, F. B. et al. Exercise improves physical and psychological quality of life in people with depression: a meta-analysis including the evaluation of control group response. Psychiatry Res. 241, 47–54 (2016).
55. Silventoinen, K. et al. Genetic and environmental effects on body mass index from infancy to the onset of adulthood: an individual-based pooled analysis of 45 twin cohorts participating in the COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) study. Am. J. Clin. Nutr. 104, 371–379 (2016).
56. Silventoinen, K. et al. Differences in genetic and environmental variation in adult BMI by sex, age, time period, and region: an individual-based pooled analysis of 40 twin cohorts. Am. J. Clin. Nutr. 106, 457–466 (2017).
57. Warren, H. R. et al. Genome-wide association analysis identifies novel blood pressure loci and offers biological insights into cardiovascular risk. Nat. Genet. 49, 403–415 (2017).
58. Allen, N. E., Sudlow, C., Peakman, T., Collins, R. & UK Biobank. UK Biobank data: come and get it. Sci. Transl. Med. 6, 224ed4 (2014).
59. Sudlow, C. et al. UK Biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 12, e1001779 (2015).
60. Purcell, S., Cherny, S. S. & Sham, P. C. Genetic Power Calculator: design of linkage and association genetic mapping studies of complex traits. Bioinformatics 19, 149–150 (2003).
61. Kyle, U. G. et al. Bioelectrical impedance analysis—part I: review of principles and methods. Clin. Nutr. 23, 1226–1243 (2004).
62. Lu, Y. et al. New loci for body fat percentage reveal link between adiposity and cardiometabolic disease risk. Nat. Commun. 7, 10495 (2016).
63. Townsend, P. Deprivation. J. Soc. Policy 16, 125 (1987).
64. McCarthy, S. et al. A reference panel of 64,976 haplotypes for genotype imputation. Nat. Genet. 48, 1279–1283 (2016).
65. Bycroft, C. et al. The UK Biobank resource with deep phenotyping and genomic data. Nature 562, 203–209 (2018).
66. Abraham, G., Qiu, Y. & Inouye, M. FlashPCA2: principal component analysis of Biobank-scale genotype datasets. Bioinformatics 33, 2776–2778 (2017).
67. Willer, C. J., Li, Y. & Abecasis, G. R. METAL: fast and efficient meta-analysis of genomewide association scans. Bioinformatics 26, 2190–2191 (2010).
68. Watanabe, K., Taskesen, E., van Bochoven, A. & Posthuma, D. Functional mapping and annotation of genetic associations with FUMA. Nat. Commun. 8, 1826 (2017).
69. MacArthur, J. et al. The new NHGRI-EBI Catalog of published genome-wide association studies (GWAS Catalog). Nucleic Acids Res. 45, D896–D901 (2017).
70. Loh, P.-R., Kichaev, G., Gazal, S., Schoech, A. P. & Price, A. L. Mixed-model association for biobank-scale datasets. Nat. Genet. 50, 906–908 (2018).
71. Yang, J., Lee, S. H., Wray, N. R., Goddard, M. E. & Visscher, P. M. GCTA-GREML accounts for linkage disequilibrium when estimating genetic variance from genome-wide SNPs. Proc. Natl Acad. Sci. USA 113, E4579–E4580 (2016).
72. Yang, J., Lee, S. H., Goddard, M. E. & Visscher, P. M. GCTA: a tool for genome-wide complex trait analysis. Am. J. Hum. Genet. 88, 76–82 (2011).
73. Chen, G.-B. Estimating heritability of complex traits from genome-wide association studies using IBS-based Haseman–Elston regression. Front. Genet. 5, 107 (2014).
74. Psychiatric GWAS Consortium Bipolar Disorder Working Group. Large-scale genome-wide association analysis of bipolar disorder identifies a new susceptibility locus near ODZ4. Nat. Genet. 43, 977–983 (2011).
75. Stahl, E. A. et al. Genome-wide association study identifies 30 loci associated with bipolar disorder. Nat. Genet. 51, 793–803 (2019).
76. Martin, J. et al. A genetic investigation of sex bias in the prevalence of attention-deficit/hyperactivity disorder. Biol. Psychiatry https://doi.org/10.1016/j.biopsych.2017.11.026 (2017).
77. Walters, R. K. et al. Transancestral GWAS of alcohol dependence reveals common genetic underpinnings with psychiatric disorders. Nat. Neurosci. 21, 1656–1669 (2018).
78. Autism Spectrum Disorders Working Group of The Psychiatric Genomics Consortium. Meta-analysis of GWAS of over 16,000 individuals with autism spectrum disorder highlights a novel locus at 10q24.32 and a significant overlap with schizophrenia. Mol. Autism 8, 21 (2017).
79. Duncan, L. E. et al. Largest GWAS of PTSD (N = 20,070) yields genetic overlap with schizophrenia and sex differences in heritability. Mol. Psychiatry https://doi.org/10.1038/mp.2017.77 (2017).
80. Mattheisen, M. et al. Genome-wide association study in obsessive-compulsive disorder: results from the OCGAS. Mol. Psychiatry 20, 337–344 (2015).
81. Khramtsova, E. A. et al. Sex differences in the genetic architecture of obsessive-compulsive disorder. Am. J. Med. Genet. B https://doi.org/10.1002/ajmg.b.32687 (2018).
82. Witt, S. H. et al. Genome-wide association study of borderline personality disorder reveals genetic overlap with bipolar disorder, major depression and schizophrenia. Transl. Psychiatry 7, e1155 (2017).
83. Stringer, S. et al. Genome-wide association study of lifetime cannabis use based on a large meta-analytic sample of 32,330 subjects from the International Cannabis Consortium. Transl. Psychiatry 6, e769 (2016).
84. Purves, K. L. et al. The common genetic architecture of anxiety disorders. Mol. Psychiatry https://doi.org/10.1038/s41380-019-0559-1 (2019).
85. Hammerschlag, A. R. et al. Genome-wide association analysis of insomnia complaints identifies risk genes and genetic overlap with psychiatric and metabolic traits. Nat. Genet. https://doi.org/10.1038/ng.3888 (2017).
86. Wain, L. V. et al. Novel insights into the genetics of smoking behaviour, lung function, and chronic obstructive pulmonary disease (UK BiLEVE): a genetic association study in UK Biobank. Lancet Respir. Med. 3, 769–781 (2015).
87. Okbay, A. et al. Genome-wide association study identifies 74 loci associated with educational attainment. Nature 533, 539–542 (2016).
88. Gormley, P. et al. Meta-analysis of 375,000 individuals identifies 38 susceptibility loci for migraine. Nat. Genet. 48, 856–866 (2016).
89. Anttila, V. et al. Genome-wide meta-analysis identifies new susceptibility loci for migraine. Nat. Genet. 45, 912–917 (2013).
90. Dupuis, J. et al. New genetic loci implicated in fasting glucose homeostasis and their impact on type 2 diabetes risk. Nat. Genet. 42, 105–116 (2010).
91. Bradfield, J. P. et al. A genome-wide association meta-analysis identifies new childhood obesity loci. Nat. Genet. 44, 526–531 (2012).
92. Graff, M. et al. Genome-wide analysis of BMI in adolescents and young adults reveals additional insight into the effects of genetic loci over the life course. Hum. Mol. Genet. 22, 3597–3607 (2013).
93. Davey Smith, G. & Ebrahim, S. 'Mendelian randomization': can genetic epidemiology contribute to understanding environmental determinants of disease? Int. J. Epidemiol. 32, 1–22 (2003).
94. Wooldridge, J. M. Introductory Econometrics: A Modern Approach (Nelson Education, 2015).
95. Zhu, Z. et al. Integration of summary data from GWAS and eQTL studies predicts complex trait gene targets. Nat. Genet. 48, 481–487 (2016).
## Acknowledgements
This study represents independent research part funded by the UK National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust and King's College London. The views expressed are those of the author(s) and not necessarily those of the UK NHS, the NIHR or the Department of Health and Social Care. High-performance computing facilities were funded with capital equipment grants from the GSTT Charity (TR130505) and Maudsley Charity (980). Research reported in this publication was supported by the USA National Institute of Mental Health of the National Institutes of Health (NIMH) under Award Numbers U01 MH109514, U01 MH109528 and U01 MH109536. The PGC Substance Use Disorders group acknowledges support from MH109532. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Prof. Bulik acknowledges funding from the Swedish Research Council (VR Dnr: 538-2013-8864) and the Klarman Family Foundation (the Anorexia Nervosa Genetics Initiative is an initiative of the Klarman Family Foundation). Profs. Bulik and Micali are supported by NIMH R21 MH115397. PFO receives funding from the UK Medical Research Council (MR/N015746/1) and the Wellcome Trust (109863/Z/15/Z). Dr. Graff acknowledges funding from the National Institutes of Health (R01HD057194). Dr. Workalemahu acknowledges funding by the Intramural Research Programme of the Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health. Dr. Prokopenko was funded by the European Union's Horizon 2020 research and innovation programme (LONGITOOLS, H2020-SC1-2019-874739; DYNAhealth, H2020-PHC-2014-633595) and the Wellcome Trust (WT205915). Data on glycaemic traits have been contributed by MAGIC investigators and have been downloaded from www.magicinvestigators.org. Data on the childhood BMI trait has been contributed by the EGG Consortium and has been downloaded from www.egg-consortium.org. This study was completed as part of approved UK Biobank study application 27546 to Dr. Breen. Open access funding provided by Karolinska Institute.
## Author information
### Contributions
C.H., C.M.B. and G.B. designed research; C.H., H.A.G., J.R.I.C., K.B.H., K.P., I.P., M.G., J.S.N. and T.W. provided essential materials; C.H., H.A.G., J.R.I.C., K.B.H. and K.P. analysed data or performed statistical analysis; C.H., H.A.G., J.R.I.C. and G.B. wrote paper; C.H. and G.B. had primary responsibility for final content. All authors read and approved the final manuscript.
### Corresponding author
Correspondence to Christopher Hübel.
## Ethics declarations
### Competing interests
Dr. Breen has received grant funding from and served as a consultant to Eli Lilly, has received honoraria from Illumina and has served on advisory boards for Otsuka. Dr. Bulik is a grant recipient from and has served on advisory boards for Shire. She receives royalties from Pearson. All interests are unrelated to this work. Dr. Coleman, Dr. Gaspar, Ms. Purves, Dr. Hübel, Dr. Hanscombe, Dr. Prokopenko, Dr. Graff, Dr. Ngwa, Dr. Workalemahu and Dr. O'Reilly declare no competing interests.
Peer review information Nature Communications thanks Ryan Bogdan, Oleksandr Frei and Xiong-Jian Luo for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A full list of consortia members and their affiliations can be found in the Supplementary Information.
Hübel, C., Gaspar, H.A., Coleman, J.R.I. et al. Genetic correlations of psychiatric traits with body composition and glycemic traits are sex- and age-dependent. Nat Commun 10, 5765 (2019). https://doi.org/10.1038/s41467-019-13544-0
calcium reacts with aluminum
Calcium Carbonate (Limestone) | Mosaic Crop Nutrition
Calcium carbonate, the chief component of limestone, is a widely used amendment to neutralize soil acidity and to supply calcium (Ca) for plant nutrition. The term “lime” can refer to several products, but for agricultural use it generally refers to ground limestone.
Calcium Aluminum Alloy | AMERICAN ELEMENTS
Calcium Aluminum Alloy Ca-Al bulk & research qty manufacturer. Properties, SDS, Applications, Price. Free samples program. Term contracts & credit cards/PayPal accepted. SECTION 6. ACCIDENTAL RELEASE MEASURES Personal precautions, protective
Facts About Calcium | Live Science
Interestingly, calcium seems to come in fifth place wherever it goes: It is the fifth most abundant element by mass in the Earth's crust (after oxygen, silicon, aluminum and iron); the fifth most
Does Aluminum React With Water? - Reference
2020/4/14· Aluminum does react with water. The reaction happens differently, depending on what form the aluminum is in and what other elements it is bonded to.
Calcium Aluminate - an overview | ScienceDirect Topics
Calcium aluminate cements (CACs) represent an interesting alternative because their pore water pH, ranging from 11.4 to 12.5, is reduced as compared to OPC (Goñi et al., 1991). The main difference between Portland and calcium aluminate cements lies in the nature of the active phase that leads to setting and hardening.
Propanoic Acid And Calcium Carbonate Equation
a) Calcium carbonate reacts with hydrochloric acid to give calcium chloride, carbon dioxide and water. State one similarity and one difference in the observations you could make. The problem states that there is 25 mL of a propionic acid where the concentration is unknown and it was titrated with.
Calcium Hydroxide - Structure, Properties, and Uses of …
Limewater reacts with acids and forms salts. The saturated solution of calcium hydroxide in water also reacts with and dissolves metals such as aluminum. It reacts with carbon dioxide to form calcium carbonate (CaCO 3). This reaction is commonly referred to
Unit 9 Chemical Equations and Reactions
8. Octane (C8H18) reacts with oxygen gas to produce carbon dioxide and water. (a) 2 C8H18 + 25 O2 → 18 H2O + 16 CO2 (b) Combustion 9. Calcium carbonate reacts with aluminum phosphate to produce calcium phosphate and aluminum (a) 3 3
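For reference, balanced forms of the two reactions sketched in this snippet are written out below; the coefficients for the second reaction are inferred from the truncated text, not taken from the source:

$$2\,\mathrm{C_8H_{18}} + 25\,\mathrm{O_2} \rightarrow 16\,\mathrm{CO_2} + 18\,\mathrm{H_2O}$$

$$3\,\mathrm{CaCO_3} + 2\,\mathrm{AlPO_4} \rightarrow \mathrm{Ca_3(PO_4)_2} + \mathrm{Al_2(CO_3)_3}$$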
Calcium hydroxide | chemical compound | Britannica
Calcium hydroxide, also called slaked lime, Ca(OH) 2, is obtained by the action of water on calcium oxide. When mixed with water, a small proportion of it dissolves, forming a solution known as limewater, the rest remaining as a suspension called milk of lime.
(PDF) Calcium Modification of Spinel Inclusions in …
calcium reacts with alumina (for this reason it has been suggested that spinels are easier … The formation of intermediate reaction products after calcium addition to aluminum-killed steel was
Composition of aluminum in aluminum oxide - 00767643
Composition of aluminum in aluminum oxide - 00767643 Tutorials for Question of General Questions and General General Questions 1. A 3.53-g sample of aluminum completely reacts with oxygen to form 6.67 g of aluminum oxide. Use this data to calculate the mass
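As a quick check on the quoted figures, the implied mass fraction of aluminum in aluminum oxide is

$$\frac{3.53\ \mathrm{g\ Al}}{6.67\ \mathrm{g\ Al_2O_3}} \approx 0.529,$$

i.e. about 52.9%, which agrees with the theoretical value $2 \times 26.98 / 101.96 \approx 52.9\%$ for Al2O3.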
Calcium Nitrate Alternatives In Hydroponic & Fertigation …
Lime adds calcium and magnesium to the soil, and the carbonate reacts with acids in the soil (like H+) to raise the soil pH. Lime can be used in acidic soils; the amount required is determined from a complete soil analysis – which takes into account the pH, H+, Ca2+, other ions and soil CEC (heaviness) in a scientific way to balance pH and ions properly.
When sulfuric acid reacts with calcium hyd | Clutch Prep
When sulfuric acid reacts with calcium hydroxide, calcium sulfate and water are produced. The balanced equation for this reaction is: H 2 SO 4 (aq) + Ca(OH) 2 (s) → CaSO 4 (s) + 2H 2 O(1) If 3 moles of calcium hydroxide react.. (a) The reaction
How hydrochloric acid reacts with aluminum. Formulas …
How hydrochloric acid reacts with aluminum. Formulas and description of the process. Features of hydrochloric acid and aluminum interaction. Aluminum is a metal. It is a good electrical conductor. It is also amphoteric – it can re
Calcium
Measuring calcium has never been so easy.
Thermite Reaction: aluminum reacts with iron(III) oxide | …
Thermite Reaction: aluminum reacts with iron(III) oxide The reaction of iron(III) oxide and aluminum is initiated by heat released from a small amount of "starter mixture". This reaction is an oxidation-reduction reaction, a single replacement reaction, producing great quantities of heat (flame and sparks) and a stream of molten iron and aluminum oxide which pours out of a hole in the bottom of
CANCRINITE (Sodium Calcium Aluminum Silicate …
Cancrinite is one of the rarer members of the feldspathoid group of minerals. Minerals whose chemistries are close to that of the alkali feldspars but are poor in silica (SiO2) content are called feldspathoids. As a result, or more correctly as a function of that fact, they
The Effect of Calcium Hydroxide Ammonium Fertilizer on …
Calcium hydroxide, which is produced by adding water to calcium oxide, addresses the problem presented by ammonium fertilizer. It reacts in water to release calcium ions and hydroxide ions, which raise the pH of the soil.
Acid-base Behavior of the Oxides - Chemistry LibreTexts
Aluminum oxide reacts with hot dilute hydrochloric acid to give aluminum chloride solution. $Al_2O_3 + 6HCl \rightarrow 2AlCl_3 + 3H_2O$ This reaction and others display the amphoteric nature of aluminum oxide. Reaction with bases: Aluminum
When calcium reacts with chlorine the reaction involves …
When calcium reacts with chlorine the reaction involves a … transfer of … from ENT 215 at University of North Carolina, Greensboro. As a current student on this bumpy collegiate pathway, I stumbled upon Course Hero, where I can find study resources for nearly all my
Calcium (Ca) - Chemical properties, Health and …
Calcium The chemical element Calcium (Ca), atomic number 20, is the fifth element and the third most abundant metal in the earth's crust. The metal is trimorphic, harder than sodium, but softer than aluminium. As well as beryllium and aluminium, and unlike the alkaline metals, it doesn't cause skin-burns.
When calcium reacts with bromine , calcium ions , Ca 2+ …
Answer to When calcium reacts with bromine, calcium ions, Ca 2+, and bromide ions, Br −, are formed. In this reaction, bromine atoms A) lose electrons
[Solved] Solid calcium fluoride (CaF2) reacts with sulfuric …
Solid calcium fluoride (CaF 2) reacts with sulfuric acid to form solid calcium sulfate and gaseous hydrogen fluoride.The HF is then dissolved in water to form hydrofluoric acid. A source of calcium fluoride is fluorite ore containing 96.0 wt% CaF 2 and 4.0% SiO 2. In a
CALCIUM SULFATE | CAMEO Chemicals | NOAA
CALCIUM SULFATE is non-combustible. Decomposes to give toxic oxides of sulfur, but only at very high temperature (>1500 °C). Generally of low reactivity but may act as an oxidizing agent: incompatible with diazomethane, aluminum, and phosphorus. Certain
Reaction of Aluminum with Water to Produce Hydrogen
Here, the aluminum powder is fed into the reaction chamber, where it reacts with the sodium hydroxide solution near room temperature, with the production of hydrogen gas and the formation of reaction byproducts at the bottom of the reactor.
# Git Worktrees
by Robert M. Johnson - Fri 16 February 2018
Ever wanted to have multiple branches checked out in git?
Git worktrees were introduced in July 2015 with Git 2.5 (https://github.com/blog/2042-git-2-5-including-multiple-worktrees-and-triangular-workflows) to solve this problem. Let me show you how to use this with some examples. I will create a git repo called "foo":
$ git init ~/foo

When I do this, my git repo is now located at ~/foo/.git. If I view what's under the .git folder, I have the following files:

• HEAD
• config
• description

And the following folders:

• branches
• hooks
• info
• objects
• refs

I will add one file to master and commit it:

$ touch a.txt; git add a.txt; git commit -m "First checkin"
OK, now let's create a branch named "dev":
$ git branch dev

Now, let's create a folder ~/foo/worktrees called "worktrees" and switch to it (the name of this folder doesn't matter):

$ mkdir worktrees; cd worktrees
From here, we now run the following git command:
$ git worktree add dev-branch dev

After this, you'll get some output like the following:

Preparing worktrees/dev-branch (identifier dev)
HEAD is now at 9aaccf4 First checkin

The git worktree command takes the following form:

git worktree add [name-of-our-worktree] [branch-name]

You'll see a new directory appear named "dev-branch". If you navigate inside, you are now in the "dev" working directory! But you can back out, and be in the "master" working directory. So in git, we refer to the working directories created by git worktree as linked working trees. This is distinguished from our main working tree, which in this case is master.

What happens if you try to create a linked working tree for master? Note that our main working tree is master. Well, we can run:

$ git worktree add master master
We get the following output:
fatal: 'master' is already checked out at '~/foo'

Git doesn't allow us to have two working directories that would be the same. However, if you add the -f flag, you can force git to do this anyway. I'm not aware of a use case where you would want to do this.

OK, another thing to notice is that there is a file in our linked working tree named ".git". What is this file? If we look inside, we find:

gitdir: ~/foo/.git/worktrees/dev-branch
If we now look back in our ~/foo/.git, we find a new folder called "worktrees". Inside this folder, there is a folder for each linked working tree. If we look inside the "dev-branch" folder, we see the following files:
• gitdir
• commondir
• index
and folders:
• logs
If we look at the contents of commondir, it is "../..". It's a relative path, and it is relative to GIT_DIR, which in this case is ~/foo/.git/worktrees/dev-branch. Thus, GIT_COMMON_DIR points to ~/foo/.git
# Viewing Worktrees
At any point, if you want to list all working trees (including the main working tree), run:
$ git worktree list

This yields (given our current setup):

~/foo/.git (bare)
~/foo/worktrees/dev-branch 9aaccf4 [dev]

Note the main working directory is included in this list.

# Deleting Worktrees

To delete a linked working tree, simply delete the folder. When you do this, you'll notice that in the main git directory, the worktree information is still there. If you want to clean this up, run:

$ git worktree prune

Otherwise, git will clean it up automatically based on the value set in your git config for gc.worktreePruneExpire.

# Moving Worktrees

What happens if you move your working directory? Let's do this:

$ mv dev-branch dev-branch-move
Now, if we run git worktree list it still reports ~/foo/worktrees/dev-branch as the path
To update, I modified the following items:
• ~/foo/worktrees/dev-branch-move/.git
• ~/foo/.git/worktrees/dev-branch --> ~/foo/.git/worktrees/dev-branch-move
• ~/foo/.git/worktrees/dev-branch-move/gitdir
Also, I suppose you would need to update ~/foo/.git/worktrees/dev-branch-move/gitdir if you moved the linked directory tree to a different directory
# What's next?
In https://git-scm.com/docs/git-worktree, it is noted that remove and mv commands would be good additions to git worktree, and I agree. Right now, it's a set of manual steps.
## Is 1 = 0.999... ? Really?
For discussing anything related to physics, biology, chemistry, mathematics, and their practical applications.
Moderator: Flannel Jesus
## Is it true that 1 = 0.999...? And Exactly Why or Why Not?
| Option | Votes | Share |
| --- | --- | --- |
| Yes, 1 = 0.999... | 13 | 41% |
| No, 1 ≠ 0.999... | 16 | 50% |
| Other | 3 | 9% |
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
Ecmandu wrote:1/9 = 0.111...
Your word “are” also means “equals” in my case above, a finite expression “is” (are) equals an infinite expression.
The way I understand it, what you're saying is:
1) $$0.111\dotso$$ is an infinite sequence.
(Disagree.)
2) $$\frac{1}{9}$$ is a finite sequence.
(Disagree.)
3) $$0.111\dotso = \frac{1}{9}$$ is true.
(Disagree.)
4) If #1, #2 and #3 are true, it follows that there is at least one infinite sequence that is a finite sequence.
(I'm inclined to agree with this.)
The problem is that $$\frac{1}{9}$$ and $$0.111\dotso$$ are not sequences. They are numbers. (And it's also not true that $$0.111\dotso = \frac{1}{9}$$ but that's a peripheral issue.)
Ok, we’re going to dig at each other every so often, so I’ll just ignore your post after this.
So...
I found it interesting that you omitted the sequence:
1/10+1/100+1/1000+1/10000 etc...
But just decided to write 0.111...
The first part IMPLIES the sequence!
1/9 is finite in that it is a rational in its fraction form!!
0.111... is a repeating rational in its decimal form.
All of this shit:
1/9
0.111...
1/10+1/100+1/1000 etc...
All equal each other!
I have no clue why you are playing such subtle word games that don’t change the content of what I wrote whatsoever, but here you are, doing just that!
Ecmandu
ILP Legend
Posts: 10869
Joined: Thu Dec 11, 2014 1:22 am
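For reference, the identity disputed above is the standard geometric-series computation, with the infinite sum defined as the limit of its partial sums:

$$\sum_{k=1}^{\infty} \frac{1}{10^k} = \frac{\tfrac{1}{10}}{1-\tfrac{1}{10}} = \frac{1}{9} = 0.111\dotso$$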
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:I found it interesting that you omitted the sequence:
1/10+1/100+1/1000+1/10000 etc...
That's not a sequence, that's a sum. You are confusing the two.
$$0.111\dots$$, which is the same as $$\frac{1}{10} + \frac{1}{10^2} + \frac{1}{10^3} + \cdots$$, is an infinite sum. It is not an infinite sequence. There's a huge difference between the two.
1/9 is finite in that it is a rational in its fraction form!!
It's not a finite sequence. It's not a sequence. It's a NUMBER.
I have no clue why you are playing such subtle word games
That would be you.
You need to learn language.
Magnus Anderson
Philosopher
Posts: 4642
Joined: Mon Mar 17, 2014 7:26 pm
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
Ecmandu wrote:I found it interesting that you omitted the sequence:
1/10+1/100+1/1000+1/10000 etc...
That's not a sequence, that's a sum. You are confusing the two.
$$0.111\dots$$, which is the same as $$\frac{1}{10} + \frac{1}{10^2} + \frac{1}{10^3} + \cdots$$, is an infinite sum. It is not an infinite sequence. There's a huge difference between the two.
1/9 is finite in that it is a rational in its fraction form!!
It's not a finite sequence. It's not a sequence. It's a NUMBER.
I have no clue why you are playing such subtle word games
That would be you.
You need to learn language.
It’s only an infinite sum if it converges, an infinite sequence is just an infinitely expanding discernible pattern.
I’m aware that 1/9 is a number. I stated that it’s a rational in fractional form. Because there’s a divisor, it’s also an operation.
Ecmandu
ILP Legend
Posts: 10869
Joined: Thu Dec 11, 2014 1:22 am
### Re: Is 1 = 0.999... ? Really?
It’s only an infinite sum if it converges
Not really.
An infinite sum is simply a sum consisting of an infinite number of terms. Whether it converges or not has nothing to do with it.
I’m aware that 1/9 is a number. I stated that it’s a rational in fractional form. Because there’s a divisor, it’s also an operation.
But you don't seem to be aware that it is not a finite sequence.
Actually, it is you who are 1) playing word games, and 2) avoiding addressing other people's claims.
As for me, I think I responded to almost every claim you made. Can you show me a claim I did not respond to?
Perhaps you simply don't like how I responded to your claims? If this is the case, can you explain why? What exactly are your expectations?
Magnus Anderson
Philosopher
Posts: 4642
Joined: Mon Mar 17, 2014 7:26 pm
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
It’s only an infinite sum if it converges
Not really.
An infinite sum is simply a sum cosisting of an infinite number of terms. Whether it converges or not has nothing to do with it.
I’m aware that 1/9 is a number. I stated that it’s a rational in fractional form. Because there’s a divisor, it’s also an operation.
But you don't seem to be aware that it is not a finite sequence.
Actually, it is you who are 1) playing word games, and 2) avoiding addressing other people's claims.
As for me, I think I responded to almost every claim you made. Can you show me a claim I did not respond to?
Perhaps you simply don't like how I responded to your claims? If this is the case, can you explain why? What exactly are your expectations?
Dude, Magnus! Honestly!
It’s a SUM with an infinite number of terms!! That’s what convergence fucking is! A fucking SUM!!
Not a sequence, not a series! It’s a fucking SUM! A SOLUTION to the fucking additive infinite series!
You never addressed the argument that proves infinite and finite behave differently in anything resembling a rational manner.
It is a mathematical FACT that infinite sets behave differently when you remove something (and notice, when I pointed out that when you “add” to an infinite set, it’s so absurd that not even YOU are arguing that!), so the only argument you think you have is removal!
This has been explained to you!
If you remove the first one:
Boy —>
Boy —> clone
Boy —> clone
Etc...
All that NEED fucking occur is that all the boys take ONE step forward, and EVERYONE is holding hands again. This is IMPOSSIBLE!! With finite sets!!
Impossible!!! It’s a fucking PROOF that the infinite works differently than the finite!!
You figured that out. That it disproved you.
So what did you do? You ignored it and then posted this:
Boy
Boy —> clone
Boy
Boy —> clone
Boy
Boy —> clone
Etc...
And I jumped in and said “if you move the first boy up one step and then the bottom two (now) up one step and the (now) bottom three up one step Etc... all at once, everyone will still be holding hands again! But only in infinity is this a FACT!! If this is finite, it’s impossible to do this! Thus: infinite and finite WORK differently!
And you know what ?
You blew me off!
Ecmandu
ILP Legend
Posts: 10869
Joined: Thu Dec 11, 2014 1:22 am
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:It’s a SUM with an infinite number of terms!! That’s what convergence fucking is! A fucking SUM!
That's not what convergence is.
https://en.wikipedia.org/wiki/Convergent_series
Wikipedia wrote:A series is convergent if the sequence of its partial sums $$(S_{1},S_{2},S_{3},\dots)$$ tends to a limit; that means that the partial sums become closer and closer to a given number when the number of their terms increases. More precisely, a series converges, if there exists a number $$\ell$$ such that for every arbitrarily small positive number $$\varepsilon$$, there is a (sufficiently large) integer $$N$$ such that for all $$n\geq N$$,
$$\left|S_{n}-\ell \right\vert <\varepsilon .$$
This means that the infinite sum $$0.9 + 0.09 + 0.009 + \cdots$$ converges to $$1$$. (Which does not mean that it is equal to $$1$$.)
Magnus Anderson
Philosopher
Posts: 4642
Joined: Mon Mar 17, 2014 7:26 pm
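For reference, applying the quoted definition to this series: the partial sums are

$$S_n = \sum_{k=1}^{n} \frac{9}{10^k} = 1 - 10^{-n},$$

so for any $$\varepsilon > 0$$, choosing $$N$$ with $$10^{-N} < \varepsilon$$ gives $$|S_n - 1| < \varepsilon$$ for all $$n \geq N$$; the series therefore converges to 1 in the precise sense defined above.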
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
Ecmandu wrote:It’s a SUM with an infinite number of terms!! That’s what convergence fucking is! A fucking SUM!
That's not what convergence is.
https://en.wikipedia.org/wiki/Convergent_series
Wikipedia wrote:A series is convergent if the sequence of its partial sums $$(S_{1},S_{2},S_{3},\dots)$$ tends to a limit; that means that the partial sums become closer and closer to a given number when the number of their terms increases. More precisely, a series converges, if there exists a number $$\ell$$ such that for every arbitrarily small positive number $$\varepsilon$$, there is a (sufficiently large) integer $$N$$ such that for all $$n\geq N$$,
$$\left|S_{n}-\ell \right\vert <\varepsilon .$$
This means that the infinite sum $$0.9 + 0.09 + 0.009 + \cdots$$ converges to $$1$$. (Which does not mean that it is equal to $$1$$.)
It does mean that it’s a SUM!!!
YOU’RE the one who used the word “sum” incorrectly, not me!
But here you are AGAIN nit-picking over stupid shit and avoiding arguments that have to do with either:
1.) 0.999... = 1 (or not)
2.) orders of infinity exist (or not)
Ecmandu
ILP Legend
Posts: 10869
Joined: Thu Dec 11, 2014 1:22 am
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:
Magnus Anderson wrote:
Ecmandu wrote:It’s a SUM with an infinite number of terms!! That’s what convergence fucking is! A fucking SUM!
That's not what convergence is.
https://en.wikipedia.org/wiki/Convergent_series
Wikipedia wrote:A series is convergent if the sequence of its partial sums $$(S_{1},S_{2},S_{3},\dots)$$ tends to a limit; that means that the partial sums become closer and closer to a given number when the number of their terms increases. More precisely, a series converges, if there exists a number $$\ell$$ such that for every arbitrarily small positive number $$\varepsilon$$, there is a (sufficiently large) integer $$N$$ such that for all $$n\geq N$$,
$$\left|S_{n}-\ell \right\vert <\varepsilon .$$
This means that the infinite sum $$0.9 + 0.09 + 0.009 + \cdots$$ converges to $$1$$. (Which does not mean that it is equal to $$1$$.)
It does mean that it’s a SUM!!!
YOU’RE the one who used the word “sum” incorrectly, not me!
But here you are AGAIN nit-picking over stupid shit and avoiding arguments that have to do with either:
1.) 0.999... = 1 (or not)
2.) orders of infinity exist (or not)
Converges to means the same exact thing as “equals”
You have arguments to look at!
Ecmandu
ILP Legend
Posts: 10869
Joined: Thu Dec 11, 2014 1:22 am
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:If you remove the first one:
Boy —>
Boy —> clone
Boy —> clone
Etc...
All that NEED fucking occur is that all the boys take ONE step forward, and EVERYONE is holding hands again.
[..]
You figured that out. That it disproved you.
So what did you do? You ignored it.
I did not ignore it. I responded to it by stating that it's not something that you can do because it is strictly forbidden by your previous claims.
Let's go back to page 98 where I stated:
Magnus wrote:We started with the following situation:
Boy1 -> Clone1
Boy2 -> Clone2
Boy3 -> Clone3
etc
We put the two sets in one-to-one correspondence. We paired every boy with exactly one clone and every clone with exactly one boy. This means that every boy is paired (which means there are no unpaired boys) and that every clone is paired (which means there are no unpaired clones.)
Once you remove Clone1 from the set of clones, you get the following situation:
Boy1
Boy2 -> Clone2
Boy3 -> Clone3
etc
Boy1 is now unpaired because we removed the clone he was paired with. At this point, there is no one-to-one correspondence between the two sets. In order to restore it, there must be a clone in the set of clones that is not paired -- an unpaired clone. But there are NO unpaired clones. We STATED it earlier. And if there were unpaired clones, that would mean there was no one-to-one correspondence in the first place. But we did put the two sets in one-to-one correspondence, didn't we?
A possible way out is to say that by removing Clone1 a new clone is generated. But the problem with this is . . . that's not what the word "remove" means. To remove a clone does not mean to remove a clone and add a new one.
Another possible way out is to say that there is no need for an unpaired clone to exist. You can just pair Boy1 with one of the paired clones. But the result of that wouldn't be a one-to-one correspondence. You'd have a clone paired with TWO boys. One-to-one correspondence requires that every clone is paired with EXACTLY ONE boy.
Note the bolded part.
In order to restore one-to-one correspondence between the two sets, there must be an unpaired clone to pair with an unpaired boy. But there is no such a clone. All of the clones are already paired. Thus, regardless of how you move your clones, you cannot restore one-to-one correspondence.
You responded to this by saying that the word "infinity" refers to a never-ending process of increase which means that new clones are added continually. So when we remove a clone, a new one is added automatically.
And my response to this was that the word "infinity" does not refer to a never-ending process of increase (that it does not refer to a process at all.)
Magnus Anderson
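In standard set-theoretic terms, the "step forward" move is just a re-pairing, and it can be written down as a function (textbook reasoning added for reference, not either poster's wording):

$$f(\mathrm{Boy}_n) = \mathrm{Clone}_{n+1}, \qquad n = 1, 2, 3, \dots$$

Every boy is sent to exactly one of the remaining clones, and every remaining clone is hit by exactly one boy, so $$f$$ is a one-to-one correspondence between the full set of boys and the clones left after Clone1 is removed; no new clone has to be generated and no clone is shared.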
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:Converges to means the same exact thing as “equals”
That's not true.
Magnus Anderson
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
Ecmandu wrote:Converges to means the same exact thing as “equals”
That's not true.
You’re still doing it! You’re ego is invested in nit-picking and not arguments!!!
There is a difference between ‘converges to’ (which is convergence) and ‘converges towards’ which is not convergence. But!! Even that’s a contradiction because the word convergence IN AND OF ITSELF is defined as the finite conclusion of a sequence or series. Infinite or not.
Ecmandu
### Re: Is 1 = 0.999... ? Really?
It's part of your argument that infinite sequences are both finite and infinite sequences.
That in turn is part of your argument that infinite sequences are algorithms.
That in turn is part of your argument that the word "infinity" refers to a never-ending process of increase.
That in turn is part of your argument that infinities do not come in sizes.
Magnus Anderson
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:There is a difference between ‘converges to’ (which is convergence) and ‘converges towards’ which is not convergence. But!! Even that’s a contradiction because the word convergence IN AND OF ITSELF is defined as the finite conclusion of a sequence or series. Infinite or not.
Not true.
Magnus Anderson
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
It's part of your argument that infinite sequences are both finite and infinite sequences.
That in turn is part of your argument that infinite sequences are algorithms.
That in turn is part of your argument that the word "infinity" refers to a never-ending process of increase.
That in turn is part of your argument that infinities do not come in sizes.
This part is transitive:
1/9 implies 0.111...
0.111... implies 1/9
If they both imply each other, they are equalities.
Ecmandu
### Re: Is 1 = 0.999... ? Really?
I have no idea what that means.
Magnus Anderson
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:I have no idea what that means.
And that’s why this debate is over. Because you don’t understand, really, much of anything said here!
But let me be kind to you for a moment!
2+3=5
3+2=5
That means 2 and 3 are transitive: they mean the same thing!
I’ve seen you write a bunch of fancy symbols, but you don’t even understand kindergarten math!
That’s why we are butting heads here!
Ecmandu
### Re: Is 1 = 0.999... ? Really?
This isn't supposed to be a contest of beliefs but a cooperative effort to resolve disagreements. (But then again, this is a forum, so pretty much everything anyone does here is some sort of competition where people try to prove themselves to be the smartest guy in the room.)
Ecmandu wrote:2+3=5
3+2=5
That means 2 and 3 are transitive: they mean the same thing!
What do you mean by "transitive"?
https://en.wikipedia.org/wiki/Transitive_relation
Wikipedia wrote:In mathematics, a homogeneous relation R over a set X is transitive if for all elements a, b, c in X, whenever R relates a to b and b to c, then R also relates a to c.
Either way, it's definitely not true that $$2$$ and $$3$$ mean the same thing.
Magnus Anderson
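For reference, the two properties being run together in this exchange, stated side by side (textbook definitions, not from the thread):

$$\text{transitivity of } =:\ a = b \text{ and } b = c \implies a = c; \qquad \text{commutativity of } +:\ a + b = b + a.$$

From $$2+3=5$$ and $$3+2=5$$, transitivity of equality gives $$2+3 = 3+2$$, which is commutativity of addition; it does not make $$2$$ and $$3$$ mean the same thing.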
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:This isn't supposed to be a contest of beliefs but a cooperative effort to resolve disagreements. (But then again, this is a forum, so pretty much everything anyone does here is some sort of competition where people try to prove themselves to be the smartest guy in the room.)
Ecmandu wrote:2+3=5
3+2=5
That means 2 and 3 are transitive: they mean the same thing!
What do you mean by "transitive"?
https://en.wikipedia.org/wiki/Transitive_relation
Wikipedia wrote:In mathematics, a homogeneous relation R over a set X is transitive if for all elements a, b, c in X, whenever R relates a to b and b to c, then R also relates a to c.
Either way, it's definitely not true that $$2$$ and $$3$$ mean the same thing.
Magnus,
I have to admit, at this point, I enjoy teaching you because you don’t quit!
Transitive (strictly speaking) (as an example)
Is:
a*b = b*a
I gave you a more advanced version in the last post; what I should have said is that:
2+3 = 3+2
3+2 = 2+3
Etc...
When you introduce a new variable (such as “5”) (c), it becomes a different term than purely transitive; Wikipedia is wrong.
Ecmandu
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:Transitive (strictly speaking) (as an example)
Is:
a*b = b*a
That looks like commutativity.
Magnus Anderson
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:You never addressed the argument that proves infinite and finite behave differently in anything resembling a rational manner.
Here it is:
viewtopic.php?f=4&t=190558&p=2768316#p2768299
And you are ignoring it (:
Magnus Anderson
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
Ecmandu wrote:Transitive (strictly speaking) (as an example)
Is:
a*b = b*a
That looks like commutativity.
Oh man, that’s embarrassing for me.
You have to understand that I had brain damage (was in a coma for four hours) because of a head injury.
I went from being a super-genius to just your run-of-the-mill genius.
Yes, your neurons were not misfiring on this!
It’s communicative!
Ecmandu
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:It’s communicative!
You mean commutative (:
Magnus Anderson
### Re: Is 1 = 0.999... ? Really?
Magnus Anderson wrote:
Ecmandu wrote:It’s communicative!
You mean commutative (:
*chuckles*
Ecmandu
### Re: Is 1 = 0.999... ? Really?
You know Magnus,
Brain damage did not impair my logic, just my memory.
The link you just sent me implies that I’m not allowed to make ANY argument that shows FOR A FACT that infinite and finite behave differently (supposedly (according to you) by my own reasoning).
Your argument about me contradicting myself by having every boy step forward and still all be holding hands is a fantasy of yours! It violates YOUR reasoning! Not what I’ve presented in this thread.
You know why I know I’ll win this debate?
Because I know god doesn’t exist.
Ecmandu
### Re: Is 1 = 0.999... ? Really?
Ecmandu wrote:You know Magnus,
Brain damage did not impair my logic, just my memory.
The link you just sent me implies that I’m not allowed to make ANY argument that shows FOR A FACT that infinite and finite behave differently (supposedly (according to you) by my own reasoning).
Your argument about me contradicting myself by having every boy step forward and still all be holding hands is a fantasy of yours! It violates YOUR reasoning! Not what I’ve presented in this thread.
You know why I know I’ll win this debate?
Because I know god doesn’t exist.
Let me put this to you a different way.
There is a highest order of cardinality, which in layperson's terms means “the infinite cardinal”
This is a proof of god.
Cantor knew it too.
Our every sentence in this thread is also about whether god exists or not!
Very high stakes for lots of people.
Ecmandu
# C++ Problem With Room Generation For Rougelike Game
Hello,
I'm programming a simple rougelike video game for fun. I'm running into a very difficult problem with my room generation code, specifically the part that deals with the maze-like hallways that connect the randomly placed rooms inside a dungeon using the depth-first search method. The code I've written is kind of long and probably hard to read, but I seriously cannot figure out what is causing the issue, so please try to bear with me.
Below is the header file for the Floor class, which generates the floor of the dungeon.
#ifndef FLOOR_H_
#define FLOOR_H_
#include <iostream>
#include <vector>
#include <algorithm>
#include "Cell.h"
class Floor {
private:
int attempts; //Amount of attempts it tries to place a room before giving up. More attempts = more cramped.
int maxdoor; //Maximum amount of doors on each room
int maxroom; //Maximum number of rooms. 0 for 999.
int maxheight, minheight;
int maxwidth, minwidth;
int maxbigroom;
std::vector<std::vector<Cell> > region; //A collection of regions. A region is a collection of cells (rooms & hallway)
std::vector<Cell> temp_region; //Used to fill the region.
std::vector<Cell> temp_maze;
std::vector<Cell> maze;
public:
Floor();
void init(int attempts, int maxdoor, int maxroom, int maxwidth, int minwidth, int maxheight, int minheight, int maxbigroom);
void gen(Cell cell[120][60], SDL_Texture* tex);
bool check_walls(Cell cell[120][60], int* x, int* y);
};
#endif
The main thing that is important in this header is the temp_maze vector, which is used to keep track of the current section of the maze that has been carved out, so that when the maze hits a dead end it can backtrack. Cell is a class that represents a cell in the 2D grid that the player can move on. For the purposes of this post, just think of each Cell as having an X integer, a Y integer, and a bool to keep track of whether or not the cell is opened up.
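For readers following along, a minimal stand-in for that description might look like this (a hypothetical sketch based only on the paragraph above; the real class in the repository also carries SDL texture state and the return_rect() accessor used later):

struct Cell {
    int x = 0, y = 0;      // position in the 2D grid
    bool blocked = true;   // true until the maze carver opens this cell up

    bool return_blockade() const { return blocked; } // accessor names taken from the post
    void set_bloc(bool b) { blocked = b; }
};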
Below is the part of the gen function that deals with the hallway generation.
//Hallway Generation
int cur_x, cur_y; //Current cell being tested
bool time_to_break;
for (int i = 1; i < 119; i++) {
for (int j = 1; j < 59; j++) {
if (cell[i][j].return_blockade() == true
&& cell[i + 1][j].return_blockade() == true
&& cell[i][j + 1].return_blockade() == true
&& cell[i - 1][j].return_blockade() == true
&& cell[i][j - 1].return_blockade() == true) { //Start of a path
cur_x = i;
cur_y = j;
while (true) {
cell[cur_x][cur_y].set_bloc(false); //Opens up cell
cell[cur_x][cur_y].set_tex(tex);
cell[cur_x][cur_y].set_default_tex(tex);
temp_maze.push_back(cell[cur_x][cur_y]);
if (check_walls(cell, &cur_x, &cur_y) == false) {
std::cout << "Entering nospace" << std::endl;
std::cout << "Size of k: " << temp_maze.size()
<< std::endl;
for (int k = temp_maze.size() - 1; k > -1; k--) {
int xte = temp_maze[k].return_rect().x / 15;
int yte = temp_maze[k].return_rect().y / 15;
std::cout << "k: " << k << std::endl;
std::cout << "rect_x: "<< temp_maze[k].return_rect().x<< " rect_y : "<< temp_maze[k].return_rect().y<< std::endl;
std::cout << "x_te: " << xte << " y_te: " << yte << std::endl;
if (check_walls(cell, &xte, &yte) == true) {
cur_x = xte;
cur_y = yte;
std::cout << "Exiting nospace via newblock"
<< std::endl;
break;
}
if (k == 0) {
time_to_break = true;
std::cout << "Exiting nospace via noviableblock"
<< std::endl;
break;
}
}
}
if (time_to_break == true) {
time_to_break = false;
temp_maze.clear();
break;
}
}
}
}
}
The broad idea is that, starting from the top-left cell, we scroll through the grid until we find a cell that is still blocked and whose four neighbors are all blocked as well (an untouched stretch of wall). Then we open that cell up, add it to the temp_maze vector, and check whether it has any viable neighbors. If it does, it moves cur_x and cur_y to that new cell and repeats. If it doesn't have any viable neighbors, then it goes back through the temp_maze vector until it either finds a cell that has viable neighbors or it reaches the end of the vector. This is accomplished with the check_walls function.
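For comparison, this kind of backtracking is usually written with an explicit stack, which keeps the dead-end handling in one place. A compressed sketch under assumed helper names (carve and viable_neighbor are hypothetical, not functions from the post):

// Stack-based depth-first maze carving -- illustrative sketch only.
std::vector<Cell*> path;
path.push_back(start);
carve(start);                                 // open the starting cell
while (!path.empty()) {
    Cell* cur = path.back();
    if (Cell* next = viable_neighbor(cur)) {  // assumed to return nullptr at a dead end
        carve(next);                          // open the neighbor
        path.push_back(next);                 // go deeper
    } else {
        path.pop_back();                      // dead end: backtrack one step
    }
}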
bool Floor::check_walls(Cell cell[120][60], int* x, int* y) {
int ori_x = *x;
int ori_y = *y;
std::cout << "Entering checkwalls with x: " << ori_x << " y: " << ori_y
<< std::endl;
if (cell[ori_x][ori_y].return_blockade() == true) {
std::cout << "Checkwalls returns false due to being blocked"
<< std::endl;
return false;
}
while (true) {
*x = ori_x;
*y = ori_y;
bool upblock, rightblock, downblock, leftblock;
int sidecount = 0;
int dir = rand() % 4;
if (dir == 0) {
*y = (*y) - 1;
} else if (dir == 1) {
*x = (*x) - 1;
} else if (dir == 2) {
*y = (*y) + 1;
} else if (dir == 3) {
*x = (*x) + 1;
}
if (*x <= 0 || *y <= 0) {
sidecount = 8;
dir = 5;
*x = ori_x;
*y = ori_y;
}
if (cell[*x + 1][*y].return_blockade() == false) {
sidecount++;
}
if (cell[*x - 1][*y].return_blockade() == false) {
sidecount++;
}
if (cell[*x][*y + 1].return_blockade() == false) {
sidecount++;
}
if (cell[*x][*y - 1].return_blockade() == false) {
sidecount++;
}
if (sidecount <= 1) {
std::cout << "Checkwalls returns true" << std::endl;
return true;
} else {
if (dir == 0) {
upblock = true;
} else if (dir == 1) {
leftblock = true;
} else if (dir == 2) {
downblock = true;
} else if (dir == 3) {
rightblock = true;
}
}
if (upblock == true && leftblock == true && downblock == true
&& rightblock == true) {
*x = ori_x;
*y = ori_y;
std::cout << "Checkwalls returns false" << std::endl;
return false;
}
}
}
The basic idea here is that if one of the neighbors is available, then it moves x & y to that cell and returns true, and if there are none, then it returns false.
The problem is that when I run this, the temp_maze vector seems to get filled with the same cell, and time_to_break never gets set to true, as the vector seems to never end. Once again, I understand this is kind of a lot, but I have looked at this for a couple of days now and have no idea what I am doing wrong, so any help would be very much appreciated.
---
Are you able to provide source code that I can compile and run? (People are usually more willing to assist if they can compile your code and see exactly what is going on.)
Normally when you're dealing with such an issue you should be running your debugger to see why temp_maze isn't getting filled with different cells, then step through it. You will also find out what condition is being set that allows it to skip time_to_break.
You also should make it a habit to always set variables. I can see the following: bool time_to_break; I do not see where you've set time_to_break to false by default. Assuming I'm reading your code right, you need to set bool time_to_break = false; It's just a good habit to get into.
You should also set cur_x and cur_y to 0. Always set your variables; do not assume they will be 0. That's simply not true: they hold garbage, and that will have unintended effects if you're trying to use them prior to setting them.
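Concretely, the declarations at the top of the hallway-generation block would then read (same variables as in the original post, just given explicit initial values):

int cur_x = 0, cur_y = 0;     // current cell being tested
bool time_to_break = false;   // never read before it is written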
Are you using Visual Studio by chance? Normally you would get a flag on compile if your bool isn't set.
If you're able to reply back with something I can compile, I will run it through and let you know the problem.
---
Side-note: roguelike != rougelike.
---
5 hours ago, Rutin said:
[..]
Thanks for replying Rutin!
Here is the github repository for the project: https://github.com/Starfruit64/Paladin
I'm not 100% sure if this is what you were asking for, so if it's not, just say. And no, I'm not using Visual Studio, I'm using Eclipse. (P.S. I'll keep what you said about initializing variables in mind.)
---
Thank you for posting the code. I need to set up my SDL2 to run this, but I did take a quick look and got a basic running version up.
Just to clarify, are you saying that temp_maze.push_back(cell[cur_x][cur_y]); will always add the same entry?
I looked briefly at your code, and it appears you have a few problems.
1. I noticed you do set every cell's blockade to true initially, but all your functions appear to use set_bloc(false); without ever setting it to true again. This becomes a problem for #2.
2. In your Hallway Generation you have the following method call: if (check_walls(cell, &cur_x, &cur_y) == false). Once you enter this function, you will never be able to return false, because the following block of code is never reached.
if (cell[ori_x][ori_y].return_blockade() == true) {
std::cout << "Checkwalls returns false due to being blocked"
<< std::endl;
return false;
}
and lower down
if (upblock == true && leftblock == true && downblock == true
&& rightblock == true) {
*x = ori_x;
*y = ori_y;
std::cout << "Checkwalls returns false" << std::endl;
return false;
}
Because you can never satisfy check_walls to false, it essentially just continues to loop forever through your functions.
3. This is extremely important. Do not use while(true) loops without breaking! This is not proper, and if you're going to use something like this, you forgot to use break; as written, any return statement just leaves the function right away. Use something like while(looping) instead.
Once this code hits:
if (sidecount <= 1) {
std::cout << "Checkwalls returns true" << std::endl;
return true;
}
It will just return out of the function as true, which again will not satisfy: if (check_walls(cell, &cur_x, &cur_y) == false)
This is why you can never satisfy that statement to go deeper. You never return false in check_walls, and all the blockades being checked are always false because they appear to have been set to false earlier.
Let me know if I'm on to something here, it's been a long day so I couldn't go too much in depth with your code.
EDIT: I forgot to mention, in Floor.cpp you also did not initialize your boolean variables again.
bool upblock, rightblock, downblock, leftblock;
Edited by Rutin
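Point 3 above, sketched as a shape (illustrative only; done stands in for whatever exit condition applies):

bool looping = true;
while (looping) {
    // ... one step of work ...
    if (done)
        looping = false;  // the exit is explicit and easy to audit
}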
---
On 1/29/2018 at 9:28 PM, Rutin said:
[..]
I appreciate your help, but I don't think this is the issue. if (check_walls(cell, &cur_x, &cur_y) == false) is actually satisfied. Whenever I run the program, in fact, it seems it always returns false. The check_walls function is used to test if there are any available neighbors of the current cell. If there are, it changes cur_x and cur_y. If there are none, it is supposed to iterate through the vector until it either finds a cell that returns true when passed into check_walls, or until it runs out of cells.
if (upblock == true && leftblock == true && downblock == true
&& rightblock == true) {
*x = ori_x;
*y = ori_y;
std::cout << "Checkwalls returns false" << std::endl;
return false;
}
I don't understand why you say this can't be satisfied. If all four of the cells surrounding the current cell are blocked, it should set all four of those bools to true and return false.
---
I ran your code through a debugger, and this is the information I got from each breakpoint. I set several breakpoints with condition checks throughout your code, and I can never satisfy if (check_walls(cell, &cur_x, &cur_y) == false).
Essentially your program never leaves the method floor.gen(board, floor_tex); in Main.cpp; it gets stuck while looping through. When I say your arguments are not being satisfied, it's based on running over 20,000+ loops through if (check_walls(cell, &cur_x, &cur_y) == false); I was not able to hit any of the return false blocks.
If you're getting different results, then I really don't know what to say as my debugging is telling me another story. Unless you can re-package this for Visual Studio and upload. Maybe something is going wrong when I ported it over to Visual Studio, but none of your code was altered, and I have SDL2 with all your extended libraries loading just fine.
Are you able to hit the draw code on your version?
Edited by Rutin
Extract Section number from Equation reference
How can I obtain the number of the Section in which an Equation (or Figure) appears? For instance, I have the reference 3.2, which refers to the second (2nd) Equation in Section 3. How can I extract the 3 (or the 2), such that it is still clickable (using hyperref)?
Preferably it should return "2nd" when it is the second Equation, "3rd" when it's the third one, and so on.
MWE:
\documentclass[10pt,a4paper,fleqn]{article}
\usepackage{amsmath}
\usepackage{lipsum}
\usepackage{hyperref}
\numberwithin{equation}{section}
\setlength{\parindent}{0pt}
\begin{document}
\section{Start}
\lipsum[1]
\section{Halfway}
The equation \eqref{ThisOne} is the (1st, 2nd, 3rd, ...?) Equation in Section (1, 2, 3, ...?).
\section{End}
A famous formula:
$$x_1, x_2 = \frac{ -b \pm \sqrt{b^2 - 4ac} }{ 2a }$$
And another one:
$$\sin^2(\varphi) + \cos^2(\varphi) = 1 \label{ThisOne}$$
\end{document}
You can do this easily with the help of the refcount (to turn the reference number into a string), xstring (to extract the number before and after the dot), and engord (to get the desired format as ordinal number) packages.
In the following example, I defined the commands \SecNum and \EqNum; the first one gives the section number of the reference and the second one gives the equation number in ordinal notation and turns it into a hyperlink to the given equation:
\documentclass[10pt,a4paper,fleqn]{article}
\usepackage{amsmath}
\usepackage{refcount}
\usepackage{xstring}
\usepackage{engord}
\usepackage{xspace}
\usepackage{lipsum}
\usepackage{hyperref}
\numberwithin{equation}{section}
\setlength{\parindent}{0pt}
\newcommand\equ{}
\newcommand\EqNum[1]{%
\StrBehind{\getrefnumber{#1}}{.}[\equ]%
\hyperref[#1]{\engordnumber{\equ}\xspace}%
}
\newcommand\SecNum[1]{%
\StrBefore{\getrefnumber{#1}}{.}\xspace%
}
\begin{document}
\section{Start}
\lipsum[1]
\section{Halfway}
The equation \eqref{ThisOne} is the \EqNum{ThisOne}~Equation in Section \SecNum{ThisOne}.
\section{End}
A famous formula:
$$x_1, x_2 = \frac{ -b \pm \sqrt{b^2 - 4ac} }{ 2a }$$
And another one:
$$\sin^2(\varphi) + \cos^2(\varphi) = 1 \label{ThisOne}$$
\end{document}
I added \xspace from the xspace package to take care of proper spacing.
Thanks, this seems a very nice solution. But for some reason, I'm unable to compile it (using PDFLaTeX): ! Missing number, treated as zero., l.29 ...ion \eqref{ThisOne} is the \EqNum{ThisOne}. For some reason, it expects another argument after ThisOne? – Ailurus May 25 '12 at 9:34
@Ailurus this will happen the first time you process your document with a new \EqNum command (the string for the reference has not yet been generated); press "q" when prompted and process the document again. – Gonzalo Medina May 25 '12 at 13:32
Thanks, that solved the problem (I had to run it manually, normally I use a shortcut within my editor). One other thing, why is there so much whitespace around 2^{nd}? – Ailurus May 25 '12 at 14:20
@Ailurus problem solved! It was just some spurious blank spaces in the code that I've corrected in my updated answer. – Gonzalo Medina May 25 '12 at 14:45
Ah I see. Now I also know what the %-sign means at the end of a line. Thanks! – Ailurus May 25 '12 at 14:55
The zref package allows for an extension of the regular 2-part \label-\ref system into any number of properties/elements. The following MWE creates a new property list (called special) and adds section and equation counters (in \arabic format) to this list for referencing:
\documentclass[10pt,a4paper,fleqn]{article}
\usepackage{amsmath}% http://ctan.org/pkg/amsmath
\usepackage{lipsum}% http://ctan.org/pkg/lipsum
\usepackage{hyperref}% http://ctan.org/pkg/hyperref
\usepackage{zref}% http://ctan.org/pkg/zref
\makeatletter
\zref@newlist{special}% Create a new property list called special
\zref@newprop{section}{\arabic{section}}% Section property holds \arabic{section}
\zref@newprop{equation}{\arabic{equation}}% Equation property holds \arabic{equation}
\newcommand*{\eqnref}[1]{\zref@extractdefault{#1}{equation}{??}}
\newcommand*{\secref}[1]{\zref@extractdefault{#1}{section}{??}}
\newcommand*{\spref}[2][section]{\zref@extractdefault{#2}{#1}{??}}
\newcommand*{\splabel}[1]{\zref@labelbylist{#1}{special}}%
\makeatother
\numberwithin{equation}{section}
\setlength{\parindent}{0pt}
\begin{document}
\section{Start}
\lipsum[1]
\section{Halfway}
The equation \eqref{ThisOne} is Equation~\eqnref{ThisOne} in Section~\secref{ThisOne}.
\section{End}
A famous formula:
$$x_1, x_2 = \frac{ -b \pm \sqrt{b^2 - 4ac} }{ 2a }$$
And another one:
$$\sin^2(\varphi) + \cos^2(\varphi) = 1 \label{ThisOne}\splabel{ThisOne}$$
\end{document}
zref labels are set using \splabel, while equations are referenced using \eqnref and sections using \secref. These are specific implementations of a more general \spref[<type>]{<refname>} (where <type> defaults to section). Since the macros used are expandable, fmtcount can also be used to provide ordinal references:
\usepackage{fmtcount}% http://ctan.org/pkg/fmtcount
%...
\newcommand*{\eqnref}[1]{\ordinalnum{\zref@extractdefault{#1}{equation}{??}}}
Hyper-referencing is also possible, if needed. Additionally, with
\let\oldlabel\label
and a redefinition along the lines of
\renewcommand*{\label}[1]{\oldlabel{#1}\splabel{#1}}
in your preamble, you only have to use one \label command to obtain the desired referencing output rather than \label + \splabel.
Thanks Werner, nice to see such different solutions for my question. The only drawback seems to be that one has to use a different kind of label (\splabel), so it is not immediately applicable to an existing document. – Ailurus May 25 '12 at 9:42
I've updated it to make it more "immediately applicable." Now only a single \label works. – Werner May 25 '12 at 16:07
## MIT study finds carbon prices more cost-effective than fuel economy regs at reducing CO2 emissions; fuel economy regs more efficient at reducing fuel use
##### 12 October 2015
Researchers at the MIT Joint Program on the Science and Policy of Global Change have compared the worldwide economic, environmental, and energy impacts of currently planned fuel economy standards (extended to the year 2050) with those of region-specific carbon prices designed to yield identical CO2 emissions reductions.
Their study, which appears in the Journal of Transport Economics and Policy, finds that such stringent fuel economy standards would cost the economy 10% of global gross domestic product (GDP) in 2050, compared with a 6% cost under carbon pricing. This finding reinforces economists’ contention that improving the efficiency of motor vehicles through fuel economy standards will yield significantly less CO2 emissions reduction per dollar than an economy-wide instrument that encourages such cutbacks where they are cheapest—principally in the electric power and industrial sectors.
However, the fuel economy standards modeled in the study did prove beneficial in terms of fuel consumption: They reduced fuel used in passenger vehicles by 47% relative to a no-policy scenario in 2050, versus only 6% under carbon pricing.
Many developed countries are choosing very expensive ways to reduce CO2 emissions, but if that’s a top priority, they should go with a price on carbon. If they’re more focused on energy independence, fuel economy standards can deliver, but a tax on gasoline would be more cost-effective. What makes our study unique is that we used a global model that captures market linkages around the world, rather than within a single nation, region or sector.
—lead author Valerie Karplus, assistant professor of global economics and management at the MIT Sloan School of Management
The new paper by Professor Karplus and her colleagues provides important new insights into the role of efforts by nations around the world to reduce petroleum use and greenhouse gas emissions from the transportation sector. The research shows that the often-used policy of requiring fuel economy improvements, while capable of reducing petroleum use, is significantly more expensive than other, economy-wide options which are more cost-effective at reducing greenhouse gas emissions.
—Jonathan Rubin, professor at the Margaret Chase Smith Policy Center and School of Economics at the University of Maine
To arrive at their findings, the researchers used the MIT Emissions Prediction and Policy Analysis (EPPA) model to simulate the impact of fuel economy and carbon pricing policies. The fuel economy scenario simulated the impacts of extending current fuel economy mandates past their expiration dates through 2050. The carbon pricing scenario consisted of a patchwork of national and regional cap-and-trade policies designed to achieve the same CO2 emissions reductions by 2050 as the fuel economy standards produced in each market.
An important feature of the study was its ability to capture, via the EPPA model, two major effects of national and regional fuel economy standards: rebound and leakage.
Adoption of more fuel-efficient vehicles, by decreasing fuel demand, also reduces the per-mile price of fuel as supply and demand balance in the market. This price reduction can lead to more driving in the market covered by the policy—known as the rebound effect—as well as in sectors and regions not covered by the policy—known as the leakage effect—because globally interlinked fuel markets cause prices to fall worldwide.
The model simulates not only rebound and leakage effects, but also the gradual adoption of new, more expensive vehicles and retirement of old ones; how vehicle owners navigate the tradeoff between using more fuel and purchasing a more efficient vehicle; the relationship between changes in household income and vehicle usage behavior; and the adoption of off-the-shelf and advanced, low-carbon technologies that increase miles per gallon.
The study also determined that by 2050, currently planned fuel economy standards would reduce CO2 emissions by about 4% relative to a no-policy scenario. Extending these standards past their deadlines through 2050 would decrease emissions by an additional 6%. These relatively modest reductions would come at a high cost.
Although it may be politically easier to repurpose or replicate commonly applied fuel economy standards to reduce CO2 emissions, the MIT analysis suggests that a coordinated approach that includes a price on CO2 will be far more effective at achieving this goal.
The EPPA model used in this study is supported by a consortium of government, industry, and foundation sponsors of the MIT Joint Program.
No $hit. Now all we need are some smart (and brave) politicians with the foresight and will to implement a carbon tax or even to raise the fuel taxes. Good luck with that.

Disclaimer: I have 4 degrees including a doctorate in ME from MIT. Pollution controls on ICE are mandatory, regardless of other approaches. But the simplest solution to reduce consumption is price (a tax at the purchase site). The implementation is quite simple: the existing taxing system should require few if any additional government employees. And the simplest solution is usually the best. All consumers understand cost; we use it for all other products on a daily basis. Simply announce an immediate $0.50 a gallon tax on motor fuel. Also announce that this tax will increase by $0.50 a gallon every 6 months until the price of motor fuel is in the $10.00 a gallon range.
Take the tax revenue and use it for infrastructure improvements and R&D on renewable energy sources.
It's easy to implement. Would work wonders on changing demand. Fix the highway system. And create a pathway to the new energy paradigm.
Just one more reason to adopt carbon trading on a national or multinational basis as a business-friendly way of reducing CO2 emissions. You cannot argue with science.
If gas were $10 I'd quit my job... sell my car and seek some sort of welfare. I'd probably walk 2 miles to work at the nearest McDonald's or something. And would you tax carbon-neutral liquid fuels? The other solution would be a major raise... like 20k a year at my job. Honestly, I hope they would get carbon-producing utilities taxed too... 12 cents is way too cheap... homeowners spend more energy on their homes than their cars. This needs to be addressed with a sharp increase in rates... maybe $1/kWh.
Create an issue, then create a market to profit from it... good work if you can get it.
A carbon tax paying to make incentives for solar panels and electric vehicles makes sense to me. We tax what we don't want then we create incentives for what we do want. That should be obvious to everyone.
Why not both!
A revenue neutral carbon pricing scheme is simple and more likely to be accepted by the public:
https://citizensclimatelobby.org/carbon-fee-and-dividend/
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoms/1177705987
## The Annals of Mathematical Statistics
### On the Mixture of Distributions
Henry Teicher
#### Abstract
If $\mathcal{F} = \{F\}$ is a family of distribution functions and $\mu$ is a measure on a Borel Field of subsets of $\mathcal{F}$ with $\mu(\mathcal{F}) = 1$, then $\int F(\cdot) d\mu (F)$ is again a distribution function which is called a $\mu$-mixture of $\mathcal{F}$. In Section 2, convergence questions when either $F_n$ or $\mu_k$ (or both) tend to limits are dealt with in the case where $\mathcal{F}$ is indexed by a finite number of parameters. In Part 3, mixtures of additively closed families are considered and the class of such $\mu$-mixtures is shown to be closed under convolution (Theorem 3). In Section 4, sufficient as well as necessary conditions are given for a $\mu$-mixture of normal distributions to be normal. In the case of a product-measure mixture, a necessary and sufficient condition is obtained (Theorem 7). Generation of mixtures is discussed in Part 5 and the concluding remarks of Section 6 link the problem of mixtures of Poisson distributions to a moment problem.
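As a concrete illustration of the normal case (my addition, not part of the abstract): if $\mathcal{F} = \{N(\theta, \sigma^2) : \theta \in \mathbb{R}\}$ and the mixing measure $\mu$ puts a $N(a, \tau^2)$ law on the location parameter $\theta$, the mixture is the distribution of a sum of two independent normals and is therefore again normal:
$$\int_{-\infty}^{\infty} \Phi\left(\frac{x-\theta}{\sigma}\right)\frac{1}{\tau}\,\varphi\left(\frac{\theta-a}{\tau}\right)\,d\theta = \Phi\left(\frac{x-a}{\sqrt{\sigma^2+\tau^2}}\right),$$
where $\varphi$ and $\Phi$ denote the standard normal density and distribution function.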
#### Article information
Source
Ann. Math. Statist. Volume 31, Number 1 (1960), 55-73.
Dates
First available in Project Euclid: 27 April 2007
Permanent link to this document
http://projecteuclid.org/euclid.aoms/1177705987
https://www.yaclass.in/p/english-language-cbse/class-7/poem-1480/dad-and-the-cat-and-the-tree-4600/re-7f5b74f4-81c4-4e26-bcf4-90b274a21609
### Theory:
So it’s smiling and smirking,
Smug as can be;
Still
Stuck
Up
The
Tree!
Explanation:
Stanzas 13 and 14 depict how the cat was happy and content after reaching the ground, whereas the child's father got stuck in the cat's place when he sought to aid it. The poem ended on a humorous note: the child's father, who had hoped to save the cat, eventually got himself into its position and became a comic figure.
The Child's father stuck on a tree
Meaning of Difficult words:
| S.No | Word | Meaning |
| --- | --- | --- |
| 1. | Smirking | Smiling at someone or something with satisfaction and joy |
| 2. | Smug | Showing much pride in achieving something |
Reference:
National Council of Educational Research and Training (2007). Honeycomb. Dad and cat and the tree - Kit Wright (pp.107-109). Published at the Publication Division by the Secretary, National Council of Educational Research and Training, Sri Aurobindo Marg, New Delhi.
https://kerodon.net/tag/02XG
# Kerodon
Corollary 7.1.6.7. Let $U: \operatorname{\mathcal{C}}\rightarrow \operatorname{\mathcal{D}}$ be an inner fibration of $\infty$-categories, let $B$ and $K$ be simplicial sets, and suppose we are given a lifting problem
$$\begin{gathered}\label{equation:relative-colimit-pointwise-existence}\tag{7.5} \xymatrix@R =50pt@C=50pt{ B \times K \ar [r]^-{ f } \ar [d] & \operatorname{\mathcal{C}}\ar [d]^{U} \\ B \times K^{\triangleright } \ar [r]^-{\overline{g}} \ar@ {-->}[ur]^{ \overline{f} } & \operatorname{\mathcal{D}}} \end{gathered}$$
Assume that, for each vertex $b \in B$, the restriction $f|_{\{ b\} \times K}$ can be extended to a $U$-colimit diagram $\overline{f}_{b}: K^{\triangleright } \rightarrow \operatorname{\mathcal{C}}$ satisfying $U \circ \overline{f}_{b} = \overline{g}|_{ \{ b\} \times K^{\triangleright } }$. Then the lifting problem (7.5) admits a solution $\overline{f}: B \times K^{\triangleright } \rightarrow \operatorname{\mathcal{C}}$ satisfying $\overline{f}|_{ \{ b\} \times K^{\triangleright } } = \overline{f}_{b}$ for each $b \in B$.
Proof. Apply Corollary 7.1.6.6 in the special case where $A = \operatorname{sk}_0(B)$ is the $0$-skeleton of $B$. $\square$
https://mathematica.stackexchange.com/questions/121923/using-output-of-ndsolve-in-further-ndsolve-calculations
# Using output of NDSolve in further NDSolve calculations [closed]
I am trying to use the interpolating functions found using NDSolve to solve a further differential equation where the first solutions depend on two arbitrary parameters.
As a simple example, suppose I have two coupled ODES that I can write as:
func[r_,u_]:=y''[x] == u*y'[x]+r^2*y[x]*x^3
sol[r_,u_]:= NDSolve[{func[r,u],y[0]==r,y'[0]==r*u},y,{x,0,50}]
func2[r_,u_]:= (z''[x]==y'[x]*r+u*y[x]*z'[x])/.sol[r,u]
sol2[r_,u_]:= NDSolve[{func2[r,u],z[0]==1,z'[0]==0},z,{x,0,50}]
The problem is that I get an error message when running the fourth line, telling me that replacing y[x] etc. is not a good replacement rule.
In reality, my system is a horrible non-linear set of coupled ODEs with one final equation that depends on the solutions of the others. Furthermore, r and u also control the boundary conditions, but I think this example suffices because I think the problems are caused by the fact that there is r and u dependence in the ODE.
I can see many posts where the output of NDSolve does not depend on any parameters, in which case it seems trivial to just define a function that does what I want e.g.
f1[x_]:=Evaluate[First[y[x]/.NDSolve[{y''[x]+y'[x]+x^3==0,y[0]==1,y'[0]==0},y,{x,0,50}]]]
sol=NDSolve[{z'[x]==x^3*y[x]+y'[x],z[0]==0},z,{x,0,50}]
would work if it were not for u.
I am also aware that I can solve both equations simultaneously, but the second equation is stiff so I would like to use a different method for it. Actually, the first set are DAEs whilst the last one is not, so splitting the two would help me a lot.
• How are you calling the function? – Feyre Aug 1 '16 at 15:29
• Thanks for the help guys. I actually wrote this code in by hand because my actual code is a lot more complicated and my real ODE is very long. Feyre, what do you mean how and I calling the function? – jsaxon Aug 1 '16 at 15:41
• sol2[r_,u_]:= is set delayed, so I assume you are calling it in some form of sol2[1,2] – Feyre Aug 1 '16 at 16:09
Well, I made some changes, but then realized I hadn't changed anything important. I shortened the integration to {x, 0, 2} to save time. The original code works with this change. I memoized sol so that it's computed only once for each {r, u}. You stick First@ on NDSolve[] to remove a set of braces, but it's not important.
Clear[func, func2, sol, sol2, x, y, r, u];
func[r_, u_] := y''[x] == u*y'[x] + r^2*y[x]*x^3;
sol[r_?NumericQ, u_?NumericQ] :=
sol[r, u] = (* saves the value so that it is computed once *)
NDSolve[{func[r, u], y[0] == r, y'[0] == r*u}, y, {x, 0, 2},
InterpolationOrder -> All]
func2[r_?NumericQ, u_?NumericQ] := (z''[x] == y'[x]*r + u*y[x]*z'[x]) /. sol[r, u]
sol2[r_, u_] := NDSolve[{func2[r, u], z[0] == 1, z'[0] == 0}, z, {x, 0, 2}]
sol2[2, 1]
https://math.stackexchange.com/questions/2042310/combinatorics-4-digit-even-number-no-repetition
# Combinatorics - 4 digit even number, no repetition
I'm trying to find the number of 4-digit even numbers such that no digit repeats.
This is simple, and the solution is 2296, as is explained in How many $4$ digit even numbers have all $4$ digits distinct?
I've tried solving it with complementary sets: "the number of 4-digit even numbers" minus "the number of 4-digit even numbers such that all the digits are the same".
However with the second approach I get 4996. What am I missing? can I find the solution using complementary sets?
If you're subtracting the cases in which all four digits are the same you are not removing all of the cases which contain a repeat. You also need to remove the cases in which $2$ digits are the same and $3$ digits are the same.
To get the desired answer (that is, the complement of the original count of four-digit even numbers with no repeats), we must count all the numbers in which at least one digit repeats. There are $9\cdot 10 \cdot 10 \cdot 5=4500$ four-digit even numbers with no restrictions; removing the $2296$ numbers with no repeats leaves $2204$ four-digit even numbers in which at least one digit is a repeat.
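For readers who want to sanity-check both counts, here is a minimal brute-force sketch (my addition, not part of the original answers):

```python
# Count 4-digit even numbers with all digits distinct, and the complement.
distinct = sum(1 for n in range(1000, 10000)
               if n % 2 == 0 and len(set(str(n))) == 4)
total_even = sum(1 for n in range(1000, 10000) if n % 2 == 0)

print(distinct)               # 2296: no digit repeats
print(total_even)             # 4500: all 4-digit even numbers
print(total_even - distinct)  # 2204: at least one digit repeats
```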
http://math.stackexchange.com/questions/393530/simplifying-fractions-ending-up-with-wrong-sign
# Simplifying fractions - Ending up with wrong sign
I've been trying to simplify $$1-\frac{1}{n+2}+\frac{1}{(n+2) (n+3)}$$ to get to $$1-\frac{(n+3)-1}{(n+2)(n+3)}$$ but I always end up with $$1-\frac{(n+3)+1}{(n+2)(n+3)}$$ Any ideas where I'm going wrong? Wolfram Alpha gets it into the correct form but doesn't show me the steps (even in the pro version).
Thanks
Everywhere there is a minus sign, replace it with plus a negative.
So with your original expression, try instead simplifying $$1+\frac{-1}{n+2}+\frac{1}{(n+2) (n+3)}$$ and you should be much less prone to error.
Great answer, thanks! – Daniel Wardin May 16 at 16:49
You just have the problem that while $$x-y+z = x-(y-z)$$
you are instead writing:$$x-y+z=x-(y+z)$$
This is very useful thing to remember! Unbelievable how I missed such a simple property! Wish I could select 2 answers as best. – Daniel Wardin May 16 at 16:50
\begin{align*} 1-\frac{1}{n+2}+\frac{1}{(n+2)(n+3)} &= \frac{(n+2)(n+3)}{(n+2)(n+3)} - \frac{(n+3)}{(n+2)(n+3)}+\frac{1}{(n+2)(n+3)} \\ &= \frac{(n+2)(n+3)-(n+3)+1}{(n+2)(n+3)}\\ &= \frac{(n^2+5n+6) -n-2 }{(n+2)(n+3)} \\ &= \frac{n^2+4n+4}{(n+2)(n+3)} \\ &= \frac{(n+2)^2}{(n+2)(n+3)} \\ &= \frac{n+2}{n+3} \\ \end{align*} Provided $n\neq -2$.
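As a quick machine check of the algebra above (my addition, not part of the answer), SymPy reproduces the simplified form:

```python
# Verify the simplification symbolically.
import sympy as sp

n = sp.symbols('n')
expr = 1 - 1/(n + 2) + 1/((n + 2)*(n + 3))
print(sp.simplify(expr))  # expected output: (n + 2)/(n + 3)
```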
http://quant.stackexchange.com/questions/3764/why-use-a-column-database-for-tick-bar-data
# Why use a column database for tick/bar data?
I often hear that column-oriented databases are the best choice method for storing time series data in finance applications. Especially by people selling expensive column-oriented databases.
Yet, at first glance it seems a poor choice. You want to append new ticks, or new bars, at the end (and you need to do this a lot and quickly). That is a classic row operation: you append to one file. In a column DB you have to update three files for a tick (timestamp/price/trade size), or five to six for a bar (datestamp, open, high, low, close, volume). (I said 5-6, as for regularly spaced bar data I suppose datestamp could be implicit from row number.)
For reading I don't normally want to just grab one column; I want to grab the whole bar so I can draw a candlestick (for instance). OK, I may just want the close column, or just want the volume column (but I still need two reads to also get the datestamps in a column-oriented DB, don't I?).
But what seems even more important is that when I want to read historical data I generally want to grab a sub-period, and that will be stored contiguously in the row-oriented DB.
Q1: Is there any good reason to go with column-oriented over row-oriented if all you store is trade ticks?
Q2: Is there any good reason to go with column-oriented over row-oriented if all you store is OHLCV bars?
Q3: If you think no for Q1 and Q2 what kind of columns do you need to have for column-oriented DBs to be the clearly superior choice?
UPDATE
Thanks to Chris Aycock for links to similar questions. Some of the reasoning why column-oriented DBs are better is still not making sense to me, but from the first part of http://quant.stackexchange.com/a/949/1587 I think people may be using row-oriented DBs differently. So, for the purposes of this question, please assume I have only one symbol per database table (as opposed to one huge table with a 'symbol' column). So, following the example in the above answer, the raw on-disk storage looks like:
09:30:01 | 164.05; 09:30:02 | 164.02; ...
This question gets asked a lot on here [ 1, 2 ]. – chrisaycock Jul 9 '12 at 11:04
Thanks @chrisaycock I had read one of those in my hunt yesterday, but the first link I'd missed and it was very informative (I'm still working through the linked 85-page PDF, but that looks useful too). I've added more information to my question to explain why I don't feel the linked-to answers fully answer my question. – Darren Cook Jul 10 '12 at 0:42
As to Q2: For a row database, I don't see any other viable option than one huge table with Symbol, Date, O, H, L, C, V, as columns. You'll need two indices built into this table: say a primary key index ordered first by Symbol and then by Date, and another index ordered first by Date and then by Symbol. Essentially, at some level, a column-oriented database will have to do something like this internally to make its operations reasonably efficient. But if you're not willing to have (or your row database can't handle) one huge table with a Symbol column, then you do need a column database. – JL344 Aug 4 '12 at 18:17
@JL344 You didn't mention the row-oriented approach of one table per symbol (see the UPDATE in my question). Is there a reason people are not using that approach? – Darren Cook Aug 5 '12 at 2:47
The approach of one table per symbol in essence is the column-oriented approach. Row-oriented databases just aren't designed to handle a huge number of tables as effectively as a huge number of rows in one table. They are built on the assumption that different tables in a database hold fundamentally different types of data, so any relations among different tables are ad hoc, and have to be joined up at query time. (The primary key is the row number, by the way. Better to think about what fundamentally makes the row unique rather than just assign an arbitrary autoincremented integer.) – JL344 Aug 5 '12 at 5:56
Like everything, which solution is most suitable depends completely on your specific case. But first, I think you confuse a couple of concepts here. One thing is how fast a DB can retrieve data/read. Another is storing raw data. And an entirely different issue is analytics and queries. Columnar databases shine at reading and writing raw time-series data. Columnar DBs are not good at performing analytics. Keep in mind that even KDB itself does not shine at aggregating data; KDB itself is just a smart file system with index structures. It's the built-in query language that adds a lot of firepower in terms of query capabilities. Please keep this in mind.
1) Yes, think about how you generally read data. Think about key/value, which is essentially what columnar databases are all about (edit: there is a very close connection, but they are not identical). You want to retrieve a specific point in time, or a time frame, and its associated values. Columnar DBs are very fast at handling such requests. Once such data is in memory it can be operated upon much faster.

2) Same here: essentially you want to read bars the same way as raw ticks or any other time series, for that matter. You want to acquire bars from Monday 9am to Tuesday 2pm. What's the difference here? You store each value in its own column.

3) You mean if I answered "yes" to Q1 and/or Q2? Columns are symbol, or symbol + open, or whatever you choose. Keys are date/time/ticks...
Remember what I said first: your use case is all that matters. If you constantly need to get prices/bars/... of many different symbols at a specific time point, then a row-based database cannot be beat (well, given you set up the schema in an intelligent way within an RDBMS). But if you pull out data over time for a single metric (or 4 metrics, such as o/h/l/c of bars), then a columnar database is way faster than an RDBMS. Why? Because I/O is the most expensive operation, and having to read only the columns needed is way faster than having to read whole rows. Keep in mind that your assertion that each column is stored in a different file is incorrect.
I would read the very same Wiki article you linked to because it answers most of your own question. Also, look at some open source structured, non-SQL, columnar databases to get started on the concepts.
But if you ask me to summarize my points in one sentence then here goes: Columnar databases are optimized for read-operations of time series like data, while row-based databases are more optimized for write operations.
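To make that read-pattern argument concrete, here is a small illustrative sketch (my own, not from the thread), with NumPy standing in for the storage engine: a structured array mimics a row layout, while one array per field mimics a columnar layout.

```python
import numpy as np

n = 1_000_000

# Row layout: one record per bar, all fields stored together (48 bytes each).
bar_dtype = [('ts', 'i8'), ('open', 'f8'), ('high', 'f8'),
             ('low', 'f8'), ('close', 'f8'), ('vol', 'i8')]
rows = np.zeros(n, dtype=bar_dtype)

# Column layout: each field is its own contiguous array (8 MB for 'close').
cols = {name: np.ascontiguousarray(rows[name]) for name in rows.dtype.names}

# Reading a single field: the row layout strides across every 48-byte
# record, while the column layout streams one contiguous block.
mean_row = rows['close'].mean()
mean_col = cols['close'].mean()
```

Timing the two reads (e.g. with timeit) typically shows the contiguous column read winning, which is exactly the argument for columnar storage of time series whose queries touch only a few fields.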
Edit:
For clarification purposes, what I meant with "Think about Key/Value, which is essentially what columnar databases are all about" is the following:
I used the term "key-value" because its essentially the simplest No-SQL data storage approach. The point being is that one cannot run queries on values, cannot aggregate values or search by values such as one could in a purely RDBMS through schemata and indexes. This I think (and I am not alone here) is what sets RDBMS apart from "No-SQL" solutions. My point was that once this concept is understood that No-SQL databases are generally schema-less, lack tables (generally not always), and that, and here is the key similarity between key value and columnar dbs, queries are limited to just by keys, so that the DB knows exactly what node a query can run on. Please note that I am making the comparison looking at things from above 30,000 feet, not a detailed key-value store vs. columnar DB comparison. I just believe that once one understands the concept of key-value and the way key-values are queried then I find it much easier to understand columnar database concepts, EVEN THOUGH on the surface columnar databases look very similar to RDBMS which could not be any further from the truth.
Thanks for the reply Freddy. When you say column DBs are all about Key/Value, what is the key? When I think tick/bar data, the key is a datestamp, but that is a row-oriented concept, isn't it? – Darren Cook Jul 9 '12 at 5:38
Another question: I usually think in terms of one table per symbol (instrument/contract). Reading between the lines of your answer, when you have a column-oriented DB do you keep all symbols in one table? – Darren Cook Jul 9 '12 at 5:41
-1 Column oriented DBs are not essentially key-value. They do not shine at writing data but they are good at performing analytics. KDB definitely does shine at aggregating data (that's its primary use case). – chrisaycock Jul 9 '12 at 11:12
@chrisaycock By "aggregate" do you mean things like turning ticks into 1m bars, 1m bars into hourly bars, etc.? And/or do you mean making moving averages and other more complicated indicators? And/or something else? – Darren Cook Jul 10 '12 at 0:45
@Freddy Thanks for the update, sorry I only just saw it. It sounds like you are confusing NoSQL DBs (which, to me, means MongoDB, CouchDB, Cassandra, Redis, etc.) with column-oriented databases? Or do you regard them all as NoSQL solutions? – Darren Cook Aug 3 '12 at 13:32
http://tex.stackexchange.com/questions/126899/gnuplot-invisible
# Gnuplot invisible
I'm trying to plot two implicit curves (using pgfplots and raw gnuplot) but they don't appear in the document. In gnuplot it works just fine, so I guess it has something to do with pgfplots, but I can't figure out what the problem is.
Example:
\documentclass{article}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot3[raw gnuplot] gnuplot {
unset key
set contour base;
set cntrparam levels discrete 0.003;
unset surface;
set view map;
set isosamples 50,50;
set xrange [0:1]
set yrange [-1:1]
f(a,c) = a*cosh(c/a)-N
ell1(a,c) = (c/z)**2+(1/(1-cosh(z)))**2*(a-N/(cosh(z)))**2-(N/cosh(z))**2
N = 1
z = 1.19968
splot f(x,y),ell1(x,y)
};
\end{axis}
\end{tikzpicture}
\end{document}
add the missing ; at the end of each command, then it should work – texenthusiast Aug 6 '13 at 8:02
oh indeed. i thought they were optional as gnuplot didn't need them. Do you want to answer this or shall i flag my question for deletion? – oerpli Aug 6 '13 at 8:11
retain the question; it might look like a silly mistake (sometimes silly can be brilliant), but someone in the same situation may benefit. I made the answer with the plot, hope it's the plot you are looking for – texenthusiast Aug 6 '13 at 8:24
okay. thanks a lot. – oerpli Aug 6 '13 at 8:29
If you look at the ur-file-name.pgf-plot.gnuplot file created by the first pdflatex -shell-escape ur-filename.tex compilation, you will notice a missing ; at the end of each gnuplot command. Feeding commands to gnuplot from a file may not work like a terminal gnuplot session (i.e., command-by-command execution): the ; command-line terminator is required to separate the gnuplot commands when they are fed to gnuplot in a file.
\documentclass[convert=false,border=2pt]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.8}
\begin{document}
\begin{tikzpicture}
\begin{axis}
\addplot3[raw gnuplot] gnuplot {
unset key;
set contour base;
set cntrparam levels discrete 0.003;
unset surface;
set view map;
set isosamples 50,50;
set xrange [0:1];
set yrange [-1:1];
f(a,c) = a*cosh(c/a)-N;
ell1(a,c) = (c/z)**2+(1/(1-cosh(z)))**2*(a-N/(cosh(z)))**2-(N/cosh(z))**2;
N = 1;
z = 1.19968;
splot f(x,y),ell1(x,y);
};
\end{axis}
\end{tikzpicture}
\end{document}
Output:
https://8ch.net/sci/index.html
# /sci/ - Science and Mathematics
Go STEM or go home.
Welcome to /sci/. If you're here for homework help go to >>>/hwk/. If you're here for religion or politics, fuck off.
has anyone tried these / have any idea if these things work?
https://www.iqmindware.com/brain-training-apps
https://www.brainhq.com/why-brainhq/brain-training-your-way/brainhq-courses
any other resources I should look into?
No.5459
The basics:
Get enough sleep. Sleep dep gives you brain damage. (The book Why We Sleep is a great resource.)
Get enough exercise. Cardio literally grows new brain cells. (https://en.wikipedia.org/wiki/Neurobiological_effects_of_physical_exercise)
Make sure you aren't suffering from chronic inflammation.
If you feel depressed, anxious, forgetful, distracted, lack willpower, there's a good chance you are.
Inflammation has a devastating effect on the brain.
Inflammation means your immune system is pissed off.
You usually experience it when you're sick, but certain foods cause it too.
Refined sugars, vegetable oils, omega-6, and in some cases, carbs in general.
I didn't realize what a huge difference it made until it went away. And when I broke the diet to see what would happen, I felt like absolute shit, and couldn't believe I used to live like that all the time.
An easy way to check, is to stop eating everything inflammatory for a few days and see how you feel.
I recommend the beef-only diet (no, you won't die), but beef and greens should work in most cases.
Once you stop giving yourself brain damage, check out Dual-N Back. It trains your working memory.
Best wishes
No.5461
>>5459
thanks anon.
how long have you been doing dual-n-back?
did you notice any effects?
No.5511
>>5459
>check out Dual-N Back. It trains your working memory.
Good advice aside from this. The opinion on this is mixed: https://www.gwern.net/DNB-FAQ#criticism
No.5512
>>5511
can you summarize the criticism? I don't feel like reading all of it.
No.5525
bump
So, … the Chinese landed a rover on the 'Dark' side of the Moon.
They claim the Moon rotates just once for each rotation around the sun. They say it is explained by the phenomenon of 'tidal locking.' If tidal locking is a thing, why is the Earth not tidally locked to the Sun?! Earth spins ~ 12 times each rotation around the Sun.
Any legitimate explanation for this? I have been looking but it seems physicists are busy with other stuff. (I am just a Chemist and Engineer - I don't get it)
No.5516
What flavor of chemist and engineer are you? Or do you mean chemical engineer?
Either way you are going to want to work on your google skills.
No.5518
There is no dark side of the moon. Try looking up at night: you always see the same face, and sometimes it's dark, sometimes it's lit.
Tidal locking happens because friction bleeds away angular momentum. It takes a long time. Earth isn't there yet. For whatever reason the moon locked faster.
No.5523
>>5515
The time it takes for the moon to orbit Earth is the same time it takes the moon to rotate. This is why it’s tidally locked.
>>5515
>If tidal locking is a thing, why is the Earth not tidally locked to the Sun?!
Earth is too far from the Sun.
>>5516
>What flavor of chemist and engineer are you? Or do you mean chemical engineer?
He's a brainlet. I knew what this was when I was twelve.
Is this beautiful trap really going to teach me about Quantum Physics? Or is it a scam?
No.5487
her videos are good for jerking off, not much else
No.5490
She is such a ditz and admits herself that she squandered her undergrad years and doesn't even know basic computer science.
No.5494
More like cumton jizzics
No.5520
>>5448
is she really a trap?
No.5522
has anyone here ever successfully created an atomic weapon?
No.5513
fuck off fbi fag
go tell government to bomb Iran some more
No.5521
What is magnetism
No.5524
>>5482
An "atomic bomb" could literally be a dirty bomb: uranium and a few sticks of dynamite. Would make a city uninhabitable for generations. Don't do that though.
I have n marbles, where n ≤ 100. If I group the marbles into groups of three, I will have 2 marbles left that belong to no group. If I group the marbles into groups of five, I have no marbles left. But when I group them into groups of seven, I will have 4 marbles left. How many marbles do I actually have? In other words, determine the value of n.
No.5450
>>5433
I'm sorry. I genuinely don't get the joke here.
Are you praising him for still using C or are you mocking him?
No.5451
>>5431
How does one algebraically solve this problem without any blind trial and error?
No.5468
>>5428
anon, I know you already found the answer (and probably aren't checking this board anyways) but the lines wouldn't use the same variable, because the number of groups of 3, groups of 5, etc. are going to be different. It's y (the solution) that will be the same between them.
No.5478
I guessed 80. But the 95 guy's post showed that I was a brainlet.
No.5517
>>5441
that code doesn't even work. those if statements are outside of the for loop.
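For completeness, a minimal brute-force sketch of the marble problem (my own; not the code the posts above argued about):

```python
# Groups of three leave 2, groups of five leave 0, groups of seven leave 4.
solutions = [n for n in range(1, 101)
             if n % 3 == 2 and n % 5 == 0 and n % 7 == 4]
print(solutions)  # [95]
```

By the Chinese remainder theorem the solution is unique modulo 3*5*7 = 105, so 95 is the only answer at or below 100.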
As a corollary in this paper, one finds a remarkable disproof of the Riemann Hypothesis.
>Real Numbers in the Neighborhood of Infinity
Anons, what's the easiest way to determine what sub-field I want to go into? I'm doing physics and don't know what direction I want to go. Quantum seems too pop-sci, but classical mechanics doesn't seem to have a lot of job openings. I heard high-energy physics is good, but I wanna hear your opinions, either on finding a field or just on this subject.
No.5504
>>5499
Independent funding or government funding? I saw that the US was cutting theirs in favor of computing.
No.5507
>>5504
Private funding, there are ~5 labs racing to be the first but as it's fusion they could be racing for a long time.
No.5508
>>5507
ah, thanks anon
No.5509
>>5495
are you not already interested in something? or do you like a broad range of things which makes it hard to decide what exactly you want to do?
IMO you should just do what you like doing, but of course you don't want to get yourself into a dead end career.
No.5510
>>5509
Yeah, it's from the broad range of topics I like; I'll keep my options open (majoring in math with a stats minor) and probably look into fusion like the above anon said.
What are some good trackers for academic software?
No.5493
bump
No.5501
None that I know of. Most of the academic software I use is open source anyways. Mathematica is pretty nice though. Are you looking for anything specific? Because I know Mathematica is on a whole bunch of public trackers.
First post on this board so you better reply back faggot.
No.5505
>>5501
not OP, but which ones do you use?
No.5506
>>5505
SciPy, OpenFOAM, and OpenSCAD to name a few, though then again I am unsure if you would call those academic software. It is what you make of it. Check out http://openscience.org/software/ if you are looking for more. The site is a little out of date, but if a project is dead it is fairly easy to find someone that took up the reins and forked it.
so, for n truly random coin flips the number of possible outcome patterns is 2^n.
but what happens if i hard-code an even distribution of heads and tails?
for reasonably small n i can calculate it by hand.
e.g. n=2 coin flips:
either HT or TH, compared to the truly random TT, TH, HT, HH.
i've cut the possible outcomes by exactly half.
for n=4 coin flips:
HHTT, HTHT, HTTH, THTH, THHT, TTHH
6 out of the 2^4 possible outcomes. the number of possible outcomes compared to true randomness keeps shrinking the more i increase n.
i'm pretty sure this is trivial for most, but i couldn't find the formula for n coin flips. i've calculated (ok, ok, i counted) a bit higher n's on paper via binary trees. tried to look up a formula, but couldn't find one – probably just missing the right wording for the question.
can someone help? (and maybe explain, not just give the answer)
No.5502
If you've ever heard of the choose function, it's often written something like this (imagine the parentheses are one giant parenthesis):
>(n)
>(m)
It stands for the number of ways to choose m distinct items out of a bucket of n items, regardless of the order. It's actually shorthand for this equation.
>(n!)/[m! * (n - m)!]
As for why it's this, you can first think about the number of ways to pick the first "m" out of a set of "n" distinct items, where the order matters. This is essentially
>n * (n - 1) * (n - 2) * … * (n - m + 1)
Since there are "n" ways to pick the first item, "n - 1" ways to pick the second, and so on. If you think about it, this is essentially
>1 * 2 * 3 * … * (n - m - 1) * (n - m) * (n - m + 1) * … * (n - 1) * n [This is n!]
>divided by
>1 * 2 * 3 * … * (n - m - 1) * (n - m) [This is (n - m)!]
Which can be written as n! / (n - m)!
Finally, since we don't actually care about the order in which we picked the items, we can divide by the number of ways to arrange the "m" items we picked, which is m!. Thus we get
>[n! / (n - m)!] / m! = n! / [m! * (n - m)!]
Here's another way to think about this. The number of ways we can arrange "n" numbers is n!. After arranging our numbers, we put the first "m" items in one set and the remaining "n - m" items in another. Since we don't care about the order of the first "m" items, we divide n! by m!. But we also don't care about the order of the "n - m" items we didn't pick, so we divide it by (n - m)! as well.
Keep in mind this means that picking "m" elements out of "n" is the same thing as picking "n - m" elements out of "n". In other words:
>(n choose m) = (n choose n - m)
So to finally answer your question: hard-coding an even split means counting the patterns with exactly n/2 heads, and there are n! / [(n/2)! * (n/2)!] of them, i.e. "n choose n/2" (see the sketch below).
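A minimal numeric comparison of the two counts (my addition):

```python
# Patterns with exactly half heads vs. all 2**n patterns.
from math import comb

for n in (2, 4, 8, 16, 32):
    print(n, comb(n, n // 2), 2 ** n)
# n=2 gives 2 of 4 and n=4 gives 6 of 16, matching the hand counts above;
# the constrained fraction shrinks on the order of 1/sqrt(n) as n grows.
```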
No.5503
>>5502
thank you
So get this. I look into the vaccine debate extensively. I look at all sides and I notice how all the anti-vaxxers are conspiracy theorists from conspiracy website circlejerks. I come to have faith in my country's public health organization and all other countries' public health organizations and, of course, my public schools, because the whole world's public health orgs and schools teach the same thing in regards to the safety and effectiveness of vaccines… So I think I'm ready, and I can debate anyone with confidence.
Then a friend of my close friend says his cousin got some sort of trauma that resulted in a speech impediment, and all credible resources diagnosed it as a vaccine related injury…
My close friend informs me that this is proof vaccines are bad, and that I need to not be so gullible and do better research.
Doesn’t help that I’m fat and autistic and that anti vaxxer friend is in shape and never gets sick despite being a chain smoker.
Which brings us to today’s topic. What is the fine line between being educated and brainwashed? Has anyone else had an instance where reality seems to contradict what is taught in schools? Sometimes research and “educating” myself and others just makes me feel like the ignorant one. I feel useless no matter what I look into and what stance I try to uphold.
No.5497
>>5496
Most modern vaccines are schemes.
Back in the day it was more a good thing. Some people can't get their arms down from patenting and controlling the prescription of antibiotics and the few vaccines actually solving real pandemics. Along with being legal pushers and pushing extremely harmful prescription drugs over the counter.
The rest is just about dollars. The silver and aluminium the vaccines contain to create a whole-body immunological reaction is extremely volatile and can create a wide range of bad effects from heart failure to an itch in the ass.
Don't mess with them. Take only the really necessary ones.
I wouldn't give my children any in this day and age where I live.
No.5498
It seems you were looking for a black and white answer in a grey world.
Vaccines greatly increase average life-span and average quality of life, but as with all averages there is a lower tail, namely the extremely rare cases where vaccines do harm, mostly due to allergic reactions.
>What is the fine line between being educated and brainwashed?
Reason. If you can use critical thinking to support your position, that is a good start, but keep in mind the universe is under no obligation to make sense (see wave/particle duality etc.).
No.5519
>Ideas so good, they have to be mandatory.
Let's say there's a line of men fucking each other in the ass. The one in the very back (the only one that is not getting fucked in the ass) has HIV. How long until the guy in the front gets HIV, if they continue fucking like this for 600 thrusts an hour?
No.5492
>>5486
Practically an eternity.
It's kind of glossing over details, but essentially any virus must successfully infiltrate cells and modify their DNA before it can begin replicating and producing particles capable of causing disease. This is something like 2-4 days for HIV. So it would have to be a really long orgy.
Plus the actual risk of contracting HIV from an infected individual in one encounter is a lot lower than you might think. I was surprised when I looked it up. I mean you’re still playing Russian roulette but it’s not like all six chambers are loaded.
>Friction is a non conservative force
>Friction arises from electromagnetic forces acting on atoms between two surfaces with relative motion between each other
>Electromagnetic forces are conservative
>MFW
No.5483
>>5464
Look at how you're defining your systems.
No.5485
>being a cuckservative
>not being third position
No.5491
>>5485
Gaddafi was 3rd position and he got stabbed in the butthole
Does 0^0 = 1?
Make your case, why or why not. If not, what should it be defined as?
5 posts omitted. Click reply to view.
No.5458
>>5413
It's zero, because any fraction where zero is the numerator is zero.
No.5462
Let's look at the Taylor series of the exponential function: $\sum_{i=0}^\infty x^i /i!$
If this is to make any sense, then 0^0 has to be one, since the exponential function is one at x=0.
No.5480
>>5413
Zero has no multiplicative inverse.
No.5488
>>5480
Is this fact a consequence of zero being the additive neutral element?
No.5489
>>5421
Generally, if 0/0 = x, then x*0 = 0, which holds for any real x. Therefore, any real x is a valid value for 0/0, and so 0/0 is indeterminate (as it does not correspond to a specific number).
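One way to reconcile the answers above (my addition): as a limit form $0^0$ is genuinely ambiguous, even though the convention $0^0 = 1$ is what makes the Taylor series argument above work:
$$\lim_{x\to 0^+} x^x = 1, \qquad \lim_{x\to 0^+} 0^x = 0, \qquad \lim_{x\to 0^+} x^0 = 1.$$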
I hope this isn't taken as profane by the scientists around here. I have been watching this show, and the interesting thing about its premise has been how most powers seem to come with a certain amount of drawbacks; of course there are a lot which completely break the rules, but some others may seem OK.
So… let's say there are a few ideas for powers that I would like to write about or explore for whatever reason, and I would like some real basis for them. This can still sound overpowered for all intents and purposes, but let's try it.
What if there was a character whose power basically meant: this guy is a human particle accelerator/collider? What kind of things come to mind with such a premise, other than to ask me to kill myself?
https://physics.stackexchange.com/questions/455348/why-does-air-move-from-higher-to-lower-pressure/455394
# Why does air move from higher to lower pressure?
Somebody answered this question elsewhere and said it's because there is a higher force acting at the high-pressure region compared to the low-pressure region, so a net force pushes the air towards the lower-pressure region. My problem with this is that pressure is a scalar quantity, not a vector, from what I understand. So how do we know which direction the air moves in the high-pressure region? If anybody could also link a diagram showing this, that would be helpful, as I'm having difficulty visualising this. Thanks.
• The air will tend to move in the direction of the steepest pressure gradient. Jan 19, 2019 at 20:00
• ... with a deviation due to the Coriolis force. – user137289 Jan 19, 2019 at 21:08
A good way to look at this is to think of the gas as composed of many small particles.
In this context, air pressure is proportional to the amount of molecular collisions per unit area per unit time of the medium on its walls.
So, if you have a wall separating higher-pressure air from lower-pressure air, more particles will be colliding on the high-P side than on the low-P side of the wall.
If the wall is removed, the particles no longer collide with it and just fly through. Since the number of collisions per unit time is higher on the high-P side than on the low-P side, it is natural that there will be a net flow of particles towards the low-P side, until the pressure equilibrates over the entire volume.
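A toy simulation of this molecular picture (my own sketch, not from the answer): start with more particles on one side of an opening and let randomly chosen particles hop across; the net flow runs toward the emptier side simply because more hop attempts originate on the crowded side.

```python
import random

left, right = 900, 100  # high-pressure side vs. low-pressure side
for _ in range(20000):
    # One particle crosses the opening per step; it comes from the left
    # with probability proportional to the number of particles on the left.
    if random.random() < left / (left + right):
        left, right = left - 1, right + 1
    else:
        left, right = left + 1, right - 1

print(left, right)  # fluctuates around 500, 500 once pressure equalizes
```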
The Navier-Stokes equations of fluid flow can be written as $$\rho\left [ \frac{\partial v}{\partial t}+(v\cdot \nabla) v \right ] = -\nabla P + \rho g + \mu \nabla^2 v.$$ The left side is basically the $ma$ in Newton's equation $F=ma$, and the right side is the total force. The first term on the right is the pressure term that interests us; the other two are gravity and viscosity.
$\nabla P$ is the gradient of the pressure, turning the scalar pressure field into a vector field. It points in the direction of the greatest rate of increase of the pressure. This is a hint of what is going on: what matters is not pressure but pressure differences. Regions of constant pressure do not cause wind, but if there is an imbalance of pressure then air will move to reduce it, and hence flow along the (negative) gradient until the pressure difference is removed.
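A one-dimensional illustration (my addition): take a linear pressure profile $P(x) = P_0 - \alpha x$ with $\alpha > 0$, so pressure is high at small $x$ and low at large $x$. Then
$$-\nabla P = -\frac{dP}{dx}\,\hat{x} = \alpha\,\hat{x},$$
a force per unit volume pointing toward increasing $x$, i.e. toward the low-pressure region, which is exactly the direction in which the air accelerates.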
Thermodynamics to the rescue. Forget the sun, the earth's motions, the effects of altitude, etc. Think of two reservoirs of a monomolecular gas, equal in volume (keeping it simple!) but A is of higher pressure than B.
Connect the two with a valved tube, and open the valve. The pressure equalizes simply because there are initially more molecules in A to make their way into B than vice versa. Statistical mechanics in its crudest form. More precisely, B, which permits longer mean-path lengths, has become available to the higher pressure gas in A, and the pressure equalizes through a macroscopic transfer of gas, based on the statistical understanding of the motions of trillions of trillions of identical particles. This happens more or less rapidly depending on the cross-section of the throat of the valve.
Notice that I have nowhere described the process of diffusion.
One might say of this that initially there was lower entropy, thus work could be extracted, and was-- the work of equalizing pressure. Or one could invoke the equipartition theorem, which states that Entropy will have her way: energy gets parcelled out equally to all degrees of freedom. The Navier-Stokes equation, abstruse and elegant as it is, is the exfoliation of some entirely commonsense notions. The gas laws follow rather more directly.
Thermodynamics, or statistical mechanics, is often (or perhaps always) the basis of a branch of physics. The Dutch physicist Verlinde derived Newtonian mechanics from thermodynamics! And I am sure that more work like his will be done.
And I notice that an even simpler answer was entered while I was dithering with grammar! My compliments.
• But initially, while there is a pressure difference, there is no random walk. You describe diffusion, with for example one gas on one side and a different gas on the other side. It takes a very long time to mix.
– user137289
Jan 20, 2019 at 13:39
• Pieter, think of a tank of gas being vented in outer space. The gas released finds itself in a region where the mean-path-length (between collisions- that's the term I wanted) is arbitrarily large. It goes, and doesn't come back. We think macroscopically that the vacuum is pulling it out of its tank, or the pressure is pushing it out, but the real story is microscopic, atom-scaled-- statistical mechanics. Jan 20, 2019 at 18:54
My problem with this is that pressure is a scalar quantity, not a vector, from what I understand. So how do we know which direction the air moves in the high pressure region?
I think you are asking for a more fundamental answer, at a classical macroscopic level. This I'll try now.
You're perfectly right: pressure is a scalar, even though from a more sophisticated viewpoint it could be called an "isotropic tensor". Quite intimidating, but it conveys a more basic meaning of the concept.
What is meant is simple; let's see it with an example. Suppose you have a tank filled with water. Take a solid cube, tie it to a wire, and drop it into the water, leaving it suspended at half height. Assume two faces of the cube stay horizontal, the other four vertical.
Because of pressure, water pushes on all six faces. It's a property of fluids (liquids and gases), discovered by Pascal, that the force on any face is always perpendicular to that face and its intensity is $$F = P A$$ ($$P$$ pressure, $$A$$ the face's area). This remains true if you rotate the cube into any orientation.
I've just stated the famous Pascal's principle. You may well take it as an experimental truth.
Note that I've written $$P$$ for pressure, tacitly assuming its value is the same everywhere. If so, the net force of the water on the cube is obviously zero, since forces on opposite faces cancel each other. But I expect you to protest: "Hey, I heard of buoyancy, of Archimedes' principle. Where did it get lost?"
You're absolutely right, and I pretended to forget all that for the benefit of my presentation. Of course it isn't true that pressure is the same everywhere. It isn't if water is resting in a gravitational field, as we are accustomed to see. Maybe you know of another fundamental law of hydrostatics, Stevin's law: $$P = P_0 + \rho g h$$ where $$P$$ is the pressure at a point at depth $$h$$ under the water's free surface, $$P_0$$ is the atmospheric pressure just above the water, $$\rho$$ the water's density, and $$g$$ the intensity of the gravitational field (assumed uniform).
Then it's no longer true that forces on opposite faces cancel. It remains true for the vertical faces, but not for the two horizontal faces. If $$P$$ is the pressure on the upper face, the pressure $$P'$$ on the lower face is higher: $$P' = P + \rho g a$$ if $$a$$ is the side of the cube. The resultant of the two forces is directed upward and amounts to $$F_{\mathrm{net}} = \rho g a^3 = \rho g V = M g$$ where $$M$$ is the mass of a volume $$V$$ of water (the cube's volume).
We have retrieved, for this special case, Archimedes' principle.
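As a quick numerical illustration of the last two formulas (my own sketch, with made-up values: water of density $$\rho = 1000\ \mathrm{kg/m^3}$$, a cube of side $$a = 0.1\ \mathrm{m}$$ suspended at half a meter's depth):

```python
rho = 1000.0   # water density, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
a = 0.1        # cube side, m
P_top = 101325.0 + rho * g * 0.5          # Stevin: pressure at 0.5 m depth, Pa
P_bottom = P_top + rho * g * a            # pressure one cube-side deeper

# Net upward force = pressure difference times face area = rho*g*a^3
F_net = (P_bottom - P_top) * a**2
print(F_net, rho * g * a**3)   # both 9.81 N: Archimedes' principle recovered
```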
Now we are able to tackle your main question. It deals not with the static uniform situation considered so far, but with a nonuniform atmosphere, where pressure differences cause air to move. Why there are pressure differences in the first place is another story I'm not going to address.
So let's substitute the atmosphere for our water tank, and in place of a solid cube consider a small air volume, only ideally separated from the rest by a closed surface in the form of a cube. If for some reason the pressure on opposite sides (e.g., a pair of vertical sides) is not the same, then a net horizontal force will act on that air cube, from the higher-pressure side towards the lower one.
It only remains to apply Newton's second law to conclude that the air cube will be accelerated from high pressure towards low pressure. You were already warned in other answers that things in the atmosphere may not be treated so simply - there will be other forces, first of all friction, then the "mysterious" Coriolis force. Actually it's very rare that air moves straight from high to low pressure: more frequently it spirals around high or low pressure regions (anticyclonic or cyclonic regions, respectively) with a diverging (for an anticyclone) or a converging (for a cyclone) component.
Here is an easy way to think of this.
The atmosphere is an ocean consisting of air and its depth and density determine the pressure we measure all the way down at the bottom of that ocean, where we live.
Since cold air is denser than warm air, a region of the atmosphere consisting of cold air will be slightly heavier than the (warmer) air surrounding it and hence will tend to sink towards the bottom of the ocean and spread out underneath the warmer air around it.
In addition to this, the air ocean has currents in it like the water ocean does, and those currents sometimes collide and pile the air ocean up to a slightly greater depth in that region. The pressure we experience down at the bottom underneath one of those pileups will be greater than normal, and the pileup strives to sink down under its own weight and spread out against the bottom.
https://moderndive.netlify.app/2-6-facets.html
## 2.6 Facets
Before continuing with the next of the 5NG, let’s briefly introduce a new concept called faceting. Faceting is used when we’d like to split a particular visualization by the values of another variable. This will create multiple copies of the same type of plot with matching x and y axes, but whose content will differ.
For example, suppose we were interested in looking at how the histogram of hourly temperature recordings at the three NYC airports we saw in Figure 2.9 differed in each month. We could "split" this histogram by the 12 possible months in a given year. In other words, we would plot histograms of temp for each month separately. We do this by adding a facet_wrap(~ month) layer. Note the ~ is a "tilde" and can generally be found on the key next to the "1" key on US keyboards. The tilde is required, and you'll receive the error Error in as.quoted(facets) : object 'month' not found if you don't include it here.
ggplot(data = weather, mapping = aes(x = temp)) +
geom_histogram(binwidth = 5, color = "white") +
facet_wrap(~ month)
We can also specify the number of rows and columns in the grid by using the nrow and ncol arguments inside of facet_wrap(). For example, say we would like our faceted histogram to have 4 rows instead of 3. We simply add an nrow = 4 argument to facet_wrap(~ month).
ggplot(data = weather, mapping = aes(x = temp)) +
geom_histogram(binwidth = 5, color = "white") +
facet_wrap(~ month, nrow = 4)
Observe in both Figures 2.13 and 2.14 that as we might expect in the Northern Hemisphere, temperatures tend to be higher in the summer months, while they tend to be lower in the winter.
Learning check
(LC2.18) What other things do you notice about this faceted plot? How does a faceted plot help us see relationships between two variables?
(LC2.19) What do the numbers 1-12 correspond to in the plot? What about 25, 50, 75, 100?
(LC2.20) For which types of datasets would faceted plots not work well in comparing relationships between variables? Give an example describing the nature of these variables and other important characteristics.
(LC2.21) Does the temp variable in the weather dataset have a lot of variability? Why do you say that?
http://mathoverflow.net/revisions/78685/list
# Image of complex conjugation by modular representations in characteristic 2
The question I am going to ask looks well-known, and I even may have heard things about it (but since I used to be deaf to anything in characteristic 2, whatever I heard has never been recorded in my mind) so it may be seen as a request for references. I have not been able to find a reference myself.
Let $f$ be a cuspidal eigenform of weight $k \geq 2$ for some congruence subgroup of $SL_2(\mathbb{Z})$. Then, as is well-known, for every prime $p$ there exists a unique, absolutely irreducible Galois representation $\rho : G_{\mathbb Q} \rightarrow GL_2(K)$, where $K$ is a suitable finite extension of $\mathbb Q_p$, odd (that is, such that $\rho(c)$ is conjugate to the diagonal matrix $(1,-1)$), and satisfying the Eichler-Shimura relations.
I am interested in the case $p=2$. Let $A$ be the ring of integers of $K$, $m$ its maximal ideal, and $k=A/m$ the residue field, of characteristic $2$. I want to reduce $\rho$ mod $m$. As is still well-known, there are several ways to do that, one for each choice of a stable $A$-lattice $\Lambda$ in $K^2$: one defines the representation $\bar \rho_\Lambda$ over $k$ as the action of $G_{\mathbb Q}$ on $\Lambda/m \Lambda$. The various $\bar \rho_\Lambda$ all have the same semi-simplification.
Now my question: What is the conjugacy class of $\bar \rho_\Lambda(c)$ in $GL_2(k)$? The characteristic polynomial of $\bar \rho_\Lambda(c)$ is $X^2-1 = (X-1)^2$ in $k$, so either this matrix is the identity, or it is conjugate to the unipotent matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. I'd like to know when (that is, for which $f$, $\Lambda$) we are in the first case, and when we are in the second case.
Note: the fact that $\rho(c)$ is conjugate to the diagonal matrix $(1,-1)$ does not trivially imply that $\bar \rho_\Lambda(c)$ is necessarily the diagonal matrix $(1,1)$, because the two eigenlines of $\rho(c)$ may not be in good position w.r.t. the lattice $\Lambda$; that is, the sum of their intersections with $\Lambda$ may be a proper sub-lattice of $\Lambda$. For example, if $\rho(c)$ is the anti-diagonal matrix in the canonical basis of $K^2$, and $\Lambda = A \oplus A$ is a stable lattice, then $\bar \rho_\Lambda(c)$ is clearly not the identity.
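To make the closing example explicit (this computation is an addition, not part of the original question): with $\rho(c) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ and $\Lambda = A \oplus A$, the reduction is $\bar \rho_\Lambda(c) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ over $k$. Since $\operatorname{char} k = 2$, we have
$$\left(\bar \rho_\Lambda(c) - I\right)^2 = \bar \rho_\Lambda(c)^2 - 2\,\bar \rho_\Lambda(c) + I = 2\left(I - \bar \rho_\Lambda(c)\right) = 0,$$
using $\bar \rho_\Lambda(c)^2 = I$; as $\bar \rho_\Lambda(c) \neq I$, it is a nontrivial unipotent element, hence conjugate to $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$.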
http://aux.planetmath.org/compassandstraightedgeconstructionofgeometricmean
# compass and straightedge construction of geometric mean
Given line segments of lengths $a$ and $b$, one can construct a line segment of length $\sqrt{ab}$ using compass and straightedge as follows:
1. Draw a line segment of length $a$. Label its endpoints $A$ and $C$.
2. Extend the line segment past $C$.
3. Mark off a line segment of length $b$ such that one of its endpoints is $C$. Label its other endpoint as $B$.
4. Construct the perpendicular bisector of $\overline{AB}$ in order to find its midpoint $M$.
5. Construct a semicircle with center $M$ and radii $\overline{AM}$ and $\overline{BM}$.
6. Erect the perpendicular to $\overline{AB}$ at $C$ to find the point $D$ where it intersects the semicircle. The line segment $\overline{DC}$ is of the desired length.
This construction is justified because, if $\overline{AD}$ and $\overline{BD}$ were drawn, then $\angle ADB$ would be a right angle (being inscribed in a semicircle), so the two smaller triangles would be similar, yielding
$\frac{AC}{DC}=\frac{DC}{BC}.$
Plugging in $AC=a$ and $BC=b$ gives that $DC=\sqrt{ab}$ as desired.
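As a quick sanity check of the construction (not part of the original entry; the coordinates and helper function are our own), one can place $A$, $C$, and $B$ on an axis and compute $DC$ from the semicircle:

```python
import math

def geometric_mean_construction(a: float, b: float) -> float:
    """Coordinate check of the construction: A=(0,0), C=(a,0), B=(a+b,0)."""
    A, C, B = 0.0, a, a + b
    M = (A + B) / 2          # midpoint of AB
    r = (B - A) / 2          # radius of the semicircle
    # D lies on the semicircle directly above C: DC^2 = r^2 - (C - M)^2
    return math.sqrt(r**2 - (C - M)**2)

print(geometric_mean_construction(4.0, 9.0))                                 # 6.0 == sqrt(4*9)
print(math.isclose(geometric_mean_construction(2.0, 5.0), math.sqrt(10.0)))  # True
```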
If you are interested in the rules for compass and straightedge constructions, see the entry on that topic.
https://proofwiki.org/wiki/Definition:Involution_(Mapping)/Definition_2
# Definition:Involution (Mapping)/Definition 2
## Definition
An involution is a mapping which is its own inverse:
$f: A \to A$ is an involution precisely when:
$\forall x, y \in A: \map f x = y \implies \map f y = x$
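For illustration (not part of the original entry), here is a minimal Python sketch of an involution, negation on the reals, together with a spot check of the defining property:

```python
def f(x: float) -> float:
    """An involution on the reals: f is its own inverse."""
    return -x   # negation; 1/x on the nonzero reals would work too

# Check the defining property f(f(x)) = x on a few sample points.
for x in [-2.5, 0.0, 3.0, 7.25]:
    assert f(f(x)) == x
print("f is an involution on the sampled points")
```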
## Also known as
An involution is also known as an involutive mapping or an involutive function.
An involutive mapping can also be found described as self-inverse.
https://mangband.org/forum/viewtopic.php?f=24&p=10006&sid=e59e89c7b39feab39559498bd0bd410a
## Saving macros
Flambard
King Vampire
### Re: Saving macros
ema wrote:
Thu 30.05.2019, 20:55
mmmh...I remember in the old versions of MAngband (some years ago) I used keys like "-", "N", "x" and "h", for example...
But now these keys interact with macros in other sections of the game (like in the shops) or in creating macros (like \em1h with key "h" as the macro command).
In that case, could there be an error in how I created my macro sequence (I now see new commands (8,9,0))?
Sorry for my unbelievable english...
Right, those were either "command macros" or "keymaps". They only execute when the game is waiting at a command prompt, and don't interfere with other things. I don't think there's a good way to add "command macros" right now (something's missing in the UI/pref file loader), but I'm fairly sure you can create keymaps.
You will need to edit your pref file by hand; the syntax is very similar to macro definitions (an A line with the action, followed by a C line with the mapping), for example:
# Execute action 'm1b' when 'h' is pressed
A:m1b
C:0:h
(0 here means "normal keymap", 1 would mean "roguelike keymap")
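As an untested variation of the example above (the action and key are only illustrative), the same binding under the roguelike keyset would just change the first field of the C line:

# Execute action 'm1b' when 'h' is pressed, roguelike keyset
A:m1b
C:1:h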
Hope that was of any help.
ema
Greedy Little Gnome
### Re: Saving macros
Troubles with macros continue.
But it doesn't matter, I'm switching to TomeNET.
Thx for Your attention anyway, Flambard!
https://www.physicsforums.com/threads/very-basic-qm-problem-commuter-of-position-and-momentum-operators.308344/
# Very Basic QM problem: Commutator of position and momentum operators
I'm not exactly sure if this belongs in introductory or advanced physics help.
## Homework Statement
In my book, the author was explaining the proof of the uncertainty relation between position and momentum.
It simply stated that $[x,p] = i\hbar$ ($\hbar$ is the reduced Planck constant).
But when I tried to verify it, I got $-i\hbar$. I know it would give the same result, but it still won't be good for me to mess up a fundamental concept so early.
## Homework Equations
$$\hat{p} = -i\hbar \frac{d}{dx}$$
$$\hat{x} = x$$
$$[A,B] = AB - BA$$
## The Attempt at a Solution
$$[x,p]\left|\psi\right> = (xp - px)\left|\psi\right> = xp\left|\psi\right> - px\left|\psi\right>$$
It would become this:
$$-i\hbar x \frac{d\psi}{dx} - i\hbar x - \left(-i\hbar x \frac{d\psi}{dx}\right) = -i\hbar x$$
which is not the answer my book gave me.
Dick
Homework Helper
Your answer isn't right. The px part means you have to differentiate x*psi. Use the product rule.
Hello, Pinu7! I wish I could be of some help.
Firstly, I'd like to tell you that the given attempted solution is incorrect: the sign of $i\hbar x$ should be plus rather than minus. It is a slight carelessness in expanding the derivative of a product. Now I will redo the calculation in detail.
For the simplified one-dimensional case, as put forward in the question (or for the x-component of a three-dimensional analysis):
$$\hat{x}=x$$
$$\hat{p}= -i\hbar \frac{d}{dx}$$
Hence, with the quantum Poisson bracket (commutator):
$$[\hat{x}, \hat{p}]\,\phi = \hat{x}\hat{p}\,\phi - \hat{p}\hat{x}\,\phi = -i\hbar x \frac{d\phi}{dx} - \left(-i\hbar \frac{d}{dx}(x \phi)\right)$$
$$= -i\hbar x \frac{d\phi}{dx} - \left(-i\hbar \phi - i\hbar x \frac{d\phi}{dx}\right) \qquad \text{(Look out! Your carelessness happens here!)}$$
$$= i\hbar\, \phi$$
Just as the textbook gives.
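As a quick symbolic check of this algebra (our addition, not from the thread), SymPy reproduces the product-rule step:

```python
import sympy as sp

x = sp.Symbol('x', real=True)
hbar = sp.Symbol('hbar', positive=True)
psi = sp.Function('psi')(x)

# Momentum operator acting on a wavefunction: p f = -i*hbar * df/dx
p = lambda f: -sp.I * hbar * sp.diff(f, x)

# [x, p] psi = x p psi - p (x psi); the product rule produces the extra term.
commutator = sp.expand(x * p(psi) - p(x * psi))
print(sp.simplify(commutator))   # I*hbar*psi(x)
```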
https://www.nature.com/articles/s41467-021-22422-7?error=cookies_not_supported&code=68261495-c0dc-4620-9689-c650a593949a
## Introduction
In cells, actin filaments are organized into cross-linked, branched, and bundled networks. These different architectures appear in structures, such as filopodia, stress fibers, the cell cortex, and contractile actomyosin rings; each has unique physical properties and fulfills different roles in important cellular processes1. These different structures must be actively assembled and maintained by cellular factors, such as the many actin cross-linking proteins. By mediating higher-order actin organization, cross-linkers allow actin filaments to fill a diverse array of structural and functional roles within cells2,3.
In many cases, actin networks are linked to, or organized around, cellular membranes. Actin polymerization is a driving force behind many examples of membrane dynamics, including cell motility, membrane trafficking, and cell division1. Many of the actin-binding proteins involved in these processes are directly regulated via interactions with phospholipid bilayers4,5 and membrane interactions have, in turn, been shown to physically guide actin assembly6. While the link between the actin cytoskeleton and phospholipid bilayers is clear, how these connections affect the large-scale organization of complex actin networks remains an open question.
Actin is not only one of the most prevalent proteins in current reconstitution experiments7,8, but was also one of the first proteins to be explored in such approaches9,10. The focus of actin-related work has since shifted from identifying the components responsible for muscle contraction11, to investigating more detailed aspects of the cytoskeleton8,12, such as the dynamics of actin assembly13,14 or the cross-talk with other cytoskeletal elements15. These experiments have extended to actin–membrane interactions, including reconstitution of actin cortices on the outside of giant unilamellar vesicles (GUVs)16,17,18, and contractile actomyosin networks associated with supported membranes19,20,21. Recently, creating a synthetic cell with minimal components recapitulating crucial life processes, such as self-organization, homeostasis, and replication, has become an attractive goal22,23. As such, there is increased interest in work with actin in confinement and specifically within GUVs24,25, in order to mimic cellular mechanics, by encapsulating actin and actin-binding proteins in vesicles26,27,28,29. However, the investigation of higher-order actin structures or networks has been the subject of few studies thus far29,30,31,32.
Particularly interesting for the reconstitution of actin-related cell processes is the co-encapsulation of myosin with actin, in order to form contractile actomyosin structures. While Tsai et al. showed the reconstitution of a contractile network in vesicles26, and others have reconstituted actomyosin networks in vesicles that imitate actin cortices27,29, contractile actomyosin rings have proven difficult to achieve. A true milestone toward the reconstitution of a division ring is the work of Miyazaki and coworkers, who encapsulated actomyosin with a depletant in water-in-oil droplets33. They showed not only that the formation of equatorial rings from actin bundles is a spontaneous process that occurs in spherical confinement, in order to minimize the elastic energy of the bundles, but also demonstrated the controlled contraction of these actomyosin rings.
Due to the difficulty of encapsulating functional proteins within membrane vesicles, much of the past work has been limited to water-in-oil emulsions and adding proteins to the outside of vesicles or onto supported lipid membrane systems. However, novel encapsulation methods such as continuous droplet interface crossing encapsulation (cDICE), as used here, have enabled the efficient transfer of proteins and other biomolecules into cell-sized phospholipid vesicles, as an ideal setting to study complex cellular processes involving membranes34,35,36,37. The challenges and applications of protein encapsulation in GUVs are summarized in a current review article38. Here, we optimized actin encapsulation for a high degree of reproducibility and precision. This allowed us to reconstitute novel cell-like cytoskeletal features, as well as compare our experimental results with numerical simulations of confined interacting actin filaments. The development of experimentally testable predictive theoretical models is central for the future design of complex experiments that approach the functional complexity of biological systems.
We combined actin bundling and actin–membrane linkage to obtain results more closely resembling in vivo morphologies than previously achieved in vitro. Specifically, we induced the formation of membrane-bound single actin rings, which imitate the contractile division rings observed in many cells. In agreement with our numerical simulations, we show that membrane anchoring significantly promotes the formation of actin rings inside vesicles. We achieved close to 100% probability of ring formation in vesicles when using the focal adhesion proteins talin and vinculin, which we recently identified as effective actin bundlers39. With the inclusion of motor proteins, these actomyosin rings contract similar to those observed in yeast protoplasts40.
Thus, in this study, we not only achieve the formation of membrane-attached actin rings within lipid vesicles, but also observe large-scale membrane deformation when including myosin in the system. Although aspects of our study were previously addressed individually, such as encapsulation of actin bundles, actin binding to the inner membrane leaflet of a vesicle, or encapsulation of contractile actomyosin networks in vesicles, until now it proved too experimentally challenging to reproducibly combine these within one experimental system. Our results provide a high-yield approach, returning reproducible and quantifiable results, that brings us that much closer to the ultimate goal of being able to quantitatively design and experimentally achieve full division of a synthetic membrane compartment, and thus, to the self-reproduction of artificial cells, a persistent goal in bottom-up biology41,42,43,44.
## Results
### Experimental system
In order to investigate the interplay between actin cross-linking and membrane binding, we used a modified cDICE method45,46 to encapsulate G-actin with associated proteins and generate cytoskeletal GUVs made from the lipid POPC (Fig. 1a). Since components cannot be added once the reaction mix is encapsulated, the precise composition of the initial reaction mix is crucial. By tuning concentrations of the polymerization buffer, bundling proteins, membrane anchors, and motor proteins, we manipulated the final morphology of the actin network.
By co-encapsulating actin with known actin cross-linking proteins, we achieved large-scale networks with clearly discernible actin structures, similar to earlier studies with cytoskeletal GUVs30. We tested four different types of actin bundling proteins: fascin, α-actinin, vasodilator-stimulated phosphoprotein (VASP), and a combination of the focal adhesion proteins talin and vinculin. Each case represents a slightly different mechanism of actin binding. Fascin, a 55 kDa protein, binds to actin through two distinct actin-binding sites, thereby inducing filament cross-links as a monomer47. α-Actinin (110 kDa) forms a dimer which bridges two filaments30,48. Talin (272 kDa) and vinculin (116 kDa) both dimerize, and also require interactions with each other in order to bind and bundle actin filaments39. Here we use a deregulated vinculin mutant (see Supplementary Information). VASP (50 kDa) forms a tetramer, which can link up to four filaments together49. Under all four conditions, the formation of thick filament bundles was observed (Fig. 1b, c). Interestingly, while α-actinin, talin/vinculin, and VASP all produced similar morphologies, fascin bundles take on the most distinctive appearance. These bundles often bend in kinks when their path is obstructed by the membrane, while the other proteins form smoothly curved bundles that can follow the curvature of the encapsulating membrane (Fig. 1c, d). While similar observations have been made in the past, bulk assays will generally be better suited for quantifying these bundle parameters30,50,51,52,53.
After establishing successful encapsulation of actin and its bundling proteins, we modified the approach and linked the actin filaments to the phospholipid bilayer via biotin–neutravidin bonds, similar to previous work on planar supported lipid bilayers19. This requires the incorporation of biotinylated lipids in the vesicle membranes and the addition of both biotinylated g-actin and neutravidin in the encapsulated reaction mix. We tested different fractions of biotinylated lipids, as well as biotinylated actin (Supplementary Figs. 2 and 3) and identified 1% biotinylated lipids and 4% biotinylated actin as suitable amounts, which we used in the following experiments.
### Numerical simulations
Theoretical predictions by Adeli Koudehi et al. have suggested that actin organization depends crucially on confinement and surface attachment54. In order to explore the agreement of our experimental results with these simulations, we adopted their theoretical model. As such, we performed numerical simulations of interacting actin filaments under spherical confinement using Brownian dynamics (see Supplementary Methods)54. Semi-flexible actin filaments were modeled as beads connected by springs, with cross-linking represented by a short-range attraction with spring constant $k_{\mathrm{atr}}$. Polymerization from an initial number of filament seeds was simulated by addition of beads at one of the filament ends (representing the barbed end). The number of seeds was changed to achieve different final filament lengths. Boundary attraction was simulated as short-range attraction to the confining boundary. Simulated maximum intensity projections were performed as in Bidone et al.55.
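To give a concrete feel for this class of model, below is a minimal bead-spring Brownian dynamics sketch of our own devising (not the actual simulation code; all parameter values are placeholders, and the real model additionally includes bending rigidity, the cross-linker attraction $k_{\mathrm{atr}}$, boundary attraction, and polymerization as described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# One filament: N beads connected by stiff springs, in a confining sphere.
N, dt, steps = 20, 1e-5, 1000          # beads, time step (s), iterations
k_spring = 100.0                        # spring constant, pN/um (placeholder)
l0 = 0.1                                # rest length between beads, um
kBT = 4.1e-3                            # thermal energy, pN*um
gamma = 1.0                             # drag per bead, pN*s/um (placeholder)
R_conf = 2.5                            # confining sphere radius, um

pos = np.zeros((N, 3))
pos[:, 0] = l0 * np.arange(N)           # start as a straight filament

for _ in range(steps):
    # Spring forces between neighboring beads.
    bond = pos[1:] - pos[:-1]
    length = np.linalg.norm(bond, axis=1, keepdims=True)
    f_bond = k_spring * (length - l0) * bond / length
    force = np.zeros_like(pos)
    force[:-1] += f_bond
    force[1:] -= f_bond
    # Soft repulsive wall approximating spherical confinement.
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    outside = np.clip(r - R_conf, 0.0, None)
    force -= k_spring * outside * pos / np.maximum(r, 1e-12)
    # Overdamped Brownian step: deterministic drift plus thermal noise.
    noise = rng.normal(size=pos.shape) * np.sqrt(2 * kBT * dt / gamma)
    pos += force / gamma * dt + noise

print(pos.mean(axis=0))                 # filament center of mass after the run
```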
### Membrane attachment shapes actin organization by curvature induction
We performed a series of experiments with the simplest bundling protein fascin to investigate the effects of membrane binding on bundle morphology. We notice that membrane-binding primarily affects the curvature of these bundles: while actin with fascin forms very straight bundles that are just generally confined by the membrane, we see that membrane-bound fascin bundles often adopt the exact curvature of the membrane (Fig. 2a). Figure 2b shows a histogram of the distribution of bundle curvatures in these vesicles. The histogram shows a much broader distribution for unbound bundles, with a maximum at low curvatures, while the maximum for membrane-bound actin bundles is centered around the curvature of the membrane (relative curvature = 1.0).
Despite this difference on a small scale, the general distribution of bundles within the vesicles seems to be largely independent of the presence of actin–membrane linkers. Figure 3a shows a set of conditions with and without membrane binding. We quantified the average actin distribution for each condition and find that (with some exceptions) actin is consistently positioned in close proximity to the membrane, with only minor differences between conditions with and without membrane linkers (Fig. 3c, d). While in previous work even unbundled actin has been seen to be more concentrated at the membrane32, our results indicate that bundles need to be sufficiently long, so that confinement by the vesicle boundary forces them to bend and concentrate at the inner surface. We note that for 2 µM actin (Fig. 3a, top row) even low concentrations of fascin are sufficient to cause this effect, while we do observe that the thickness of the bundles increases with higher concentrations (Supplementary Fig. 5).
At higher actin concentrations (6 µM) and low fascin to actin ratios (3.3 and 6.7%), bundles were shorter and thus more homogeneously distributed in the vesicles when not bound to the membrane (Fig. 3d). Interestingly, we note that membrane-binding affects the threshold at which long actin bundles form: at a fascin to actin ratio of 6.7%, we only observe long bundles when we include membrane linkers (see also Supplementary Fig. 6). These observations agree with corresponding simulations (Fig. 3b).
In our experiments, ring-like structures consistently form at 2 µM actin, while at 6 µM actin, multiple bundles usually arrange themselves into cortex-like structures that do not condense into single rings. In our simulations, we see similar cortex-like morphologies in the early stages, but at longer times these condense into rings, both for low and high actin concentrations (Supplementary Fig. 7). As discussed below, the smaller confinement size we chose due to computation limitations favors ring formation. Another ring promoting factor could be the absence of a simulated maximum bundle thickness, unlike in experiments56.
### Ring formation
Excitingly, and in accordance with theoretical predictions, the most noticeable effect of membrane attachment was an increase in the formation of single actin rings. Although ring formation could still be observed without membrane–actin links, the introduction of membrane binding greatly enhances the probability of actin condensation into one single clearly discernible ring in vesicles. Membrane-bound actin rings have so far not been reported within synthetic vesicles.
Figure 4 highlights this effect of membrane attachment on the formation of actin rings. Figure 4a summarizes ring formation probabilities for three different bundlers, comparing conditions with and without membrane binding. We chose actin and bundler concentrations for which the formation of single rings is already relatively likely (12–35%) even without membrane attachment. In vesicles with membrane-attached actin, probability of ring formation roughly doubles for all bundlers, and reaches up to 80% for actin bundled by vinculin and talin (fluorescence image in Fig. 4c).
In simulations, Adeli Koudehi et al. found that boundary attraction in the case of spherical confinement enhances the probability of ring formation from bundled filaments54. However, the effect of boundary attraction in their work was studied for filament lengths larger than the confining diameter, whereas in experiments, we observed increased ring formation for vesicles and actin concentrations where the opposite should be true. We thus performed new simulations of actin filaments for concentrations chosen as in our experiments (c = 2 µM), and varied their lengths and confinement sizes (Fig. 4b, Supplementary Fig. 11, and Supplementary Movie 5). We selected filament cross-linking simulation parameters that lead to bundle formation without filament sliding and associated bundle compaction ($k_{\mathrm{atr}}$ = 2 pN/µm). Including surface attraction greatly enhanced ring and ring-like structure formation for short filaments (length L = 1.2 µm) in small spheres (radius R = 2.5 µm). We also observed an enhancement of ring formation for filament lengths and sphere sizes comparable to those of our experiments (L = 6 µm as estimated from prior studies33,57,58, R = 5 µm), including when we increased the persistence length of individual actin filaments to simulate cross-linking induced bundle stiffening (Supplementary Fig. 11). Inspired by modeling results implying that the probability of ring formation depends on compartment size54, we analyzed our experimental data to confirm that rings preferably form in smaller vesicles (Supplementary Fig. 12).
### Actomyosin contraction
The rings observed here can be assumed to mimic reorganization of actin that occurs during the last stages of cell division. In order to take this analogy one step further, we included muscle myosin II with the ultimate goal of forming a contractile actomyosin ring. Constriction of non-anchored actin rings was shown by Miyazaki et al., who demonstrated myosin-mediated contraction in a less cell-like system. They used actin bundled by depletion forces in water-in-oil droplets and showed that the behavior reproduced by this system has a striking resemblance to constricting cell division rings33.
In our vesicles, the addition of myosin complicated the formation of single actin bundle rings. We used low concentrations of myosin II, such that the effect of myosin activity on bundling was minimized and motor-induced constriction slow enough to be observed while imaging. Although it appeared that the appropriate assay conditions for homogeneously contracting single rings have not yet been met in our giant vesicles, we recorded the constriction of a membrane-anchored ring-like structure along with membrane deformations. Figure 5a and Supplementary Movie 6 show this example over the course of 2 h. In accordance with our expectations, in this minimal system without further ring-stabilizing components, the constricting actomyosin ring eventually slides along the membrane and collapses into a single condensate on one side of the vesicle, a behavior that has been seen in yeast cells lacking cell walls40. Such an arbitrary local collapse is not too surprising, as coordinated ring constriction in the cell is a highly spatially regulated process involving hundreds of proteins. Clearly, additional cellular machinery is required to stabilize the position of the ring, and membrane geometry and fluidity likely play additional roles. Figure 5a shows how the actomyosin ring initially deforms the vesicle membrane (orange arrows), leading to a furrow-like indentation. The entire time series without overlays is shown in Supplementary Movie 6. Our experiments clearly show that the actin bundles are firmly attached to the inner leaflet of the vesicle membrane and that active forces are exerted by the motor proteins, capable of deforming the vesicle. Figure 5b shows another instance of actomyosin contractions leading to deformations of the vesicle membrane. Unfortunately the above mentioned complications in assay design hampered consistent observations of these membrane deformations.
An additional effect of the contraction of membrane-bound actomyosin is a change in the xy cross section area of the vesicle after contraction. This effect also appears for vesicles without initial furrow constriction. This is likely due to crumpling of the membrane into the actomyosin contraction point, which decreases membrane area (while vesicle volume is largely preserved) and increases membrane tension. As a result, vesicles that are initially slightly deflated become more spherical as a result of actomyosin contraction. Supplementary Figure 14a shows a differential interference contrast (DIC) image of the actomyosin contraction point in Fig. 5a, and more examples of vesicles with shrinking xy cross sections.
### Shaping the membrane compartment
Lipid membranes are highly flexible, and the shape of GUVs is mostly determined by the osmotic pressure inside the vesicle with respect to its environment. If this pressure is low, i.e., vesicles are osmotically deflated, strong deviations from the spherical shape are possible, and additional mechanical determinants, such as external forces or an encapsulated cytoskeleton, induce arbitrary shapes of the vesicles32, as can be seen in Fig. 6a. The experiment we performed in Fig. 6b confirms the role of an artificial cytoskeleton in determining vesicle shape, i.e., stabilizing the shapes of membrane compartments: by imaging deformed cytoskeletal vesicles with increased laser power on our confocal microscope, the actin filaments depolymerize after some time due to photodamage59, relaxing the cytoskeleton-inferred shape determinants and leaving the deflated vesicles without internal support. This leads to dramatic changes in their shape, usually by taking on a spheroid (oblate) shape.
These stabilizing cortices of actin bundles can even protect the vesicles, e.g., against the unfavorable conditions of sample preparation for cryo-electron microscopy, specifically the drying of the sample (removal of the surrounding aqueous phase): Fig. 6c shows a cryo-scanning electron microscopy image of frozen cytoskeletal vesicles. When trying to freeze and image vesicles without an encapsulated actin cortex or with actin bundles that are not attached to the membrane, vesicles rarely survive the process (Supplementary Fig. 15). Although a thorough quantitative assessment of this effect is not within the scope of this study, it confirms previous work that GUVs can be stabilized through a shell of cross-linked material on the membrane of GUVs, not only with unbundled actin60, but also with other proteins61, as well as DNA origami62. Here, we show that heterogeneously distributed, higher-order structures can potentially achieve a similar mechanical effect.
## Discussion
In this work, we succeeded in reconstituting ring-like actomyosin structures in GUVs. With respect to a suitable protein machinery that may serve as a minimal divisome for protocells, this constitutes the first important step toward assembling contractile rings of sufficiently large sizes. To this end, we encapsulated a reaction mix into the vesicles that causes actin to polymerize, bundle, bind to the vesicle membrane, and even contract. We show that the bundle networks can be highly organized and, under many conditions, reproducibly cross-linked into single rings.
Clearly discernible ring formation has been previously shown by Miyazaki et al., who utilized depletion forces through the crowding agent methylcellulose, while confining actin polymers in small water-in-oil droplets33. These rings were shown to contract, but due to the lack of surface attachment in this system, unable to transmit contractile forces to the compartment interface. The formation of rings from biopolymer bundles in confinement due to a minimization of bending energy is known from both, theory54,63 and other experimental systems31,64,65. Here, we achieved similar actin rings bundled by various physiological factors. However, encapsulation in lipid vesicles allowed us to go an important step further and explore the effect of attaching these rings to the compartmentalizing lipid bilayer. In this scenario, ring contraction may be able to transmit a contractile force directly to the membrane, resulting in dramatic shape transformation of the respective compartment, induced by energy dissipation within.
We have thus shown, as a proof of principle, that non-equilibrium myosin-mediated constriction of such ring-like membrane-bound actin structures can be induced in lipid membrane vesicles. These vesicle deformations (Fig. 5a) demonstrate the strength of the membrane anchoring. The final contracted state reveals myosin-induced symmetry breaking, as observed in other actomyosin in vitro systems under confinement18,26,27,66. For example, Tsai et al. encapsulated a contractile actomyosin system in vesicles that condensed into dense clusters26.
Unless membrane area can be expanded at the same time, the osmotic pressure inside a spherical vesicle complicates a cell division-like symmetric constriction in the center of the vesicle, and in the absence of other geometric regulators, the fluidity of the membrane causes the ring-like bundles to "slip" and contract into a cluster in one location. Our experiments indicate that, in order to achieve a binary fission through contraction of a single ring, more spatial determinants are required.
A behavior similar to what we observe can be seen in vivo for the contraction of actomyosin rings in yeast protoplasts (yeast cells that have been stripped of their cell walls)40. Stachowiak et al. beautifully demonstrated that in these spherical cells without cell walls, the contractile actomyosin ring slides along the cell membrane, collapsing into one point at the side of the cells. The absence of a cell wall in fission yeast results in both a loss of their elongated shape and a lack of stabilizing the actomyosin ring in the cell center40.
We conclude that further assay improvement and, very likely, additional functional components will be necessary to accomplish a complete division of a cell-sized vesicle compartment in vitro. Functional studies will allow us to identify a machinery that ensures the placement of a contractile actomyosin ring at a defined site, while invoking other spatial cues to prevent the deflection of the induced contractile forces by surface slipping. The MinDE system, which has previously been shown to target FtsZ rings to the middle of compartments67, and extend this potential of positioning even to functionally unrelated membrane-binding particles68, may be an attractive candidate. Moreover, ring constriction could be more successful in a nonspherical, elongated vesicle; such a confinement shape, however, will likely prevent initial ring formation along the desired constriction site. Further requirements may include mechanisms to generate actin filament bundles of mixed polarity, and to sustain such distribution throughout the constriction process, possibly through turnover of components69.
To summarize, by reconstituting a contractile actomyosin ring-like structure in GUVs, we have made one essential step forward with regard to establishing a minimal system for active membrane vesicle division from the bottom up. Using this protein machinery from eukaryotes, large-size contractile ring structures could be generated and attached to vesicle membranes from the inside to transmit their contractile forces. Our experiments suggest that ring formation, membrane attachment, and contraction are not sufficient for division of these cell-like compartments, as long as the positional stability of the force-inducing machinery on the compartment surface cannot be ensured. However, toward the bottom-up design of a minimal division machinery, this system is an ideal starting point for identifying the additional parameters and components required. Furthermore, the robust and reproducible in vitro assembly methods used here provide a reliable platform for further reconstitution of the key processes of life beyond cell division.
## Methods
### Proteins
The proteins fascin (human, recombinant), α-actinin (turkey gizzard smooth muscle), myosin II (rabbit, m. psoas), actin (alpha-actin skeletal muscle, rabbit), ATTO488-actin (alpha-actin skeletal muscle, rabbit), and biotin-actin (alpha-actin, rabbit skeletal muscle) were purchased from HYPERMOL (www.hypermol.com). VASP was expressed in Drosophila S2 cells, and purified using Ni-NTA affinity purification followed by gel filtration. We use the mutant vinculinN773A,E775A (vinculin2A), which has reduced autoinhibitory interactions70. Purification of talin and vinculin2A was performed as described in detail in our previous publication39. In brief, the His-tagged proteins were expressed in Escherichia coli BL21 (DE3) and purified via Ni-NTA affinity purification. Talin was further purified using ion exchange purification and gel filtration chromatography on a Superdex 200 16/600 column (GE Healthcare) or Superose 6 10/300 column (GE Healthcare) in storage buffer (50 mM HEPES, pH 7.8, 150 mM KCl, 10% glycerol, and 0.1 mM EDTA). Protein purity was assessed via SDS–PAGE and gel filtration. Proteins were quick frozen and stored in aliquots at −80 °C until further use.
### Reaction mix preparation
In all experiments, we used 10% labeled actin (ATTO488-actin). In conditions with membrane-attached actin bundles we used 4% biotinylated actin and 0.17 µM neutravidin. In all cases, actin, labeled actin, and biotinylated actin were resuspended in deionized water and pre-spun at 15,000 × g for 10 min at 7 °C in a tabletop microcentrifuge. The top 75% of the actin solution was then transferred to a new Eppendorf tube and kept on ice for the duration of the experiment.
The actin concentrations we used are within the typical range used in in vitro experiments. However, it should be noted that these concentrations are much lower than the concentrations present in living cells, where a complex regulatory system controls the amount of polymerizable actin.
The reaction mix contained 10 mM imidazole, 50 mM KCl, 1 mM MgCl2, 1 mM EGTA, 0.2 mM ATP, pH 7.5, and 15% iodixanol (from OptiPrep™, Sigma Aldrich). For the experiments shown in all figures and movies except Fig. 6 and Supplementary Movie 7, the solution surrounding the GUVs contained 10 mM imidazole, 50 mM KCl, 1 mM MgCl2, 1 mM EGTA, 0.2 mM ATP, pH 7.5, and 200 mM glucose. For the deflated GUVs in Fig. 6 and Supplementary Movie 7 the outer solution contained a higher glucose concentration (300 mM).
For the experiment in Fig. 5, we used 0.1 µM myosin II.
### Lipids
The preparation of the lipid-in-oil mixture is based on published protocols [27, 71]. We use POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine, Avanti Polar Lipids, Inc.) with 1% DSPE-PEG(2000) biotin (1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[biotinyl(polyethyleneglycol)-2000], Avanti Polar Lipids, Inc.; both 25 mg/ml in chloroform) and add 77 µl thereof to a 10 ml vial with 600 µl chloroform. In experiments with labeled membranes, 3 µl DOPE-ATTO655 (0.1 mg/ml in chloroform) is added. While being mixed on a vortex mixer, 10 ml of a silicone oil and mineral oil (Sigma Aldrich, M5904) mix (4:1 ratio) is slowly added to the lipid solution. Since the lipids are not fully soluble in this mix of silicone oil, mineral oil, and chloroform, the resulting liquid is cloudy.
### Vesicle generation
Vesicles were produced using the cDICE method as described by Abkarian et al. [45] with modifications we described in a previous publication [46]: instead of utilizing petri dishes, we 3D printed the rotating chamber in which the vesicles are generated (inner diameter of chamber: 7 cm, diameter of top opening: 3 cm, height of chamber: 2 mm; CAD file available at https://doi.org/10.5281/zenodo.4555840), printed with Clear Resin on a Formlabs Form 2. A magnetic stirring device (an outdated IKA-COMBIMAG RCH) served as a motor, after the heating unit was removed to expose the motor shaft.
The inner solution is loaded into a syringe (BD Luer-Lock™ 1-ml syringe), which is then placed into a syringe pump system (neMESYS base 120 with neMESYS 290 N) and connected through tubing to a glass capillary (100 µm inner diameter).
A total of 700 µl of the outer solution is pipetted into the rotating chamber, followed by ~5 ml of the lipid-in-oil mixture. The capillary tip is then immersed in the oil phase and the inner phase injected at a flow rate of 50 µl/h for 10 min. The vesicles are withdrawn from the cDICE chamber with a micropipette.
The concentration of the encapsulated protein varies within a certain range. In experiments in which we encapsulated a simple soluble fluorescent dye, we found that the concentrations within the vesicle population follow a log-normal distribution (Supplementary Fig. 4b). We assume that this effect is reduced for a reaction mix containing actin in the process of polymerizing and bundling: during vesicle generation, this vesicle content is much less diffusive and therefore less likely to leave the vesicles.
### Imaging
The vesicles are pipetted into a microtiter plate for imaging (Greiner Bio-One, 384-well glass bottom SensoPlate™), each well passivated beforehand with 50 µl of 5 mg/ml β-casein (Sigma Aldrich) for 20 min.
Imaging is then performed with an LSM 780/CC3 confocal microscope (Carl Zeiss, Germany) equipped with a C-Apochromat, 40×/1.2 W objective. We use PMT detectors (integration mode) to detect fluorescence emission (excitation at 488 nm for ATTO488) and record confocal images.
Z-stack datasets of vesicles contain between 40 and 65 confocal slices (depending on vesicle size) with a slice interval of 0.5 µm, with the exception of time series (Figs. 5 and 6b, and Supplementary Figs. 13 and 14), which contain fewer slices with a larger z interval.
### Image analysis
Image processing and analysis is mostly performed using the software ImageJ/Fiji [72, 73] and SOAX [74, 75]. All images shown in the manuscript are maximum projections of z-stacks of confocal images (see Supplementary Fig. 1). The only exceptions are the images in Supplementary Figs. 2 and 3, which show single confocal images in order to highlight membrane binding. The three-dimensional representations in Supplementary Movies 1–4 are created with the Fiji command “3D Project”.
For the computational, three-dimensional characterization of the actin networks, we generate skeletonized models from selected vesicles in our confocal z-stacks. The networks are identified and extracted with SOAX by active contour methods. In order to optimize the images for the identification of the filaments, the stacks are first deconvolved using the software Huygens (Scientific Volume Imaging) and then further preprocessed using Fiji. Bundle curvature (Fig. 2b) is estimated with SOAX. Membrane proximity (Fig. 3c, d) is calculated with a custom ImageJ script, which is available at https://doi.org/10.5281/zenodo.4555840.
Visualizations of the skeletonized models of actin networks by SOAX, as shown in Fig. 1d and Supplementary Movie 2, are generated in UCSF Chimera [76].
Vesicle sizes were manually measured using a MATLAB (MathWorks) script.
For more details regarding image processing and analysis see Supplementary Method.
### Statistics and reproducibility
Most of the images show particular features that were not reproduced with identical protein concentrations, but that were reproduced under similar conditions and that together reflect robust underlying mechanisms. In total, we performed ≥30 independent experiments with fascin, ≥15 independent experiments with α-actinin, ≥10 independent experiments with talin + vinculin, and three independent experiments with VASP, all of which showed similar actin morphologies, as evidenced by the respective images. All images in Fig. 1 were reproduced at least three times with similar concentrations. Results in Fig. 2 were obtained on four different vesicles for each condition within two experimental runs. Conditions as in Fig. 3 were varied once with a total of 20 conditions and incremental differences between the parameters, which indicates a high degree of reproducibility. Experiments in Fig. 4 were performed two (fascin) or three (α-actinin and talin + vinculin) times. Reproducibility of experiments with myosin was poor, as described in the paper. Figure 5a shows the most “furrow-like” membrane deformation we observed; we captured time series of membrane deformation in four additional cases. Membrane deformations as shown in Fig. 6a were observed on many occasions in ≥5 experimental runs. The result shown in Fig. 6b was repeated many times within ≥3 experimental runs. The cryo-EM experiment shown in Fig. 6c was repeated once with similar results. Experiments shown in Supplementary Figs. 1–4 were performed once. Images in Supplementary Fig. 6 are from three different experimental runs per condition. Images shown in Supplementary Figs. 8–10 were reproduced at least once with similar concentrations. Images in Supplementary Figs. 13–15 were not reproduced.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
http://nbviewer.jupyter.org/url/www.maths.manchester.ac.uk/~vsego/python/09a-analysis_of_algorithms.ipynb
|
Programming with Python
Contents:
1. Analysis of algorithms
Analysis of algorithms
It is not enough that a program works properly; it should also finish its job in a timely manner and with reasonable resource requirements. When analyzing these aspects of an algorithm, we talk about time and space complexity. If it is not specified which one is meant, time complexity is usually implied.
Why do we do it?
Consider the following problem:
We define Fibonacci numbers as follows: $$F_n := \begin{cases} n, & n \in \{0,1\}, \\ F_{n-1} + F_{n-2}, & \text{otherwise}.\end{cases}$$ Compute $F_n$ for a given $n$.
Mathematically, the problem is trivial: we just have to apply the definition. For example, for $n = 5$, we get:
\begin{align*} F_n = F_5 &= F_4 + F_3 \\ &= (F_3 + F_2) + (F_2 + F_1) \\ &= ((F_2 + F_1) + (F_1 + F_0)) + ((F_1 + F_0) + 1) \\ &= (((F_1 + F_0) + 1) + (1 + 0)) + ((1 + 0) + 1) \\ &= (((1 + 0) + 1) + (1 + 0)) + ((1 + 0) + 1) \\ &= 5. \end{align*}
Of course, we can do so for any $n$, but the time this takes might not be very encouraging. The following table shows the results and the timings of an algorithm that is a copy of the above definition (fib_def) and the simple, yet fast one similar to the one we've seen in the lecture about algorithms and variables (fib_fast).
n fib_def(n) fib_fast(n) time(def) time(fast)
1 1 1 0.0000029 0.0000012
2 1 1 0.0000021 0.0000014
3 2 2 0.0000029 0.0000014
4 3 3 0.0000033 0.0000017
5 5 5 0.0000050 0.0000017
6 8 8 0.0000074 0.0000019
7 13 13 0.0000112 0.0000021
8 21 21 0.0000167 0.0000021
9 34 34 0.0000262 0.0000024
10 55 55 0.0000412 0.0000026
11 89 89 0.0000660 0.0000029
12 144 144 0.0001061 0.0000029
13 233 233 0.0001805 0.0000029
14 377 377 0.0002797 0.0000031
15 610 610 0.0004530 0.0000033
16 987 987 0.0007396 0.0000036
17 1597 1597 0.0011926 0.0000038
18 2584 2584 0.0019245 0.0000041
19 4181 4181 0.0030890 0.0000041
20 6765 6765 0.0049942 0.0000043
21 10946 10946 0.0080504 0.0000045
22 17711 17711 0.0130370 0.0000050
23 28657 28657 0.0210469 0.0000055
24 46368 46368 0.0362742 0.0000083
25 75025 75025 0.0549119 0.0000072
26 121393 121393 0.0912960 0.0000086
27 196418 196418 0.1439888 0.0000074
28 317811 317811 0.2361903 0.0000083
29 514229 514229 0.3863642 0.0000093
30 832040 832040 0.6122549 0.0000100
31 1346269 1346269 0.9895782 0.0000112
32 2178309 2178309 1.5994761 0.0000093
33 3524578 3524578 2.5780375 0.0000100
34 5702887 5702887 4.1696584 0.0000100
35 9227465 9227465 6.7392943 0.0000098
36 14930352 14930352 10.9062860 0.0000100
37 24157817 24157817 17.6386068 0.0000098
38 39088169 39088169 28.5867844 0.0000098
39 63245986 63245986 46.3197675 0.0000119
40 102334155 102334155 75.2086654 0.0000117
As you can see, for $n = 40$, fib_def takes over a minute to produce the result, while fib_fast takes less than one millisecond. It is worth noting that such small timings are not very precise, which is why the numbers in the rightmost column are not always increasing, but the order of magnitude is correct.
The number of steps that fib_def takes grows quickly. On the same computer, computing fib_def(100) would take approximately 6.7 million years, while fib_fast(100) takes only 0.0203 milliseconds. Mathematics may not care, but anyone using your program (including you) will.
A program that was used to produce the above results can be downloaded here.
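In case that link is unavailable, here is a minimal sketch of what such a program might look like (our reconstruction, not the original): fib_def follows the definition directly, fib_fast iterates the recurrence, and both are timed with perf_counter.

from time import perf_counter

def fib_def(n):
    """Fibonacci numbers computed directly from the definition."""
    if n in (0, 1):
        return n
    return fib_def(n - 1) + fib_def(n - 2)

def fib_fast(n):
    """Fibonacci numbers computed by iterating the recurrence."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 31):
    st = perf_counter()
    r1 = fib_def(n)
    t1 = perf_counter() - st
    st = perf_counter()
    r2 = fib_fast(n)
    t2 = perf_counter() - st
    print("{:3d} {:10d} {:10d} {:.7f} {:.7f}".format(n, r1, r2, t1, t2))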
So, why does the computation by definition get so slow so quickly? Without much detail, this can be shown quite simply. Let us denote the number of additions (which corresponds to time) needed to compute $F_n$ as $T_n$. The exact value of $T_n$ is simple to obtain from the definition of $F_n$:
$$F_n := \begin{cases} n, & n \in \{0,1\}, \\ F_{n-1} + F_{n-2}, & \text{otherwise},\end{cases} \quad\quad T_n := \begin{cases} 0, & n \in \{0,1\}, \\ T_{n-1} + 1 + T_{n-2}, & \text{otherwise}.\end{cases}$$
Disregarding the trivial cases $n=0,1$, what can we say about the speed of growth of $T_n$? Obviously, $T_i > T_j$ for all $i > j$, i.e., $T$ grows. So, $$T_{n+2} = T_{n+1} + T_n > T_{n} + T_{n} = 2T_{n}.$$ In other words, when $n$ grows only by 2, the time $T_n$ needed to compute $F_n$ more than doubles!
And, indeed, if you look at the time needed to compute $F_{38}$ and the time needed to compute $F_{40}$, the latter is more than double the former ($28.59$ seconds vs. $75.21$ seconds: $2.63$ times more time!).
This goes on at a similar rate and it would take (much) more than $2^{(100-40)/2} = 2^{30} \approx 10^9$ times longer to compute $F_{100}$ than it did to compute $F_{40}$ (using the same algorithm on the same computer, of course). A better estimate is roughly $2.6^{30} \approx 2.8 \cdot 10^{12}$ times more time to compute $F_{100}$ than to compute $F_{40}$.
How should we NOT compare two algorithms?
Take a stopwatch, run the program, measure the time (just like we did above ☺).
Why not like this?
The speed of a program's execution depends on many factors, among which the quality of an algorithm is just one. Here are just a few things that often affect this speed:
1. The hardware. Some processors will be able to do some things directly, while others will have to simulate them. The differences in speed of communication between various components of the computer and even inside the processor itself are more than significant. There are also cache speeds and sizes, memory speed and sizes, etc.
2. The operating system. How well does it use all that the hardware has to offer? How well does it parallelize and prioritize tasks, since there are other processes running along with your program?
3. The libraries used. A library is a collection of functions and other objects, often used as a black box. Simply using a different library can speed up or slow down a program as much as several times.
4. The programming language and even its implementation. An interpreter/compiler translates the commands to the so called machine language. This can be done in many ways, with more or less overhead, with various levels of efficiency, etc. A Python on Windows may run a certain program in more or less time than the same program on Linux, even if the input data is identical.
5. The data. When running a program, it is impossible to test all possible inputs. How representative is the chosen data? How much worse can the timings get for carefully chosen data?
How to do it properly
Ideally, by counting the basic steps of the algorithm (not even a program, as this already depends on the choice of a language!). While that sounds simple enough, it is next to impossible to do for all but the simplest of algorithms. There are several reasons for this:
1. The basic steps (arithmetic operations, assignments, comparisons,...) are often "hidden" in modern languages.
2. The number of steps almost always depends on the data.
Here is a very simple example:
Write a function that takes one integer and returns its number of digits (return $1$ for $n = 0$).
Here are three solutions:
In [1]:
from math import floor, log10

def count_digits_log(n):
    """Return the number of digits of an integer n using logarithms."""
    if n == 0:
        return 1
    return floor(log10(abs(n))) + 1

def count_digits_str(n):
    """Return the number of digits of an integer n using string conversion."""
    return len(str(abs(n)))

def count_digits_loop(n):
    """Return the number of digits of an integer n using a loop."""
    if n == 0:
        return 1
    n = abs(n)
    res = 0
    while n > 0:
        res += 1
        n //= 10
    return res

n = int(input("n = "))
print("count_digits_log({}) = {}".format(n, count_digits_log(n)))
print("count_digits_str({}) = {}".format(n, count_digits_str(n)))
print("count_digits_loop({}) = {}".format(n, count_digits_loop(n)))
n = 1234567
count_digits_log(1234567) = 7
count_digits_str(1234567) = 7
count_digits_loop(1234567) = 7
Let us determine the number of the basic steps.
• count_digits_log
We have a comparison and, if n = 0, nothing more. Otherwise, we have an absolute value of an integer, a logarithm, and a rounding. While changing a sign and rounding a number are quite simple operations, computing a logarithm is definitely not. Whatever the steps of this function are, they are unlikely to be the same for all the programming languages, their implementations, and various libraries.
• count_digits_str
This whole function is based on how Python treats strings, which is very different between languages. Even if we ignore that, what does str actually do? It is probably something similar to count_digits_loop, because numbers are stored in the computer in binary (zeros and ones), so they cannot be directly split into decimal digits (zero through nine).
Somewhat surprisingly, len is highly dependent on the language:
• A string in C is marked by the special character at the end, meaning that its len (called strlen) must check each character in the string to find its length.
• In Pascal, the length is simply read from the zeroth character of the string (imposing the limit of 255 characters on the length of strings).
• In Python, it changed through the versions (the last change being introduced as recently as Python 3.3, as proposed by PEP 393). You don't need to know these details; only that it is nowhere near as simple as it looks.
• count_digits_loop
This one is the cleanest in terms of counting basic steps, as we use only basic operations (assignment, comparison, increment, division). However, notice that the while loop has one step for each digit of n, meaning that the total number of steps depends on the number of digits of n.
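To see this dependence concretely, here is a small instrumented variant (ours, not from the original notebook) that reports the number of loop iterations next to the result:

def count_digits_loop_steps(n):
    """Return (number of digits of n, loop iterations used)."""
    if n == 0:
        return 1, 0
    n = abs(n)
    res = steps = 0
    while n > 0:
        res += 1
        steps += 1
        n //= 10
    return res, steps

for n in (7, 1234567, 10**20):
    print(n, count_digits_loop_steps(n))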
Obviously, our approach is too pedantic. Relating the number of steps to the data itself is unnecessarily convoluted.
The first thing to do in order to simplify this process is to relate the number of the operations needed to execute an algorithm with the size of the data, instead of the data itself.
Now, what is the size of the input?
This is left to the one doing the analysis to decide, as it is a thing highly specific to the problem that the analyzed algorithms are solving.
Here are a few examples:
• When we are doing something with a single number (like counting its digits), the only reasonable measures are the number itself and the number of its digits.
However, if we denote the number as $n$ and - for simplicity - assume that it is not zero, then its number of digits is $\lfloor \log_{10} |n| \rfloor + 1$, so these two measures are clearly related and it doesn't really matter which one we use.
• For various list algorithms, the size of the list is a good measure.
It is a single number describing the size of the data while disregarding the individual elements of the list. Assuming that all the elements of the list have more-or-less the same size (say, all are numbers), the size of all the elements of the list together would be length_of_the_list * size_of_an_element.
Note that if the elements radically vary in size, we might want to use some other measure. For example, a list of different sized lists is not described well enough by the number of those lists. Instead, a good measure might be the sum of lengths of all those lists or something similar.
• For algorithms dealing with matrices of order $n$, one could consider using the number of elements as a describing value. However, this is just $n^2$, which is clearly related to the order $n$ of the matrices, so we often use $n$ instead of $n^2$.
The most important thing to remember here is that we are comparing different algorithms that solve the same problem (and take the same input). This means that we can choose the measure of the size of the input, but the choice must be the same for all the compared algorithms.
In other words, if we have two algorithms working with matrices of order $n$, we can take the input size to be either $n$ or $n^2$, but we must make the same choice for both of the algorithms. Taking $n$ for one algorithm and $n^2$ for the other will almost never produce a meaningful result.
This is very much like comparing lengths: two roads maintain the same ratio of lengths regardless of the measure we use (miles, kilometers, millimeters, light years,...).
From now on, we shall denote the size of the input as $n$, unless emphasised to be different.
Even with the simplified "observe only the size of the input" approach, it is very hard to find the exact number of operations needed to execute an algorithm.
For example, let's say that we have two algorithms, with the following number of operations for an input of size $n$:
• Algorithm 1: $f(n) = 2^n + n^3$ operations,
• Algorithm 2: $g(n) = 10^{10} n^7$ operations.
For $n = 1$, Algorithm 1 will need $3$ operations, while Algorithm 2 will need $10^{10}$ operations. So, it seems that Algorithm 1 is faster, especially due to the ridiculously big constant $10^{10}$ in $g(n)$. However, this is only true for $n \le 77$. For $n > 77$, Algorithm 1 will need more steps than Algorithm 2.
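A quick check of that crossover (a throwaway computation of ours, not part of the analysis):

def f(n):
    return 2**n + n**3      # Algorithm 1

def g(n):
    return 10**10 * n**7    # Algorithm 2

n = 1
while f(n) <= g(n):
    n += 1
print(n)  # prints 78, the first n for which Algorithm 1 needs more steps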
So, how best to compare functions $f$ and $g$ as the representatives of the complexity of Algorithms 1 and 2, respectively?
The main concern of this analysis is what happens with big data. There is not much concern whether a solution to some small problem is reached in a millisecond or two. But there is a big difference between 75 seconds and a millisecond, or millions of years and milliseconds (as seen in the Fibonacci example).
For that reason, we don't really need the exact number of steps. Instead, we are usually concerned with the asymptotic behaviour of algorithms, i.e., in the approximate number of steps as the input size grows limitlessly ($n \to \infty$).
The most often used measure is described by the big O notation:
Let f and g be two functions defined on some subset of the real numbers. One writes
$$f(x) = O(g(x)) \text{ as } x \to \infty$$
if and only if there is a positive constant $M$ such that for all sufficiently large values of $x$, the absolute value of $f(x)$ is at most $M$ multiplied by the absolute value of $g(x)$.
That is, $f(x) = O(g(x))$ if and only if there exists a positive real number $M$ and a real number $x_0$ such that
$$|f(x)| \le M |g(x)| \text{ for all } x \ge x_0.$$
In layman's terms, albeit a bit imprecise, this means that
$f(x) = O(g(x))$ if and only if there is $x_0$ after which $f$ grows at most as quickly as $g$.
In all the generality, this sounds confusing. However, it is quite simple:
• $n = O(n)$,
• $n = O(n+1)$,
• $n = O(n-1)$,
• $n-1 = O(n)$,
• $n+1 = O(n)$,
• $n = O(n^2)$,
• $n^2 \ne O(n)$,
• $n^2 + 10^{100}n = O(n^2)$,
• $10^{10^{10}} n^{100} = O(2^n)$,
• $2^n \ne O(10^{10^{10}} n^{100})$,
• $10^{10^{10}} n^k = O(n^k)$ for all $k \in \mathbb{N}$,
• $\log(n) = O(p(n))$ for any non-constant polynomial $p$,
• $p(n) \ne O(\log(n))$ for any non-constant polynomial $p$.
These are very simple to read in a somewhat informal manner. For example, the last two:
Logarithm grows at most as fast as a polynomial, but a polynomial grows faster than a logarithm.
So, if $n$ denotes the size of our input, the big O notation will help us compare two algorithms by stripping all irrelevant parts from the estimates of their numbers of the basic operations.
Let us denote $$f(n) = O(g(n)), \quad \text{but } g(n) \ne O(f(n))$$ as $O(f(n)) < O(g(n))$, i.e., that $g$ grows asymptotically strictly faster than $f$.
The following is true: \begin{align*} O(1) &< O(\operatorname{\underbrace{\log\cdots\log}_{k}}n) < \cdots < O(\log n) \\ &< O(\sqrt[a]{n}) < O(\sqrt[b]{n}) \\ &< O(p(n)) < O(q(n)) \\ &< O(2^n) < O(3^n) < \cdots \\ &< O(n!) \\ &< O(n^n), \end{align*} where $k \ge 2$, $a > b$, and $p$ and $q$ are real polynomials such that $\deg p < \deg q$.
So, the algorithms with constant complexity ($O(1)$) are, quite expectedly, the fastest ones. Algorithms with logarithmic complexity ($O(\log n)$) are considered better than those of linear complexity ($O(n)$), which are better than those of nonlinear polynomial complexity ($O(n^k)$), where a lower degree is better, and all of these are considered better than algorithms of exponential complexity ($O(b^n)$), where a smaller base is better.
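To get a feeling for this hierarchy, here is a small side-by-side tabulation (our own throwaway snippet, not part of the original notebook):

from math import log, sqrt

for n in (10, 20, 40):
    print("n = {:2d}: log n = {:4.1f}, sqrt n = {:4.1f}, n^2 = {:5d}, 2^n = {:14d}".format(
        n, log(n), sqrt(n), n**2, 2**n))

Even for these tiny inputs, $2^n$ has already left the polynomials far behind.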
Combining algorithms
If algorithms $A_1$ and $A_2$ have respective complexities $f$ and $g$ such that $O(f(n)) < O(g(n))$, but $A_1$ is significantly slower for smaller inputs, it is desirable to use both of them, picking the right one depending on the data. So,
if an_appropriate_condition_about_input:
    result = function_that_implements_A2(data)
else:
    result = function_that_implements_A1(data)
A common example of this is matrix multiplication: the Strassen algorithm is asymptotically faster, but on smaller matrices it is actually slower than conventional matrix multiplication.
There is a theoretically even faster matrix multiplication algorithm, the Coppersmith–Winograd algorithm. However, it is not used in practice, as it pays off only on matrices that are too big for modern computers.
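To make the pattern concrete, here is a small self-contained sketch of ours (not from the original notebook): a merge sort that hands short inputs to insertion sort, the same trick used by practical sorting routines such as Python's built-in Timsort. The cutoff 32 is an arbitrary assumption; real implementations tune it empirically.

def insertion_sort(a):
    """O(n^2) sort that is nevertheless very fast on short lists."""
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

CUTOFF = 32  # assumed threshold, not a tuned value

def hybrid_sort(a):
    """O(n log n) merge sort that delegates short inputs to insertion sort."""
    if len(a) <= CUTOFF:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    # merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(hybrid_sort([5, 2, 9, 1, 5, 6]))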
Another example of combining algorithms is when some extra properties can be exploited. For example, the matrix algorithms that exploit some structure:
if is_symmetric(M):
    eigenvalues = eigenvalue_solver_for_symmetric_matrices(M)
else:
    eigenvalues = eigenvalue_solver_for_general_matrices(M)
Notice that the test is_symmetric becomes a part of this new, combined algorithm, so its speed must be taken into account as well. If the test is too complex, it might not pay off to combine algorithms, but instead just use the more general one.
Of course, more than two algorithms can be combined.
An example
Problem: Write a function that checks if an integer is prime or not.
Recall that a number is prime if it is greater than 1 and divisible only by 1 and itself.
In [2]:
def is_prime_def(n):
    """Returns True if an integer n is a prime and False otherwise."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

def is_prime_sqrt(n):
    """Returns True if an integer n is a prime and False otherwise."""
    if n < 2:
        return False
    d = 2
    while d*d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
n = int(input("n = "))
bool2text = { False: "is not", True: "is" }
print("is_prime_def says that number {} {} a prime.".format(n, bool2text[is_prime_def(n)]))
print("is_prime_sqrt says that number {} {} a prime.".format(n, bool2text[is_prime_sqrt(n)]))
n = 1234577
is_prime_def says that number 1234577 is a prime.
is_prime_sqrt says that number 1234577 is a prime.
Let us compare these two algorithms. First, a short description of their main loops:
In is_prime_def, the main loop goes from $2$ to $n-1$ and checks if $n$ is divisible by any of these numbers. It stops and returns False if it is.
In is_prime_sqrt, the main loop goes from $2$ to $\lfloor\sqrt{n}\rfloor$ and checks if $n$ is divisible by any of these numbers. It stops and returns False if it is.
Let us now compare the number of steps that each of these algorithms (programs, actually) perform:
• If $n < 2$: both perform only 1 comparison.
• If $n \ge 2$ is composite:
Denote the smallest positive divisor greater than 1 as $p$.
• is_prime_def: 1 comparison (n<2), $p-1$ assignments and increments (implicitly in for), $p-1$ divisions (n%d), and $2(p-1)$ comparisons (one implicitly in for and n%d == 0 for each step of the loop).
• is_prime_sqrt: 1 comparison (n<2), 1 assignment (d = 2), $p-1$ assignments and increments d+=1, $p-1$ multiplications d*d, $2(p-1)$ comparisons (d*d <= n and n%d == 0 for each step of the loop), $p-1$ divisions (n%d).
• If $n \ge 2$ is prime:
• is_prime_def: 1 comparison (n<2), $n-1$ assignments and increments (implicitly in for), $n-1$ divisions (n%d), and $2(n-1)$ comparisons (one implicitly in for and n%d == 0 for each step of the loop).
• is_prime_sqrt: 1 comparison (n<2), 1 assignment (d = 2), $\lfloor\sqrt{n}\rfloor-1$ assignments and increments d+=1, $\lfloor\sqrt{n}\rfloor-1$ multiplications d*d, $2(\lfloor\sqrt{n}\rfloor-1)$ comparisons (d*d <= n and n%d == 0 for each step of the loop), $\lfloor\sqrt{n}\rfloor-1$ divisions (n%d).
So, which one is faster (i.e., has less steps)?
• If $n < 2$: there is no difference.
• If $n \ge 2$ is composite:
• is_prime_def has $5(p-1)+1$ operations,
• is_prime_sqrt has $6(p-1)+2$ operations,
• If $n \ge 2$ is prime:
• is_prime_def performs $5(n-1)+1$ operations,
• is_prime_sqrt: performs $6(\lfloor\sqrt{n}\rfloor-1)+2$ operations.
So, depending on $n$, one or the other is faster. However, notice that
$$6(p-1) + 2 \le 10(p-1) + 2 = 2(5(p-1)+1),$$
so $6(p-1)+2 = O(5(p-1)+1)$. However,
$$6(\lfloor\sqrt{n}\rfloor-1)+2 = O(5(n-1)+1), \quad \text{but} \quad 5(n-1)+1 \ne O(6(\lfloor\sqrt{n}\rfloor-1)+2).$$
In other words, if $n$ is composite, the algorithms have the same complexity; if $n$ is prime, is_prime_sqrt has a strictly lower complexity than is_prime_def.
So, which is better on average?
This is actually impossible to say for almost any practical purpose. How many numbers that we use these algorithms on will be prime and how many will be composite?
Statistics can be of help here, but only if we have very solid information on the input data: the distribution of primes and composites in it, plus their sizes (as the number of steps depends on that).
In other words, to find an average number of steps, we'd need to first solve the problem, by which time it is too late to pick the faster of the two algorithms.
Luckily, it is more often useful to check the worst case than the average one, because we are interested in a guarantee of how fast our program will run.
This simplifies things a lot, because it is very clear that the worst case scenario in our example is to let both of the loops finish. In other words,
• the (worst case) complexity of is_prime_def is $O(n)$, while
• the (worst case) complexity of is_prime_sqrt is $O(\sqrt{n})$,
which makes is_prime_sqrt the "faster" of the two (not in all cases, but in those that are the most important).
So, at the end of the day, we are interested in the order of the magnitude of the number of the operations that an algorithm performs in its worst case. Even though the above analysis is lengthy and quite complicated, knowing what we actually need (and why), this is trivial to compute:
1. Everything outside of the loops performs once, so it doesn't affect the complexity.
2. Everything in the outermost loop is performed as many times as the loop has steps.
3. Everything in any loop directly inside the outermost one is performed $s_1 \cdot s_2$ times, where $s_1$ and $s_2$ are the number of steps in these two loops.
etc.
Beware of the function calls! Some functions may contain loops themselves and this must be accounted for!
So, what is usually enough to check is the number of runs of the innermost loop. However, be very careful that something more complex is not "hidden" somewhere. For example, let matrix1 and matrix2 be matrices of order n. Observe the following algorithm sketch:
for i in range(n):
    something = matrix1 * matrix2
    for j in range(n):
        something_else = number + number
Here, we have $n$ matrix multiplications and $n^2$ number additions. However, the number of operations is not of the order of magnitude $n^2$, because each matrix multiplication takes $n^3$ operations which, when performed $n$ times, gives us an order of magnitude of $n^4$ operations.
However, we are comparing algorithms here. We can count more complex operations (for example, matrix multiplications instead of number additions), but then we have to do the same for all the compared algorithms. In other words, we can say:
• Algorithm 1 is faster because it has $n^4$ operations and Algorithm 2 has $n^5$, or
• Algorithm 1 is faster because it has $n$ matrix multiplications and Algorithm 2 has $n^2$,
but it is wrong to say
• Algorithm 1 is slower because it has $n^4$ operations and Algorithm 2 has $n^2$ matrix multiplications.
So, a quick estimate for our example above (i.e., how we usually do it):
In [3]:
def is_prime_def(n):
    """Returns True if an integer n is a prime and False otherwise."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

def is_prime_sqrt(n):
    """Returns True if an integer n is a prime and False otherwise."""
    if n < 2:
        return False
    d = 2
    while d*d <= n:
        if n % d == 0:
            return False
        d += 1
    return True
n = int(input("n = "))
bool2text = { False: "is not", True: "is" }
print("is_prime_def says that number {} {} a prime.".format(n, bool2text[is_prime_def(n)]))
print("is_prime_sqrt says that number {} {} a prime.".format(n, bool2text[is_prime_sqrt(n)]))
n = 1234577
is_prime_def says that number 1234577 is a prime.
is_prime_sqrt says that number 1234577 is a prime.
Observation:
is_prime_def: the loop has $n$ steps ($n-1$ actually, but that is the same order of magnitude as $n$), each containing only basic operations (however many, as long as it is at most a constant, which is $5$ here).
is_prime_sqrt: the loop has $\sqrt{n}$ steps (again, only the order of magnitude; it doesn't even matter that it's not an integer!), each containing only basic operations (however many, as long as it is at most a constant, which is $6$ here).
How we actually count:
is_prime_def: the loop has $n$ basic operations.
is_prime_sqrt: the loop has $\sqrt{n}$ basic operations.
Since $O(\sqrt{n}) < O(n)$, is_prime_sqrt has a lower (computational) complexity.
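A quick, hedged way to see this in practice (a throwaway benchmark of ours; the exact numbers depend on the machine) is to time the two functions defined above on a big prime, their common worst case:

from timeit import timeit

n = 1234577  # a prime, i.e., the worst case for both functions
print("is_prime_def: ", timeit(lambda: is_prime_def(n), number=10))
print("is_prime_sqrt:", timeit(lambda: is_prime_sqrt(n), number=10))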
Note that this doesn't say that is_prime_sqrt cannot be made faster. By using a single sqrt, we can remove d*d (which repeats in all steps of the loop):
In [4]:
from math import floor, sqrt
def is_prime_sqrt(n):
    """Returns True if an integer n is a prime and False otherwise."""
    if n < 2:
        return False
    for d in range(2, floor(sqrt(n)) + 1):
        if n % d == 0:
            return False
    return True
n = int(input("n = "))
bool2text = { False: "is not", True: "is" }
print("is_prime_sqrt says that number {} {} a prime.".format(n, bool2text[is_prime_sqrt(n)]))
n = 1234577
is_prime_sqrt says that number 1234577 is a prime.
This one has the same complexity as the previous version, but it is somewhat faster.
As always, here is a more Pythonic way to check if a number is prime:
In [5]:
from math import floor, sqrt
def is_prime_py(n):
    """Returns True if an integer n is a prime and False otherwise."""
    return (n >= 2) and all(n % d != 0 for d in range(2, floor(sqrt(n)) + 1))
n = int(input("n = "))
bool2text = { False: "is not", True: "is" }
print("is_prime_py says that number {} {} a prime.".format(n, bool2text[is_prime_py(n)]))
n = 1234577
is_prime_py says that number 1234577 is a prime.
Friendly numbers
We say (just for the purpose of this problem) that two natural numbers $a \ne b$ are friendly if $a$ equals the sum of all natural divisors of $b$ except $b$ itself and $b$ equals the sum of all natural divisors of $a$ except $a$ itself.
Problem: For a given $n$ write all pairs of friendly numbers smaller than or equal to $n$, each of them once (say, $a < b$).
As one should always do, let us analyze the problem before writing any code.
Denote the sum of all divisors of $n$ except the $n$ itself as divisors_sum(n). This is a function that we can write quite easily.
What we now need is:
For each $a \in \{1,\dots, n\}$:
For each $b \in \{a+1,\dots, n\}$:
Check that $a = {\tt divisors\_sum}(b)$ and $b = {\tt divisors\_sum}(a)$; print if True.
The code should be trivial:
In [3]:
from time import time
def divisors_sum(n):
    """Returns the sum of all natural divisors of a natural number n except n itself."""
    res = 0
    for d in range(1, n):
        if n % d == 0:
            res += d
    return res

# More Pythonic version
def divisors_sum_py(n):
    """Returns the sum of all natural divisors of a natural number n except n itself."""
    return sum(d for d in range(1, n) if n % d == 0)

n = int(input("n = "))
st = time()
for a in range(1, n+1):
    for b in range(a+1, n+1):
        if a == divisors_sum(b) and b == divisors_sum(a):
            print((a, b))
print("Done in {:.6f} seconds.".format(time() - st))
n = 500
(220, 284)
Done in 3.520527 seconds.
Remark: Notice that print has two pairs of parentheses:
print((a, b))
This is because we want to print a tuple, so the outer parentheses belong to print, while the inner ones define the tuple (a, b) that is printed in the form appropriate for printing pairs of numbers.
This is equivalent to
print("({}, {})".format(a, b))
but, obviously, much simpler.
The complexity of this algorithm is $O(n^3)$:
• the a-loop runs n times,
• the b-loop runs n-a times for each step of the a-loop, which gives the following total (during one run of the program): $$(n-1)+(n-2)+\cdots+1+0 = n(n-1)/2 = \frac{1}{2}n^2 - \frac{1}{2}n = O(n^2).$$
• divisors_sum(x) has x steps for each of the two calls in each step of the b-loop. Just as in the previous item, this brings us to a grand total of $O(n^3)$ steps.
The exact number of operations is, of course, smaller than $n^3$ (because the b-loop doesn't run $n^2$ times, but "only" $n(n-1)/2 < n^2$ times). However, it asymptotically behaves like $n^3$, which means that the program will take approximately $2^3 = 8$ times longer for $2n$ than it takes for $n$.
Can we do better?
Notice the following:
For each number $a$, there is only one number that can possibly be its friend. That is, of course,
$$b := {\tt divisors\_sum}(a).$$
So, why check for all $b \in \{a+1,\dots,n\}$?
We can achieve the same like this:
For each $a \in \{1,\dots, n\}$:
Define $b := {\tt divisors\_sum}(a)$.
Check that $a < b$ and $a = {\tt divisors\_sum}(b)$; print if True.
Translated from English to Python:
In [4]:
n = int(input("n = "))
st = time()
for a in range(1, n+1):
    b = divisors_sum(a)
    if a < b and a == divisors_sum(b):
        print((a, b))
print("Done in {:.6f} seconds.".format(time() - st))
n = 500
(220, 284)
Done in 0.015062 seconds.
The complexity of this program is $O(n^2)$ (verify that this is correct!), meaning that solving the problem for $2n$ will take roughly $2^2 = 4$ times longer than for $n$, which is much better than $O(n^3)$ that we had before.
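One rough way to verify the $O(n^2)$ claim empirically (our own throwaway check, reusing divisors_sum from above; timings are machine-dependent and noisy) is to double $n$ and look at the ratio of running times:

from time import perf_counter

def run(n):
    """Time the improved friendly-numbers search for a given n."""
    st = perf_counter()
    for a in range(1, n + 1):
        b = divisors_sum(a)
        if a < b and a == divisors_sum(b):
            pass  # a pair was found; suppress output while timing
    return perf_counter() - st

t1, t2 = run(500), run(1000)
print(t2 / t1)  # roughly 4 for an O(n^2) algorithm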
Always analyze your problem before writing any code that solves it!
Sure, this can "lose" you a few minutes, but in the long run your programs will be faster, more compact, probably less buggy, and easier to maintain.
Prime factors of a number
The analysis can get trickier than what we've seen above. In the following problem, the loose "worst case analysis" we've seen above will not reflect how big the difference between the two algorithms is; it takes some common sense (or a very hard formal analysis) to observe that difference.
Problem: Write a function that prints all the prime factors of a given integer, each of them once.
Mathematically, the solution is as follows:
Check all the numbers between 2 and $|n|$. For those by which $n$ is divisible, check if they are primes. If they are, print them.
Translated to Python, we get:
In [5]:
from time import time
from math import floor, sqrt
def is_prime_sqrt(n):
    """Returns True if an integer n is a prime and False otherwise."""
    if n < 2:
        return False
    for d in range(2, floor(sqrt(n)) + 1):
        if n % d == 0:
            return False
    return True

def print_prime_factors_def(n):
    """Prints all the prime factors of n, each of them once."""
    for p in range(2, abs(n)+1):
        if n % p == 0 and is_prime_sqrt(p):
            print(p)
n = int(input("n = "))
print("The prime factors of {} are:".format(n))
st = time()
print_prime_factors_def(n)
print("Total time:", time() - st)
n = 1234567
The prime factors of 1234567 are:
127
9721
Total time: 0.12981295585632324
What is the complexity of this algorithm?
Since we are using $|n|$ wherever the sign matters, we can assume that $n \ge 0$ without loss of generality, to make the analysis easier to follow.
The function's loop always has $n - 1$ steps. Each step has a division, and some steps also check whether the number is a prime. Those checks themselves have complexity $O(\sqrt{p})$, which can be replaced by $O(\sqrt{n})$ to avoid dependence on the exact data.
If we were checking whether $p$ is prime or not for every $p$ (and not just for divisors of $n$), the complexity of our program would have been $O(n\sqrt{n})$.
However, we are doing this check only for the divisors of $n$, so our complexity is between $O(n)$ and $O(n\sqrt{n})$. In layman's terms, this means that our algorithm is slower than the linear one, but not by much. It remains open by how much.
For a different algorithm, let us analyze the problem a bit.
Let $n = s p_1^{k_1} p_2^{k_2} \cdots p_r^{k_r}$, where $s \in \{-1, 1\}$ is a sign, $p_i$ are primes such that $p_1 < p_2 < \dots < p_r$, and $k_i \in \mathbb{N}$, be a factorization of $n$ into prime factors.
What is the smallest number $x > 1$ such that $x | n$ ($x$ divides $n$, i.e., $n$ is divisible by $x$)?
It is, of course, $x = p_1$. Our algorithm doesn't need to verify that it is a prime, because we know that it is.
Now, let us define $n' := n / p_1^{k_1} = s p_2^{k_2} \cdots p_r^{k_r}$. What is the smallest $x > 1$ such that $x|n'$?
Of course, this is $p_2$ which is also a prime.
If we repeat this until nothing is left (i.e., we get $n' = s \in \{-1, 1\}$), we will get all the prime factors of $n$.
Here is the code, with a minor change of dropping the sign at the beginning:
In [6]:
from time import time
def print_prime_factors_div(n):
    """Prints all the prime factors of n, each of them once."""
    p = 2
    n = abs(n)
    while n > 1:
        if n % p == 0:
            print(p)
            while n % p == 0:
                n //= p
        p += 1
n = int(input("n = "))
print("The prime factors of {} are:".format(n))
st = time()
print_prime_factors_div(n)
print("Total time:", time() - st)
n = 1234567
The prime factors of 1234567 are:
127
9721
Total time: 0.0015151500701904297
The worst case scenario here is $O(n)$, if the number is prime, which is only slightly better than the previous algorithm.
But, as the numbers grow, the ratio of the prime numbers in the set of all the observed numbers falls towards zero (see, for example, Prime number theorem).
Now, the more accurate estimate of the number of steps here is $O(p_r + k_1 + k_2 + \cdots + k_r)$, which is, on average, significantly smaller than $n$. We shall not go into detail, but consider numbers like $n = 2p$ or $n = 3p$ for some prime number $p$: for them, $p_r = n/2$ or $p_r = n/3$ respectively. This is still linear, but much smaller than $n$.
If we try to measure the times for the above problems, we shall notice that for big composite numbers (for example, 1234567) the second algorithm is faster (usually several times). However, for big primes (for example, 1234577), the second algorithm is a bit slower!
This can be further improved, at the expense of some code elegance. For example:
In [10]:
from time import time
from math import floor, sqrt

def print_prime_factors_div(n):
    """Prints all the prime factors of n, each of them once."""
    p = 2
    n = abs(n)
    n2 = floor(sqrt(n))
    while p <= n2:
        if n % p == 0:
            print(p)
            while n % p == 0:
                n //= p
            n2 = floor(sqrt(n))
        p += 1
    if n > 1:
        print(n)
n = int(input("n = "))
print("The prime factors of {} are:".format(n))
st = time()
print_prime_factors_div(n)
print("Total time:", time() - st)
n = 1234567
The prime factors of 1234567 are:
127
9721
Total time: 0.00019025802612304688
Of course, now the complexity of the algorithm is $O(\sqrt{n})$ for primes and $O(\sqrt{p_r} + k_1 + k_2 + \cdots + k_r)$ for composites. The worst case here is, of course, that $n$ is prime, so our algorithm has the worst-case complexity $O(\sqrt{n})$.
Note that the above algorithm with sqrt is not reliable for truly large numbers. In Python, integers can be as big as we want (most other languages have a limit here), but real numbers have a fixed precision of roughly 16 significant digits. This means that the square root of any number with more than about 32 digits will be imprecise, and the algorithm will thus be prone to errors.
To overcome this problem, one would have to implement an integer square root function, using only integer arithmetic (by Newton's method or some similar procedure).
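For illustration, here is a minimal sketch of such a function (ours, not the notebook's), using Newton's method on integers only; note that Python 3.8 and later already provide this as math.isqrt:

def isqrt(n):
    """Largest integer x with x*x <= n, using only integer arithmetic."""
    if n < 0:
        raise ValueError("isqrt is undefined for negative numbers")
    if n == 0:
        return 0
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

print(isqrt(10**64))      # exactly 10**32
print(isqrt(10**64 - 1))  # exactly 10**32 - 1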
Apart from the neat sqrt speedup, the factorization algorithm above is also attractive because it is compact and elegant (it doesn't need an additional function to check whether a number is a prime), and it is easily modifiable. In the following examples, we shall modify the original, $O(n)$ version of the algorithm. Each of these can be made faster using the same sqrt trick that we used above.
Try to solve the following problem:
Write a function that prints all the prime factors of a given integer $n$, each of them as many times as it appears in the factorization of $n$.
So, $p_1$ has to be printed $k_1$ times, $p_2$ has to be printed $k_2$ times, etc.
Try to modify print_prime_factors_def to achieve this. Here, we modify print_prime_factors_div (the version without sqrt).
In [ ]:
def print_prime_factors_div(n):
    """Prints all the prime factors of n, each of them
    repeated according to its multiplicity."""
    p = 2
    n = abs(n)
    while n > 1:
        if n % p == 0:
            while n % p == 0:
                print(p)
                n //= p
        p += 1
n = int(input("n = "))
print("The prime factors of {} are:".format(n))
print_prime_factors_div(n)
That's right: we only swapped two lines (i.e., we moved the print into the inner while).
However, in
if some_condition:
    while that_same_condition:
        do_something()
the if part is redundant (check what happens when some_condition is True, and what happens when it is False).
So, the final version is, somewhat surprisingly, even simpler than the original:
In [12]:
def print_prime_factors_div(n):
    """Prints all the prime factors of n, each of them
    repeated according to its multiplicity."""
    p = 2
    n = abs(n)
    while n > 1:
        while n % p == 0:
            print(p)
            n //= p
        p += 1
n = int(input("n = "))
print("The prime factors of {} are:".format(n))
print_prime_factors_div(n)
n = 1234567
The prime factors of 1234567 are:
127
9721
Problem: Print each prime factor once, but next to it write also its multiplicity, i.e., each line of the output has to have the form $p_i \ast\ast k_i$.
Once again, we modify the original print_prime_factors_div. We already have a loop that has exactly one step for each occurrence of each $p_i$. We just need to count that and adapt printing.
In [13]:
def print_prime_factors_div(n):
    """Prints the factors of the prime factorization of n,
    each in the form p**k."""
    p = 2
    n = abs(n)
    while n > 1:
        if n % p == 0:
            i = 0
            while n % p == 0:
                i += 1
                n //= p
            print("{}**{}".format(p, i))
        p += 1
n = int(input("n = "))
print("The prime factors of {} are:".format(n))
print_prime_factors_div(n)
n = 1234567
The prime factors of 1234567 are:
127**1
9721**1
Final remark: Returning all of these by a generator is very simple: all we need to do is replace print with an appropriate yield.
For example, the last one:
In [14]:
def prime_factors_div(n):
    """Returns a generator that yields the terms of
    the prime factorization of n."""
    p = 2
    n = abs(n)
    while n > 1:
        if n % p == 0:
            i = 0
            while n % p == 0:
                i += 1
                n //= p
            yield "{}**{}".format(p, i)
        p += 1

n = int(input("n = "))
print(n, "=", "*".join(prime_factors_div(n)))
n = 1234567
1234567 = 127**1*9721**1
https://mathspace.co/textbooks/syllabuses/Syllabus-411/topics/Topic-7322/subtopics/Subtopic-97624/?activeTab=interactive
|
# Amplitude of sine and cosine
## Interactive practice questions
Fill in the missing number.
a
What is the amplitude of the graphs of $y=\sin x$ and $y=\cos x$?
b
What is the period of the graphs of $y=\sin x$ and $y=\cos x$ in radians?
Consider the expression $\cos\theta$.
How does the graph of $y=4\cos x$ differ from the graph of $y=\cos x$?
Select all the correct options.
Is the following statement true or false?
"The amplitude of $y=5\cos x$y=5cosx is greater than the amplitude of $y=\cos x$y=cosx."
### Outcomes
#### M8-2
Display and interpret the graphs of functions with the graphs of their inverse and/or reciprocal functions
http://mathematica.stackexchange.com/questions/18789/can-a-latticedata-image-be-displayed-in-a-space-filled-fashion/18793
|
# Can a LatticeData image be displayed in a space filled fashion
I'd like a FaceCenteredCubic image as in the in LatticeData docs, but space packed, and cropped on all the boundaries, so that it illustrates the geometry of the calculation that leads to the $\pi/\sqrt{18}$ density result.
The call that produces the plot is
LatticeData["FaceCenteredCubic", "Image"]
but this has small spheres, and probably doesn't have a way to crop on the boundaries. I see that there's a PackingRadius listed in the documentation. I thought it might be possible to use the PackingRadius property, to display the spheres in the lattice in a space packed fashion (so that they touch), but that wouldn't handle the cropping part of the problem.
Is there any easy way to do this, perhaps using the lattice data to draw spheres using the Graphics drawing options, and then using the various ViewAngle type options to crop that appropriately?
All the information is there, but to adjust the sphere radius I had to do a replacement as follows:
spaceFilledPlot[latticeType_] :=
 LatticeData[latticeType, "Image"] /.
  Sphere[pt_, r_] :> {Opacity[.5],
    Sphere[pt, LatticeData[latticeType, "PackingRadius"]]}

spaceFilledPlot["FaceCenteredCubic"]
I added the opacity for better visibility of the underlying lattice.
Oh, and you wanted to crop at the boundaries:
Show[
 spaceFilledPlot["FaceCenteredCubic"],
 PlotRange -> {{-1, 1}, {-1, 1}, {-1, 1}}]
Edit
In response to the comment, here is how one could replace the spheres in the default plot by RegionPlot so that the cut surfaces show up in a cropped display, giving a more solid appearance:
volumetricPlot[latticeType_] := Module[
  {
   img = LatticeData[latticeType, "Image"],
   r = LatticeData[latticeType, "PackingRadius"]
  },
  Show[
   img /. Sphere[pt_, _] :> {},
   Map[
    RegionPlot3D[(EuclideanDistance[{x, y, z}, #] < r),
      {x, -1, 1}, {y, -1, 1}, {z, -1, 1},
      Mesh -> False,
      PlotStyle -> Opacity[.5]
     ] &, Cases[img, Sphere[pos_, _] :> pos, Infinity]]
   ]
  ]

volumetricPlot["FaceCenteredCubic"]
volumetricPlot["FaceCenteredCubic"]
Edit 2
There is at least one bug in LatticeData. I just found this when thinking about how to crop a unit cell for a non-cubic lattice. So I tried the hexagonal close-packed structure, which is closely related to the previous example. But here is the plot:
volumetricPlot["HexagonalClosePacking"]
This is clearly not very closely packed! The error is in the "Image" data for this lattice. It's very easy to spot this when you use my function because it expands the spheres to where they should touch. So I'd recommend being very careful when using these data. A somewhat better-working notebook for this case can be downloaded from Mathworld.
Very cool. I didn't realize what LatticeData was actually returning here. To understand what you did, running LatticeData["FaceCenteredCubic", "Image"] // InputForm is very helpful. Is there a way to draw the spheres as solids instead of shells? I tried Opacity[1], but that still renders the spheres as shells. – Peeter Joot Jan 31 '13 at 16:38
Yes, you can create a volumetric appearance, but then I have to replace the spheres by something else. I'll edit the post. – Jens Jan 31 '13 at 19:20
fyi. I've submitted a bug report through the wolfram support site for the HCP LatticeData bug (using a notebook containing your volumetricPlot function to illustrate). – Peeter Joot Nov 16 '13 at 14:45
https://msp.org/pjm/2018/295-2/p02.xhtml
|
#### Vol. 295, No. 2, 2018
Certain character sums and hypergeometric series
### Rupam Barman and Neelam Saikia
Vol. 295 (2018), No. 2, 271–289
##### Abstract
We prove two transformations for the $p$-adic hypergeometric series which can be described as $p$-adic analogues of a Kummer’s linear transformation and a transformation of Clausen. We first evaluate two character sums, and then relate them to the $p$-adic hypergeometric series to deduce the transformations. We also find another transformation for the $p$-adic hypergeometric series from which many special values of the $p$-adic hypergeometric series as well as finite field hypergeometric functions are obtained.
##### Keywords
character sum, hypergeometric series, $p$-adic gamma function
##### Mathematical Subject Classification 2010
Primary: 11S80, 11T24, 33E50, 33C99
https://golem.ph.utexas.edu/~distler/blog/archives/000462.html
|
## October 28, 2004
### Roundup
Around the blogs:
• Matt continues his excellent review of Lattice Gauge Theory, with a post on the classic paper of Lepage and MacKenzie.
• Luboš blogs about the paper of Itzhaki and McGreevy that I discussed a while ago.
• Urs is musing about the $n^3$ degrees of freedom in the low-energy theory on $n$ coincident M5-branes.
• Sean writes about his new paper with Jennie Chen, on an attempt to explain why the initial state of the universe was one of low-entropy.
I haven’t read their paper yet, but the idea is that our universe originated as a thermal fluctuation in an ambient de Sitter space, which then inflated. Personally, I’m sceptical that quantum gravity in (eternal) de Sitter space makes sense. Tout court, while there are certainly metastable de-Sitter-like solutions, I don’t think eternal de Sitter space exists as a solution to String Theory. But an approach like that of Carroll and Chen certainly has the advantage that one is not immediately plunged into the tangled thicket of quantum cosmology. Those discussions never go anywhere because, sooner or later, someone mentions “the wave function of the universe,” the physics-equivalent of Godwin’s Law, and all rational discussion comes to an end.
Posted by distler at October 28, 2004 12:31 AM
TrackBack URL for this Entry: https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/462
### Re: Roundup
Maybe this is the right place to announce that the other day, reading Lubos' blog, I made the mistake of clicking the "create your own blog" button. So, here it is: atdotde.blogspot.com. There is not much there as yet, but I promise I will do my best to make it interesting. I just don't know how you people (esp. Lubos) manage to fit blogging into a day of only 24 hours.
Posted by: Robert on October 28, 2004 5:33 AM | Permalink | Reply to this
### Re: Roundup
This is evolving from a coffee table to an entire coffee house. ;-)
What can I do to let my RSS reader list new entries on the BLOGGER blogs, like Robert is using now? I can’t find a ‘syndicate’ link.
Posted by: Urs Schreiber on October 28, 2004 8:08 AM | Permalink | PGP Sig | Reply to this
### Syndication
Blogger doesn’t do RSS, they do Atom Syndication. If your News Aggregator is recent enough, it should support Atom as well.
The Atom feed is a file, atom.xml, at the root level of the blog, e.g. http://atdotde.blogspot.com/atom.xml.
This is evolving from a coffee table to an entire coffee house.
I guess I should also remind both Robert and Luboš that they are still registered as authors at the Coffee Table too. So if they have a something in which they want the equations to be rendered, they can post (or cross-post) it to the Coffee Table.
Posted by: Jacques Distler on October 28, 2004 8:35 AM | Permalink | PGP Sig | Reply to this
### Re: Syndication
Ok, thanks. Now I am using ‘sage’. atom.xml gives me the entries, how do I get the comments?
Posted by: Urs Schreiber on October 28, 2004 9:52 AM | Permalink | PGP Sig | Reply to this
### Custom feeds
Now I am using ‘sage’. atom.xml gives me the entries, how do I get the comments?
You don’t. Blogger doesn’t provide them, and Blogger users can’t add their own Comment Feed.
Yet another reason (as if being able to use inline TeX weren’t sufficient) to prefer MovableType.
Posted by: Jacques Distler on October 28, 2004 12:20 PM | Permalink | PGP Sig | Reply to this
### Re: Custom feeds
I see. Is it also the case that Blogger does not support trackbacks, or is there a way to make my SCT entry commenting on some Blogger entry appear as a link in the latter?
Posted by: Urs Schreiber on October 28, 2004 12:39 PM | Permalink | PGP Sig | Reply to this
I don’t believe they do.
It’s only with their recent redesign that they started offering (Atom) syndication feeds and a Comment System (previously, some Blogger users used 3rd party comment services, like HaloScan).
Trackbacks are not implemented (sending or receiving).
Posted by: Jacques Distler on October 28, 2004 12:58 PM | Permalink | PGP Sig | Reply to this
### Re: Syndication
“I guess I should also remind both Robert and Lubos that they are still registered as authors at the Coffee Table too. So if they have a something in which they want the equations to be rendered, they can post (or cross-post) it to the Coffee Table.” [how do I get blockquote to work?]
No need to be reminded. It's just that on my own blog I can write about things that would be off topic at the coffee table (like computer stuff, radio hosts, politics, other nonsense). String theory discussions still have their appropriate place at the coffee table.
Posted by: Robert on October 28, 2004 10:47 AM | Permalink | Reply to this
### Re: Syndication
String theory discussions still have their appropriate place at the coffee table.
Nice to hear! I find myself posting almost exclusively to the SCT, while I would much rather be listening to what others have to say.
Speaking of which: Did you get any comments on your recent paper on quantization of the string?
Posted by: Urs Schreiber on October 28, 2004 11:58 AM | Permalink | PGP Sig | Reply to this
### Re: This and that
Speaking of which: Did you get any comments on your recent paper on quantization of the string?
Quite a few. I had some positive feedback from people I talked to; José Velhinho pointed out his paper gr-qc/0406008, which talks about related issues in the cosmology version of LQG (the approach formerly known as mini-superspace), and Max Niedermaier (the guy who started the Bogdanov affair by sending around an email about their papers to some friends; he was a postdoc at AEI when I was a grad student there) told me that he had thought about the harmonic oscillator in the Bohr representation as well, and how our results fit together with theirs.
We had put out the paper just before Giuseppe and I left Cambridge (he to LMU Munich and me to IU Bremen), and thus we haven't got around to putting out a revised version, but that will happen soon. Hopefully.
I have already talked about this stuff in Golm and here and I will do as well at DESY on 10/11 and in Jena on 18/11. If you are interested, here are the slides.
And then there was the discussion on s.p.s. That was essentially killed by Lubos's way of not even responding but inserting distorting moderator comments into my postings. I also found his attitude of knowing it all and not listening not worth spending more time on further discussion.
Posted by: Robert on October 29, 2004 4:43 AM | Permalink | Reply to this
### Re: This and that
Quite a few.
Anyone who is concerned with the implications for proposals to quantize gravity?
In your paper you make the point that non-continuous reps are quite different in a more formal way than has been done before. On the other hand, this difference has been known and addressed before, for instance by Ashtekar,Fairhurst & Willis. But at the end you also discuss that their proposal for obtaining one from the other (by means of ‘shadow’ states) does not work. That should interest them.
Posted by: Urs Schreiber on October 29, 2004 8:42 AM | Permalink | PGP Sig | Reply to this
### Re: This and that
And then there was the discussion on s.p.s. That was essentially killed
Whenever you don’t want to discuss on s.p.s. you could still move the discussion to some blog, like - let’s see - the SCT maybe? :-)
I bet that as soon as some critical number of regular posters is established at the SCT, the attention and comments it gets will be comparable to that of a newsgroup.
Posted by: Urs Schreiber on October 29, 2004 9:14 AM | Permalink | PGP Sig | Reply to this
### Blockquotes
[how do I get blockquote to work?]
Several ways:
1. Choose Textile formatting and write
some stuff
bq. quoted material
some more stuff
2. Choose Markdown formatting and write
some stuff
> quoted material
some more stuff
3. Use the default Convert Linebreaks formatting and write
some stuff
<blockquote><p>quoted material</p></blockquote>
some more stuff
The same thing works if you are using one of the “itex to MathML with …” filters.
Posted by: Jacques Distler on October 28, 2004 12:15 PM | Permalink | PGP Sig | Reply to this
### Re: Roundup
[…] thicket of quantum cosmology. Those discussions never go anywhere because, sooner or later, someone mentions ‘the wave function of the universe,’ the physics-equivalent of Godwin’s Law, and all rational discussion comes to an end.
Not that I would want to discuss the ‘wave function of the universe’ but this concept, popular in the old days of quantum cosmology, while pretty intractable, has resurfaced in a slightly different guise in terms of the ‘landscape’, I’d think. Your comments on that are one example that rational discussion need not come to an end at this point.
Posted by: Urs Schreiber on October 28, 2004 8:17 AM | Permalink | PGP Sig | Reply to this
### Re: Roundup
Jacques, thanks for the link. I’m not sure about eternal de Sitter either, but in fact it’s not required by our scenario. So long as the decay rate to a lower vacuum energy is less than the Hubble time (which it must be, phenomenologically), the phase transition never percolates, just as in old inflation. More and more space has decayed, but the physical volume in the de Sitter phase grows without bound, which is all we need.
Not that I would also place great odds that our scenario is “correct” in some strict sense. I’d be happy if something like it were on the right track.
Posted by: Sean on October 28, 2004 10:19 AM | Permalink | Reply to this
|
2022-12-05 15:42:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5007264018058777, "perplexity": 2851.2761476082214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00429.warc.gz"}
|
https://mathspace.co/textbooks/syllabuses/Syllabus-409/topics/Topic-7251/subtopics/Subtopic-96880/?activeTab=interactive
|
NZ Level 6 (NZC) Level 1 (NCEA)
Multiply and divide algebraic fractions
Interactive practice questions
Simplify $\frac{c}{2}\times\frac{d}{4}$.
Easy
Less than a minute
Simplify the following:
$\frac{c}{p}\times\frac{f}{v}$
Simplify the following:
$\frac{1}{r}\times\frac{1}{m}$
Simplify the following:
$\frac{7x}{6}\times\frac{5y}{2}$
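For reference, a worked example of the rule being practised here (multiply the numerators together and the denominators together, then simplify):
$$\frac{c}{2}\times\frac{d}{4}=\frac{c\times d}{2\times 4}=\frac{cd}{8},\qquad\frac{7x}{6}\times\frac{5y}{2}=\frac{35xy}{12}.$$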
Outcomes
NA6-5
Form and solve linear equations and inequations, quadratic and simple exponential equations, and simultaneous equations with two unknowns
NA6-6
Generalise the properties of operations with rational numbers, including the properties of exponents
91027
Apply algebraic procedures in solving problems
|
2021-09-19 19:08:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4050866365432739, "perplexity": 7216.109869844763}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056900.32/warc/CC-MAIN-20210919190128-20210919220128-00210.warc.gz"}
|
https://pheustal.com/2019/12-18/thinkpython9
|
# Files
Python
fout = open('filename', 'w')   # open for writing
fout.write('1234\n')
fout.close()
>>>import os
>>>cwd = os.getcwd()
>>>cwd
'C:\\Users\\hasee\\Desktop\\python\\.vscode'
>>> os.path.abspath('emma')
'C:\\Users\\hasee\\Desktop\\python\\.vscode\\emma'
14.4 Reading the documentation
OS
os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames).
dirpath is a string, the path to the directory. dirnames is a list of the names of the subdirectories in dirpath (excluding '.' and '..'). filenames is a list of the names of the non-directory files in dirpath. Note that the names in the lists contain no path components. To get a full path (which begins with top) to a file or directory in dirpath, do os.path.join(dirpath, name).
If optional argument topdown is True or not specified, the triple for a directory is generated before the triples for any of its subdirectories (directories are generated top-down). If topdown is False, the triple for a directory is generated after the triples for all of its subdirectories (directories are generated bottom-up). No matter the value of topdown, the list of subdirectories is retrieved before the tuples for the directory and its subdirectories are generated.
When topdown is True, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again. Modifying dirnames when topdown is False has no effect on the behavior of the walk, because in bottom-up mode the directories in dirnames are generated before dirpath itself is generated.
By default, errors from the scandir() call are ignored. If optional argument onerror is specified, it should be a function; it will be called with one argument, an OSError instance. It can report the error to continue with the walk, or raise the exception to abort the walk. Note that the filename is available as the filename attribute of the exception object.
By default, walk() will not walk down into symbolic links that resolve to directories. Set followlinks to True to visit directories pointed to by symlinks, on systems that support them.
Be aware that setting followlinks to True can lead to infinite recursion if a link points to a parent directory of itself. walk() does not keep track of the directories it visited already.
If you pass a relative pathname, don't change the current working directory between resumptions of walk(). walk() never changes the current directory, and assumes that its caller doesn't either.
This example displays the number of bytes taken by non-directory files in each directory under the starting directory, except that it doesn't look under any CVS subdirectory:
Python
import os
from os.path import join, getsize
for root, dirs, files in os.walk('python/Lib/email'):
    print(root, "consumes", end=" ")
    print(sum(getsize(join(root, name)) for name in files), end=" ")
    print("bytes in", len(files), "non-directory files")
    if 'CVS' in dirs:
        dirs.remove('CVS')  # don't visit CVS directories
In the next example (a simple implementation of shutil.rmtree()), walking the tree bottom-up is essential: rmdir() doesn't allow deleting a directory before the directory is empty:
Python
# Delete everything reachable from the directory named in "top",
# assuming there are no symbolic links.
# CAUTION: This is dangerous! For example, if top == '/', it
# could delete all your disk files.
import os
for root, dirs, files in os.walk(top, topdown=False):
    for name in files:
        os.remove(os.path.join(root, name))
    for name in dirs:
        os.rmdir(os.path.join(root, name))
Python
import os
def walk2(dirname):
    """Prints the names of all files in dirname and its subdirectories.

    This is the exercise solution, which uses os.walk.

    dirname: string name of directory
    """
    for root, dirs, files in os.walk(dirname):
        for filename in files:
            print(os.path.join(root, filename))
14.5
Python
try:
    fin = open('emma.txt')
except:
    print('something went wrong')
14-1
Python
def sed(str1, str2, filename1, filename2):
    """The exercise description is a bit unclear; this program simply replaces
    a fixed string with the desired target string while copying the contents
    of file 1 into file 2.

    str1: the string to replace
    str2: the target string
    filename1: source file
    filename2: destination file
    """
    try:
        fin1 = open(filename1)
        fin2 = open(filename2, 'w')
        for line in fin1:
            line = line.replace(str1, str2)
            fin2.write(line)
        fin1.close()
        fin2.close()
        print('done')
    except:
        print('something went wrong')

sed('the', 'a', '1.txt', '2.txt')
14-2
Python
import dbm
import pickle
def signature(s):
    """Returns the signature of this string.

    Signature is a string that contains all of the letters in order.

    s: string
    """
    # TODO: rewrite using sorted()
    t = list(s)
    t.sort()
    t = ''.join(t)
    return t

def all_anagrams(filename):
    """Finds all anagrams in a list of words.

    filename: string filename of the word list

    Returns: a map from each word to a list of its anagrams.
    """
    d = {}
    for line in open(filename):
        word = line.strip().lower()
        t = signature(word)
        # TODO: rewrite using defaultdict
        if t not in d:
            d[t] = [word]
        else:
            d[t].append(word)
    return d

def store_anagrams(d):
    db = dbm.open('shelf', 'c')
    for key in d:
        db[key] = pickle.dumps(d[key])
    db.close()

db = dbm.open('shelf')
# store_anagrams(all_anagrams('words.txt'))
14-3
Python
import os
def walk(dirname):
    """Finds the names of all files in dirname and its subdirectories.

    dirname: string name of directory
    """
    names = []
    if '__pycache__' in dirname:
        return names

    for name in os.listdir(dirname):
        path = os.path.join(dirname, name)
        if os.path.isfile(path):
            names.append(path)
        else:
            names.extend(walk(path))
    return names
def compute_checksum(filename):
    """Computes the MD5 checksum of the contents of a file.

    filename: string
    """
    cmd = 'md5sum ' + filename
    return pipe(cmd)

def check_diff(name1, name2):
    """Computes the difference between the contents of two files.

    name1, name2: string filenames
    """
    cmd = 'diff %s %s' % (name1, name2)
    return pipe(cmd)
def pipe(cmd):
    """Runs a command in a subprocess.

    cmd: string Unix command

    Returns (res, stat), the output of the subprocess and the exit status.
    """
    # Note: os.popen is deprecated now, which means we are supposed to stop
    # using it and start using the subprocess module. But for simple cases, I
    # find subprocess more complicated than necessary. So I am going to keep
    # using os.popen until they take it away.
    fp = os.popen(cmd)
    res = fp.read()   # read the command's output (missing in the original)
    stat = fp.close()
    assert stat is None
    return res, stat
def compute_checksums(dirname, suffix):
    """Computes checksums for all files with the given suffix.

    dirname: string name of directory to search
    suffix: string suffix to match

    Returns: map from checksum to list of files with that checksum
    """
    names = walk(dirname)
    d = {}
    for name in names:
        if name.endswith(suffix):
            res, stat = compute_checksum(name)
            checksum, _ = res.split()
            if checksum in d:
                d[checksum].append(name)
            else:
                d[checksum] = [name]
    return d
def check_pairs(names):
    """Checks whether any in a list of files differs from the others.

    names: list of string filenames
    """
    for name1 in names:
        for name2 in names:
            if name1 < name2:
                res, stat = check_diff(name1, name2)
                if res:
                    return False
    return True
def print_duplicates(d):
    """Checks for duplicate files.

    Reports any files with the same checksum and checks whether they
    are, in fact, identical.

    d: map from checksum to list of files with that checksum
    """
    for key, names in d.items():
        if len(names) > 1:
            print('The following files have the same checksum:')
            for name in names:
                print(name)
            if check_pairs(names):
                print('And they are identical.')

if __name__ == '__main__':
    d = compute_checksums(dirname='.', suffix='.py')
    print_duplicates(d)
|
2021-06-14 00:00:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3158682882785797, "perplexity": 14872.588726576976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00248.warc.gz"}
|
https://www.physicsforums.com/threads/moments-completely-lost.314329/
|
Moments, completely lost!
Homework Statement
Calculate the magnitude of the bending moment and shear forces at the centre of the beam shown in the attached figure
The Attempt at a Solution
For the shear forces I think its the sum of the vertical components,
Bending moment = not quite sure.
I would appreciate if someone could give me a few pointers on bending moments and shear forces. Have looked over some textbooks but it really isn't explained well. Appreciate ANY help.
cd19: unfortunately you are dead in the water until a mentor approves your attachment.
I would suggest using a free image host like photobucket.com or imageshack.
Then you will not have to wait for approval, you can just upload your images to photobucket and then copy/paste the image tag in to your post.
sorry here it is:
Last edited:
Any hints?
PhanthomJay
Homework Helper
Gold Member
You should first determine the vertical end reactions using the equilibrium equations, or take advantage of the symmetry of the loading to determine those end reactions. Once you have them, the internal shear and the bending moment at the beam's center can be found by applying the equilibrium equations to a free body diagram of the left part of the beam (from its left end to the beam center). The shear at the center is not just the sum of the external vertical forces.
Ok so I have found the vertical end reactions to be:
r1 = 630.25kN
r2 = 8.9167kN
So what i gather is I have to draw a free body diagram exposing sections between the left support and the load. I do this in intervals, summing forces and taking moments about a point in the section.
so what I have attempted is the above from the intervals 0<x<6, 6<x<9, 9<x<12 but how do i solve for x and get a single answer for the shear force and bending moment?
PhanthomJay
Homework Helper
Gold Member
Ok so I have found the vertical end reactions to be:
r1 = 630.25kN
r2 = 8.9167kN
So what i gather is I have to draw a free body diagram exposing sections between the left support and the load. I do this in intervals, summing forces and taking moments about a point in the section.
so what I have attempted is the above from the intervals 0<x<6, 6<x<9, 9<x<12 but how do i solve for x and get a single answer for the shear force and bending moment?
The problem asks for the (internal) shear and bending moment at the center. So just expose the section from the left end to the center. At the center, there will be an unknown vertical load (the shear) and an unknown moment (the bending moment). You'll need to solve for these unknowns using the sum of F_y = 0 and the sum of moments = 0.
Ok so I see that as everything is perfectly symmetrical, the end reactions at A and D must be 30 kN respectively.
Also: I don't understand why I have to find the external reaction at D.
This is my attempt at the next part:
I have also tried working out the bending moment, but I have a feeling my attempt is all wrong; I find it very hard to grasp moments and I tend to overcomplicate.
PhanthomJay
Homework Helper
Gold Member
Ok so I see that as everything is perfectly symmetrical, the end reactions at A and D must be 30 Kn respectively.
yes, correct.
"I don't understand why I have to find the external reaction at D."
You already did, but it's good to check your work.
"This is my attempt at the next part: Vload + 15 = 30, therefore Vload = 15"
No, you must sum ALL forces to solve for the unknown shear, V, in your FBD of the left part of the beam. That includes the concentrated load of 15 kN down, the end reaction of 30 kN up, the downward contribution of the distributed load, and V, all summed to 0 to solve for V.
I have also tried working out the bending moment, but I have a feeling my attempt is all wrong; I find it very hard to grasp moments and I tend to overcomplicate.
To solve for the bending moment, sum moments about the right end of your FBD. It will be easier to assist you if you show your work.
This is what I got:
For "V": 15 + 15 - 30 + V = 0, therefore V = 0 kN (shear)
Summing the moments about the right end of my FBD i got:
-(30)*(9) + (15)*(6) = 180kNm (Bending Moment)
Is this looking correct?
PhanthomJay
Homework Helper
Gold Member
This is what I got:
For "V": 15 + 15 - 30 + V = 0, therefore V = 0 kN (shear)
Looks Good!
Summing the moments about the right end of my FBD i got:
-(30)*(9) + (15)*(6) = 180kNm (Bending Moment)
Is this looking correct?
I can see you don't like that distributed load, because you keep leaving it out of your FBD! Represent it as a 15 kN load acting at its c.g., then sum moments again. Indicate if the bending moment at the center of the beam is clockwise or counterclockwise.
30*(9) - 15*(6) - 15*(1.5) = bending moment = 157.5 kN·m, acting clockwise
I would never have got that distributed load right, i.e. the perpendicular distance (1.5); I'm terrible at FBDs.
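For reference, taking moments about the beam centre as in the thread, the final arithmetic is
$$\sum M = 30(9) - 15(6) - 15(1.5) = 270 - 90 - 22.5 = 157.5\ \mathrm{kN\,m}\ \text{(clockwise)},$$
where the $15(1.5)$ term is the resultant of the distributed load acting at its centroid.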
|
2021-02-24 23:03:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6939184665679932, "perplexity": 896.0443295520429}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178349708.2/warc/CC-MAIN-20210224223004-20210225013004-00318.warc.gz"}
|
https://math.stackexchange.com/questions/2852105/expected-time-till-absorption-in-specific-state-of-a-markov-chain
|
# Expected time till absorption in specific state of a Markov chain
This question is a follow-up to Expected number of steps for reaching a specific absorbing state in an absorbing Markov chain because I don't understand the answer given there. I think I need to see a concrete example.
Suppose I play red and black at a casino. On each play, I either win the amount I staked, with probability $p<\frac12$, or I lose my stake. Let's say I start with a bankroll of $2$ and I decide to play until I have won $3$ or lost everything. My strategy is to bet just enough to reach my goal, or everything I have, whichever is less.
We have a Markov chain with $6$ states, $2$ of which are absorbing. There are well-known methods to determine the probability of winning, and of determining the average number of plays I make, but what if I want to know the average number of plays I make if I reach my goal? The transition matrix, with $q=1-p$ is $$\begin {bmatrix} 1&0&0&0&0&0\\ q&0&p&0&0&0\\ q&0&0&0&p&0\\ 0&q&0&0&0&p\\ 0&0&0&q&0&p\\ 0&0&0&0&0&1 \end{bmatrix}$$
Henning Makholm suggests two approaches, neither of which I can follow. (This is because of my limitations, and is in no way intended as a criticism of the answer.) The first assumes that I have figured out the probability of winning starting with every possible bankroll. Then we are to compute new transition probabilities that describe the experience of the winners, as I understand it, and compute the time to absorption in the new chain. Let $p_k$ be the probability of winning, if my bankroll is $k$. How should I calculate the new transition matrix?
Henning Makholm gives an alternative method if there is only one starting state we're interested in, and I'm only interested in the actual case where I start in state $2$. He says, "first set up a system of equations that compute for each state the expected number of times one will encounter that state before being absorbed." If we let $e_k$ be this number for state $k=1,2,3,4,$ how do we construct the equations relating the $e_k?$ I can see how to do this if we only care about the number of plays until the game ends, but how do I make it reflect only the number of plays until winning?
• Did you deliberately choose an example where each state has only two successors? I think in this case, in the first method you can set up a system of linear equations for the transition probabilities conditional on winning; but that won't work in the same way if you have a more general transition matrix, as you'll have too many unknowns and too few equations. – joriki Jul 15 '18 at 7:20
• @joriki No I didn't. I wanted to make a simple example, so that it wasn't asking to much to request a solution, but it didn't occur to me that I might be over-simplifying the problem. – saulspatz Jul 15 '18 at 14:06
State $1$ represents losing the game, so remove it from the system and condition the remaining transition probabilities on the event that the system does not transition from state $i$ to state $1$. In practical terms, this means that you delete from your transition matrix column $1$ and every row that had a $1$ in this column, and scale each remaining row $i$ by $1/(1-p_{1i})$. For your example system, this produces the reduced matrix $$P' = \begin{bmatrix}0&1&0&0&0 \\ 0&0&0&1&0 \\ q&0&0&0&p \\ 0&0&q&0&p \\ 0&0&0&0&1 \end{bmatrix}.$$ Applying standard techniques to this matrix, we can verify that the absorption probability is $1$ for any starting state and that the expected conditional absorption times are $$\begin{bmatrix}{3+q\over1-q^2} \\ {2+q+q^2\over1-q^2} \\ {1+3q\over1-q^2} \\ {1+q+2q^2\over1-q^2}\end{bmatrix}.$$
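A quick numerical sanity check of these expressions (a sketch in Python; the matrix is the reduced $P'$ above, and the value of $p$ below is an arbitrary choice under $\frac12$):

import numpy as np

p = 0.4                              # any win probability p < 1/2
q = 1 - p

# Reduced transition matrix P', conditioned on winning:
# four transient states, then the absorbing "win" state.
P = np.array([
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [q, 0, 0, 0, p],
    [0, 0, q, 0, p],
    [0, 0, 0, 0, 1],
])

Q = P[:4, :4]                        # transient-to-transient block
N = np.linalg.inv(np.eye(4) - Q)     # fundamental matrix (I - Q)^(-1)
t = N @ np.ones(4)                   # expected steps to absorption

closed_form = np.array([3 + q, 2 + q + q**2, 1 + 3*q, 1 + q + 2*q**2]) / (1 - q**2)
print(t)             # matches closed_form
print(closed_form)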
|
2019-09-18 11:28:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.764886736869812, "perplexity": 192.17211581769502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573284.48/warc/CC-MAIN-20190918110932-20190918132932-00421.warc.gz"}
|
http://mathoverflow.net/questions/18134/poincare-duality
|
# Poincare duality
I am reading the proof of the Poincaré duality theorem in "Principles of Algebraic Geometry" by Griffiths and Harris.
They construct a "dual cell decomposition" of a polyhedral decomposition of a manifold M and the cochain complex of these dual cells.
I don't know why the cohomology group of this cochain complex is the singular cohomology group of M.
Can someone help me with an answer? Thanks in advance.
Given a cell $\epsilon$ of $M$ there is precisely one dual cell that intersects $\epsilon$. That is the key idea for how you define the map from the chain complex to the cochain complex. If dealing with integer coefficients you have to be careful about orientations. – Ryan Budney Mar 26 '11 at 20:52
See this thread: math.stackexchange.com/questions/14467/… Also, your question is more appropriate for the math.stackexchange site since it's standard material from coursework. – Ryan Budney Mar 26 '11 at 21:14
Hatcher's "Algebraic Topology" book has a nice explanation (starting on p232). – Mark Grant Mar 27 '11 at 7:29
## 2 Answers
For any CW complex $X$ one defines a chain complex $C_*(X)$: choose an orientation of each cell; the group $C_n(X)$ is the free abelian group with a basis whose elements correspond to the $n$-cells of $X$ and the differential $C_n(X)\to C_{n-1}(X)$ is defined by $c\mapsto \sum_{c'\subset\partial c} (c,c')c'$ where $(c,c')$ is the incidence number of $c$ and $c'\subset\partial c$ defined as follows.
By the definition of a CW complex one can extend the homeo of an open $n$-ball to $c$ to a map of the closed ball; compose the restriction of this map to the boundary $S^{n-1}$ of the closed ball with the map $X_{n-1}\to S^{n-1}$ obtained by collapsing all cells of the $n-1$ skeleton of $X$ but $c'$ to a point; the incidence number of $c$ and $c'$ is the degree of the resulting map $S^{n-1}\to S^{n-1}$ where the first sphere is oriented using the "outgoing normal first" rule and the orientation of the second one is induced from $c'$.
This generalizes the chain complex of a simplicial set. The homology of $C_*(X)$ is isomorphic to the singular homology of $X$; see e.g. Hatcher, Algebraic Topology, p. 137 (freely available online) or Milnor, Stasheff, Characteristic Classes, Appendix A.
My question is why the cohomology group of this cochain complex is the singular cohomology group of M. I have tried to check the four axioms for a cohomology theory of a topological space X in Spanier's book (Algebraic Topology). However, I haven't finished checking them. – vu viet Mar 15 '10 at 9:00
The cochain complex of the "dual cell decomposition" can actually be identified with the dual cochain complex.
To specify a linear map on the free module spanned by the cells of the decomposition is the same thing as assigning a scalar to each cell of the decomposition. If we assume the decomposition to be finite, then the module of cochains is therefore canonically isomorphic to the module spanned by the cells of the dual decomposition (as well as those of the original decomposition but it is less interesting).
It is clear that the degrees match, and it is easy to check that the codifferentials are also identified. The cohomology of both cochain complexes are therefore identified as well, as the underlying complexes are isomorphic.
If the original cell decomposition is not finite, then instead of the module spanned by the dual cells (a direct sum), one considers the cartesian product.
|
2015-09-04 21:27:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8577991127967834, "perplexity": 227.71979812127017}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645366585.94/warc/CC-MAIN-20150827031606-00306-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-connecting-concepts-through-application/chapter-3-exponents-polynomials-and-functions-3-1-rules-for-exponents-3-1-exercises-page-233/39
|
Chapter 3 - Exponents, Polynomials and Functions - 3.1 Rules for Exponents - 3.1 Exercises - Page 233: 39
$a^2+10ab+25b^2$
Work Step by Step
Use the FOIL method to multiply the terms within the two sets of parentheses: multiply the First terms of each set, then the Outer terms, then the Inner terms, then the Last terms, and finally combine all the like terms. $(a+5b)^2 = (a+5b)(a+5b) = a^2+5ab+5ab+25b^2 = a^2+10ab+25b^2$
|
2020-02-25 10:18:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7431744933128357, "perplexity": 279.23743595897}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146064.76/warc/CC-MAIN-20200225080028-20200225110028-00207.warc.gz"}
|
https://www.gamedev.net/forums/topic/42709-question-about-random-numbers-part-2/
|
# Question about Random numbers PART 2
## Recommended Posts
Dark Star 100
Hi all, again. I have discovered how the rand() function in C/C++ works, well in GNU C/C++ anyway (I use DJGPP). Its definition is:
int rand(void)
{
    randSeed = (69069 * randSeed + 1);
    return randSeed & 0x7fff;
}

// the srand function is defined as:
void srand(unsigned seed)
{
    randSeed = seed;
}
What I want to know, from yesterday's post about random numbers, is this: can the formula used in rand() be reversed to give you the last version of randSeed? I'll try to make this more clear, because I am kinda confusing myself here.
Say for example randSeed was 356456 before the line return randSeed & 0x7fff; in rand(), and that maybe gave 563464 (made-up value!). Could I reverse the statement randSeed & 0x7fff; to give me the value of randSeed before it hit that line, so that I can rearrange the formula randSeed = (69069 * randSeed + 1); to get the value of randSeed before it hit this line too?
I know this all sounds weird, and I can't really go into why I wanna play around with random numbers, but it is a secret project I am working on to somehow revolutionise something in computing. (One clue: Winzip's job. Don't ask how, cos I don't fully know, but I kinda found a link between a new file format for compressed files and random numbers.)
Can you also tell me whether the rand() function, given the definition I have researched and written out for you, can produce any set of random numbers in the world, or is it only good for producing deterministic numbers that never repeat so often? You see, I would like numbers as realistic as, for example, picking 12 numbers out of a hat (numbers from 1 to 12, where on every pick the number is put back and the hat is shuffled) and somehow managing to pick a sequence like: 1,4,4,4,4,4,5,2,6,12,5,4,2,7,9. What I mean is, can the rand() function ever produce something like this? Because in reality, someone may pick the number 4 out of a hat 5 times in a row.
Are there any random functions or algorithms that can produce random numbers as real as the set above? This is really important to me, because once I can find a function that produces lifelike random numbers, not deterministic numbers that will never look like the set above, then I can work on how to derive the seed back from the last random number all the way to the beginning, so that I can recover the initial seed that sparked the entire random sequence. This is wishful thinking, I know, and I somehow doubt this can ever work, but if it can, believe me, it would revolutionise something big time!!!!!!
Any help is good help
Dark Star
Ironblayde 130
So what you're asking is: from the value (randSeed & 0x7fff), can you recover the value of randSeed? If that is your question, the answer is unfortunately no. That bitwise AND serves to report only the lower 15 bits of randSeed. Any higher bits are simply discarded, so there is no way to recover what they were.
The example you provided can be generated by a random number function, provided you are not using its full range. That is, if you're getting random numbers between 1 and 12 by taking a large random number modulo 12, plus 1, then repeats can certainly occur.
-Ironblayde
Aeon Software
Down with Tiberia!
"All your women are belong to me." - Nekrophidius
MfA 122
One random number generator is not a model for another. Sure, if you have an ideal pseudorandom number generator (which obviously doesn't exist) you can in theory encode any sequence of numbers as an offset and seed for that generator... but what you will find is that it was a waste of time, because on average the entropy in the offset and seed will be the same as that of the sequence you were trying to encode.
Try reading some of Charles Bloom's stuff (and compression & math in general, perhaps) and his explanation of why there are no magic compressors.
|
2017-08-16 21:55:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5331141352653503, "perplexity": 1041.440695071797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102663.36/warc/CC-MAIN-20170816212248-20170816232248-00061.warc.gz"}
|
https://eprints.keele.ac.uk/id/eprint/7426/
|
Geballe, TR, Banerjee, DPK, Evans, A, Gehrz, RD, Woodward, CE, Mroz, P, Udalski, A, Munari, U, Starrfield, S, Page, KL, Sokolovsky, K, Hambsch, F-J, Myers, G, Aydi, E, Buckley, DAH, Walter, F and Wagner, RM (2019) Infrared spectroscopy of the recent outburst in V1047 Cen (Nova Centauri 2005). Astrophysical Journal Letters, 886 (1). ISSN 2041-8213
Geballe_2019_ApJL_886_L14.pdf - Published Version
Fourteen years after its eruption as a classical nova (CN), V1047 Cen (Nova Cen 2005) began an unusual re-brightening in 2019 April. The amplitude of the brightening suggests that this is a dwarf nova (DN) eruption in a CN system. Very few CNe have had DN eruptions within decades of the main CN outburst. The 14 years separating the CN and DN eruptions of V1047 Cen is the shortest of all instances recorded thus far. Explaining this rapid succession of CN and DN outbursts in V1047 Cen may be challenging within the framework of standard theories for DN outbursts. Following a CN eruption, the mass accretion rate is believed to remain high ($\dot{M}\sim10^{-8}\,M_\odot\,\mathrm{yr}^{-1}$) for a few centuries, due to the irradiation of the secondary star by the still-hot surface of the white dwarf. Thus a DN eruption is not expected to occur during this high mass accretion phase, as DN outbursts result from thermal instabilities in the accretion disk and arise during a regime of low mass accretion rate ($\dot{M}\sim10^{-10}\,M_\odot\,\mathrm{yr}^{-1}$). Here we present near-infrared spectroscopy to show that the present outburst is most likely a DN eruption, and discuss the possible reasons for its early occurrence. Even if the present re-brightening is later shown to be due to a cause other than a DN outburst, the present study provides invaluable documentation of this unusual event.
|
2022-12-01 07:04:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7540671229362488, "perplexity": 9725.598419087559}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00705.warc.gz"}
|
https://crypto.stackexchange.com/tags/substitution-cipher/hot
|
Tag Info
37
The key space of a cryptographic algorithm whose key length is $n$ is given by $2^n$
No. There is confusion between:
keyspace (or key space) $\mathcal K$, which is the set of possible keys;
keyspace size (or size of the keyspace) $\|\mathcal K\|$, which is the number of possible keys (an integer);
key length (or key size) in bits, which can be defined as ...
28
When trying to break an unknown cipher, one first needs to figure out what kind of cipher one it is. Generally, a good starting point would be to start with the most common and well known classical ciphers, eliminate those that obviously don't fit, and try the remaining ones to see if any of them might work. An obvious first step is to look at the ...
28
If the substitution ciphers belong to the same family, then their composition will also (typically, assuming that the family is closed under composition) belong to the same family. Thus, breaking the combined cipher will be no harder than breaking an arbitrary cipher in the family. For a simple example, combining two Caesar shift ciphers with shifts ...
25
He is talking about the original version of the Caesar Cipher, where the substitution was just a +3 shift:
A -> D, B -> E, C -> F, D -> G, E -> H, F -> I, G -> J, H -> K, ..., X -> A, Y -> B, Z -> C
Because the shift is fixed, it does not have a key (but you could say it is a substitution cipher with a key equal to +3). However it is common ...
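A minimal sketch of that fixed +3 substitution (illustrative Python, not from the answer):

def caesar(text, shift=3):
    # Shift each letter forward by a fixed amount, wrapping around the alphabet.
    return ''.join(
        chr((ord(c) - ord('A') + shift) % 26 + ord('A')) if c.isalpha() else c
        for c in text.upper())

print(caesar('ATTACK AT DAWN'))   # DWWDFN DW GDZQ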
16
DES actually demonstrated that a Feistel structure was not a guarantee against attacks. In "academic" terms, DES is broken by both differential and linear cryptanalysis, because they require, respectively, $2^{47}$ chosen plaintexts and $2^{43}$ known plaintexts, whereas the DES key is (effectively) 56 bits. Of course, for practical attacks, we would brute ...
10
I'll assume that the plaintext consists entirely of capital ASCII letters as in the example. This implies the high 3 bits of each byte of plaintext are 010. It is useful to visualize how 3 consecutive bytes of plaintext map to 4 consecutive Base64 characters.
1. Frequency analysis of the last character of 4-char blocks in ciphertext
We see there is a ...
9
Some additions to the other answer: any given letter can only correspond to a fairly limited number of ciphertext letters: only the ones in the same column or row, and never to itself. So a highly frequent letter like E will still stick out in longer texts and then we will also find its row and column mates, which helps in reconstructing the square. There ...
9
The composition of any number of substitution ciphers is still a substitution cipher, hence no.
9
In the substitution cipher, the answer lies in the permutations; a key is one of all possible permutations of the alphabet (the keyspace), i.e. each letter is substituted with another. Therefore, for an alphabet with 26 characters, the first letter can map to 26 possibilities, the second to 25, etc., and the last letter can get only one letter for ...
8
If it is a simple substitution cipher, there are a few standard techniques: Frequency analysis. Count how many times each letter appears in the ciphertext. The most common ciphertext-letters probably correspond to the most-common letters in English. The most common letters in English are ETAOINSHRDLU... (in decreasing order of prevalence). Therefore, ...
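As a sketch of that first step (illustrative Python, not part of the answer):

from collections import Counter

def letter_frequencies(ciphertext):
    # Count ciphertext letters, most common first; tentatively match them
    # against the English frequency order ETAOINSHRDLU...
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    return Counter(letters).most_common()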
7
Since this is homework, let me just give you a hint: consider the two-character messages $m_1 = \text{"aa"}$ and $m_2 = \text{"ab"}$. Given a ciphertext $c$ encrypted with a monoalphabetic substitution cipher, can you tell which of $m_1$ or $m_2$ it corresponds to, even without knowing the key? Why (not)? What does this imply about perfect secrecy? Ps. ...
7
It's called a keyword cipher. See this question for some ways to break it.
7
A substitution cipher consist of a mapping from letters in the alphabet to letters in the alphabet (not necessarily the same alphabet, but probably is in this case). There are many forms that a key can take on. Ones I've seen in practice are: The key is the mapping (i.e. a->m, b->x, c->q,...). The key represents a shift. A key of 5 would mean the ...
7
Actually, we have a four-way (that is, four words that can be converted into any of the others with the right shift). These words are: ax, by, he, if
Other two-letter words are: am <-> my, at <-> pi, do <-> it, hi <-> no
We also have the <-> max
I didn't do a systematic search for words over two letters; there certainly ...
7
any other considerations? Yes. In many common use cases the mapping table needs to be retained. That map changes each time a number is added; that's a backup / continuity of service headache. The map is security-sensitive: it contains all the clear phone numbers, and information which (combined with other information) allows getting back to users. The map ...
6
Yes, we can almost certainly break this, given enough ciphertext. One approach would be to use a dictionary and use word patterns. For instance, if the ciphertext word is qddxfozogf, then the plaintext word was probably ammunition. Notice how the 2nd and 3rd letters are the same; and the 5th and 10th letters, and the 6th and 8th letters? The word ...
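The repeated-letter structure the answer describes can be captured as a pattern signature (an illustrative Python sketch; the helper name is mine):

def pattern(word):
    # Map each distinct letter to the index of its first appearance, so
    # words with the same repeat structure get the same signature.
    seen = {}
    return tuple(seen.setdefault(c, len(seen)) for c in word)

assert pattern('qddxfozogf') == pattern('ammunition')   # candidate match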
6
Is this something that exists and could be plausible? Yes, things like that already exist and have even been used by well-known serial killers! (So much for creating a dramatic intro – lol) Monoalphabetic Substitution Cipher What you are referring to, could be categorized as a classical “pigpen cipher”; a monoalphabetic substitution cipher where ...
6
If one can prove that a large amount of image and audio data doesn't exhibit a frequency pattern, only then can we consider frequency analysis as a non-viable attack. For a modified simple example, let us say each guitar chord is encoded as a byte of audio data. If you analyse about 70+ songs, you will see that 4 chords are the most frequently used (as ...
6
In a Feistel network (from the German IBM cryptographer Horst Feistel), the input is divided into two blocks ($L_0$ and $R_0$) which interact with each other. The main example is DES. [figure: basic Feistel construction]
In an SPN (Substitution Permutation Network), the input is divided into multiple small blocks, applied to an S-box (substitution), then the bit positions are ...
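A toy sketch of the Feistel structure (hypothetical round function and keys, just to show that the halves swap each round and that $F$ need never be inverted):

def feistel_encrypt(L, R, round_keys, F):
    # One Feistel round per key: (L, R) -> (R, L xor F(R, k)).
    for k in round_keys:
        L, R = R, L ^ F(R, k)
    return L, R

def feistel_decrypt(L, R, round_keys, F):
    # Undo the rounds in reverse order; only F itself is reused, never inverted.
    for k in reversed(round_keys):
        L, R = R ^ F(L, k), L
    return L, R

F = lambda x, k: ((x * 31 + k) ^ (x >> 3)) & 0xFFFF   # made-up round function
keys = [0x1A2B, 0x3C4D, 0x5E6F]
L, R = feistel_encrypt(0x1234, 0x5678, keys, F)
assert feistel_decrypt(L, R, keys, F) == (0x1234, 0x5678)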
6
A monoalphabetic substitution cipher uses a fixed permutation of an alphabet $A$ namely $$\pi:A\rightarrow A,$$ to encrypt a plaintext $P=(P_1,\ldots P_n)\in A^n$ of length $n$ into the ciphertext $C=(C_1,\ldots,C_n) \in A^n$ via $$C_i=\pi(P_i),\quad i=1,\ldots,n.$$ For a natural language, the letter frequencies are non-uniform. Thus, given a long enough ...
6
Your construction is completely insecure: a single known plaintext / ciphertext block pair is sufficient to decrypt all blocks encrypted with the same key. Specifically, let me write your block encryption function $E_K$ as $$c = E_K(p) = P^{(n)}(S^{(n)}(p \oplus K_1 \oplus K_2 \oplus \dots \oplus K_n)),$$ where $p$ is the plaintext block, $c$ is the ...
5
Based on your sample code I do not consider the scheme secure enough for implementation. Additionally you will run into a few problems if you actually try to implement this to generate DNA strands with encrypted messages in them (ala some kind of futuristic scifi thriller). As the other answer suggests, it would be best to think about using the DNA sequence ...
5
Yes (guessing you are doing the cypher challenge?). The "Beaufort Decoder" is a really good decoding tool (saves you time); then trial-and-error keywords. Also, the "Vigenère cracking tool" can be used to find the length of the keyword. Paste the texts you're decoding; the number of the column(s) with the most x's is the length of the keyword.
5
Given that the permutation is fixed and the key step is independent of the permutation you can reduce this to an ordinary text-substitution cipher. If the key is as long as the input you have a weak one-time pad, because the per letter change is limited to 10 instead of 26. However if the key is short then you have a Vigenère cipher (if you "decode" with \$...
5
There are different approaches to crack a substitution cipher. A human would use a different strategy than a computer. But as the word boundaries are not preserved it will be rather challenging for a human solving this cipher. The quipqiuq tool mentioned by John is using word lists, but there are other methods as well. Resources: http://...
5
There's two missing pieces. First, the ring setting changes the output letter, it doesn't rotate the whole exit pattern. Second, the rotors are advanced before the letter is encrypted. If your rotor (Enigma I Rotor I) is set up like this with the ring at A. abcdefghijklmnopqrstuvwxyz ekmflgdqvzntowyhxuspaibrcj Then if you advance the ring to B all the ...
5
This is, in fact, not a Vigenère cipher. One clue to this is the fact that the ciphertext (which, conveniently, includes unenciphered word breaks) contains lots of repeated words like UTL, VCI, V, UB and QVRY that would be very unlikely to occur by chance in the output of a polyalphabetic cipher like Vigenère. Another clue can be obtained by examining the ...
5
I don't have enough space to expand on yyyyyyy's answer in a comment so I am making this an answer in and of itself. TruthSerum is correct, but it seems like an explanation is wanted, so here goes. Imagine you have a regular (all the sides have the same length, all the angles are the same) n-gon. That sounds complex, but trust me it isn't. A 4-gon is ...
5
This is called homophonic encryption, and has been around for a long time. In terms of cryptanalysis of such ciphers, there is a nice thesis from SJSU on this topic which is available here. The attacks tested in that cipher were based on hill climbing and local optimization techniques. The conclusion states: We designed and implemented an efficient ...
5
Regarding the first part of the question, I will just link to another answer I wrote in the past: Why don't homophones hide multiple-letter patterns? Summary: If you adjust the frequencies so that every single symbol is equally likely, then bigrams can be used for frequency analysis because they won't be uniform distributed. The structure of language is ...
|
2020-10-29 22:41:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6323165893554688, "perplexity": 846.2904721155136}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905965.68/warc/CC-MAIN-20201029214439-20201030004439-00465.warc.gz"}
|
https://physics.stackexchange.com/questions/226477/deriving-a-povm-from-a-projective-measurement
|
# Deriving a POVM from a projective measurement
I understand how to show that every POVM is equivalent to a projective measurement on a larger Hilbert space, but I don't understand why the converse is true. The vast majority of explanations of POVM's start by defining a POVM, and then show that given any POVM, you can tensor an appropriate ancilla onto your system and convert that POVM into a projective measurement on the combined system. But I want to know how to go the other direction: that is, given a projective measurement on a larger system, can I reduce it to a POVM on a subsystem? And if so, can I do it in a state-independent way?
For example, suppose I have a pure state whose Hilbert space is a product of two subsystems $A$ and $B$: $| \psi \rangle = \sum_{ab} \psi_{ab} | a \rangle | b \rangle$. I want to make a specified projective measurement $\hat{M} = \sum_m m\, \hat{P}_m$ on the entire combined system, where the $\{\hat{P}_m\}$ are orthogonal projectors. Is there a way to express the expectation value $\langle \psi | \hat{M} | \psi \rangle$ in terms of a POVM on system $A$ alone? If so, does it depend on the state $| \psi \rangle$, or just on $\hat{M}$ and the Hilbert spaces of the systems $A$ and $B$? If it depends on the state $| \psi \rangle$, then this seems like a rather serious limitation, because it means that there's no state-independent way to convert an ordinary projective measurement into a POVM. It seems to me that in an experiment, we might know the details of the measurement we want to make, but not the details of the state we're measuring.
The closest I can find to an explanation of this is at http://arxiv.org/pdf/1110.6815v2.pdf on pgs. 10-11. The author says "any standard measurement involving more than one physical system may be described as a generalized measurement on one of the subsystems," which seems promising. But in the statement of the theorem, he assumes that the measurement is only performed on the ancilla, which seems like a quite restrictive assumption which weakens his claim. (He also assumes that the system and the ancilla are originally unentangled and then undergo arbitrary unitary evolution. But if you were to start from an arbitrary experimental state, there is no state-independent unitary operator that unentangles $A$ and $B$, so again this setup seems quite state-dependent.)
Edit: Perhaps I misunderstood the point of the POVM formulation. The Wikipedia article on POVM says "In rough analogy, a POVM is to a PVM what a density matrix is to a pure state ... POVMs on a physical system are used to describe the effect of a projective measurement performed on a larger system." I took this to mean that a POVM measurement is a way of restricting the effect of an arbitrary projective measurement of the purified state onto just the original system, but perhaps this is incorrect.
The standard proof I've seen shows that an arbitrary POVM measurement is equivalent to a very specific type of projective measurement on a composite system. How do we know that a more complicated/general projective measurement on a composite system (e.g. a joint measurement on both the original system and any added ancilla) can be expressed as a POVM measurement?
What you are proposing cannot work: You cannot replace a (projective) measurement on a general composite system AB by a (POVM) measurement on part A only. To see this, simply consider the case where the joint state $\vert\psi\rangle$ is of the form $$\vert\psi\rangle_{AB} = \vert0\rangle_A\vert\vartheta\rangle_B\ .$$ The reduced state of A is $\vert0\rangle\langle0\vert$ and thus completely independent of $\vert\vartheta\rangle$. No measurement on A will thus be able to reveal any information about $\vert\vartheta\rangle$.
However, you are misunderstanding the "POVM <-> projective measurement on a larger system" relation. The statement is that any POVM on a system A is equivalent to (i) adding an ancilla B in a well-defined state (say, $\vert0\rangle_B$), (ii) performing a specific unitary on AB, and (iii) carrying out a projective measurement on AB. In that case, the state of AB after step (ii) carries exactly the same information as the state A before step (i), and everything works out fine.
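For completeness, the calculation behind this equivalence is short. With the ancilla prepared in $\vert0\rangle_B$ and $U$ the unitary of step (ii), the outcome probabilities of the projective measurement $\{P_m\}$ are $$\mathrm{Pr}(m) = \langle\psi\vert_A\langle0\vert_B\,U^\dagger P_m U\,\vert\psi\rangle_A\vert0\rangle_B = \langle\psi\vert E_m\vert\psi\rangle\ ,\qquad E_m = \langle0\vert_B\,U^\dagger P_m U\,\vert0\rangle_B\ .$$ Each $E_m$ is positive semi-definite and $\sum_m E_m = \mathbb{1}_A$, so the family $\{E_m\}$ is precisely a POVM on A.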
|
2022-09-25 20:57:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8392812013626099, "perplexity": 159.71182914265233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00724.warc.gz"}
|
https://mjolnir.readthedocs.io/en/latest/Introduction.html
|
# Introduction¶
This is the introduction page for the MJOLNIR software package. The main purpose of this document is to give an overview of different features of the software and how you can contribute to it.
The software is currently developed by Jakob Lass, PhD student at both the Niels Bohr Institute, Copenhagen - Denmark, and the Paul Scherrer Institute, Villigen - Switzerland, and is not developed by a professional team. The software is intended to be used for data treatment and visualization for the CAMEA upgrade at the RITA II instrument at SINQ, PSI Villigen - Switzerland.
The software is found at GitHub and is intended to be used together with Python versions 2.7, (3.4,) 3.5, 3.6, and 3.7. This compatibility is ensured by the use of automated unit tests through the Travis project (Travis). Python 3.4 is no longer tested due to updates in the Travis testing framework. Beyond just running the tests, their coverage is monitored using Coveralls (Coveralls) to ensure thorough testing. However, certain algorithms and methods are not suited to be tested through simple tests. This includes graphical methods, where one for example uses a plotting routine to generate a specific output. Though visual inspection is far outside the testing scope for this software, some of these methods are still covered by simple run-through tests. That is, if they can be run and generate a plot without crashing and throwing an error, it is believed that they work as intended. This is where actual user testing is needed.
## IPython¶
The MJOLNIR software package makes use of many features of the interactive part of matplotlib. Thus, if the code/tutorials are run through an IPython kernel, these features might be absent. However, including the following code snippet at the top of the scripts changes the IPython matplotlib back-end to be interactive:
try:
import IPython
shell = IPython.get_ipython()
shell.enable_matplotlib(gui='qt')
except:
pass
This block can in principle also be included for regular Python kernels, as it will then throw an exception and pass. If the ‘qt’ back-end is not available, one could try a number of others; for a list, run “matplotlib.rcsetup.interactive_bk” after importing matplotlib.
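For instance, the available interactive back-ends can be listed like this (a two-line sketch using the attribute mentioned above):
import matplotlib.rcsetup

# prints something like ['GTK3Agg', 'GTK3Cairo', ..., 'Qt5Agg', 'TkAgg', ...]
print(matplotlib.rcsetup.interactive_bk)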
## Software Structure¶
The software is divided into individual modules: Instrument, DataSet, and Statistics. With this division it is intended that each part of the software suite is fully independent of the others but may be used together with them. The same goes for the tutorials, which are intended to cover all of the methods and workflows a user would come into contact with while using the software.
## Installation¶
The MJOLNIR software package is available for download and installation through the Python package index PyPI. This allows for an installation by simply writing in a terminal
pip install MJOLNIR
Depending on the set-up of the computer, one recommendation is to install MJOLNIR in a virtual environment, through the use of e.g. conda or Anaconda with different packages. Currently, MJOLNIR is not installable through conda, thus one needs to first set up the virtual environment and afterwards install the package through PyPI
conda create --name MJOLNIR python=3.6 spyder=3.1.4
pip install MJOLNIR
The specific version of Python is not a requirement but merely a suggestion. The same is valid for Spyder, which is a recommended IDE for Python and more. The specific version is the last that supports running scripts in interactive Python environments.
For a nightly-build version of MJOLNIR, which is currently being developed on the Develop branch, one needs only to change an argument to PyPI
pip install -i https://test.pypi.org/simple/ MJOLNIR
|
2020-02-23 10:53:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2864442765712738, "perplexity": 1757.5518121874738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00477.warc.gz"}
|
https://mathsgee.com/36995/what-does-the-residue-theorem-state
|
What does the residue theorem state?
Suppose that $C$ is a simple closed contour contained in the interior of a simply connected domain $D$. If $f$ is analytic everywhere on and within $C$, except possibly at a finite number of isolated singularities (call them $z_{1}, \ldots, z_{n}$) inside $C$, then
$\oint_{C} f(z) d z=2 \pi i \sum_{k=1}^{n} \operatorname{Res}\left(f(z), z_{k}\right)$
by Platinum (142,760 points)
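As a quick illustration, take $f(z)=\frac{1}{z(z-1)}$ and let $C$ be the circle $|z|=2$, which encloses the simple poles at $z=0$ and $z=1$. Then $\operatorname{Res}(f, 0)=\lim_{z\to0} z f(z) = -1$ and $\operatorname{Res}(f, 1)=\lim_{z\to1}(z-1)f(z) = 1$, so the theorem gives $\oint_C f(z)\,dz = 2\pi i(-1+1) = 0$.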
|
2022-11-29 00:26:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8148607611656189, "perplexity": 2047.7164819002667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00122.warc.gz"}
|
https://www.transtutors.com/questions/mr-valdez-has-10-000-to-invest-at-time-t-0-and-three-ways-to-invest-it-investment-ac-5221966.htm
|
# Mr. Valdez has $10,000 to invest at time t = 0, and three ways to invest it

Mr. Valdez has $10,000 to invest at time t = 0, and three ways to invest it. Investment account I is governed by compound interest with an annual effective discount rate of 3%. Investment account II has force of interest equal to 204,2. Investment account III is governed by the accumulation function a"(t) = (1 - .00512). Mr. Valdez can transfer his money between the three investments at any time. What is the maximum amount he can accumulate at time t = 5? [HINT: At all times, Mr. Valdez wishes to have his money in the account that has the greatest force of interest at that moment. Therefore, begin by determining the force of interest function for each of the investment accounts. Next decide for which time interval Mr. Valdez should have his money in each of the accounts. Assume that he accordingly moves his money to maximize his return. You will then need the accumulation functions for the accounts in order to determine Mr. Valdez's balance at t = 5. Remember to use Important Fact (1.7.4).]
IMPORTANT FACT 1.7.4: If we wish to invest money $t_1$ years from now in order to have $\$S$ $t_2$ years from now, we should invest $\$S\,\frac{v(t_2)}{v(t_1)} = \$S\,\frac{a(t_1)}{a(t_2)}$. The answer is $12,140.26. Show your work. I'm told that you must first find the force of interest and then determine the time interval the money stays in each account. Please indicate how and at what time interval Mr. Valdez should transfer the money between accounts. He should be able to get $12,140.26 by changing accounts at the right time and staying at the highest force of interest at the right time in each account.
|
2020-04-07 04:15:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34423497319221497, "perplexity": 865.6056272265126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371665328.87/warc/CC-MAIN-20200407022841-20200407053341-00202.warc.gz"}
|
https://www.gamedev.net/forums/topic/482896-basic-guidewhere-can-i-get-pdf-of-win-api-referance/
|
# Basic guide + where can I get a PDF of the Win API reference?
## Recommended Posts
Hi, I'm just preparing myself for when this "Programming Role Playing Games in DirectX" book arrives. Anyway, I'm wanting to get the basics out of the way so I'm ready for some serious reading and work when the book finally comes. I am looking at a basic DirectX guide, and the format of DirectX seems to be making a lot more sense since I've come back from working on a console OS, i.e.
int MessageBox(HWND hWnd,
LPCTSTR lpText,
LPCTSTR lpCaption,
UINT uType);
The above basic code made me realise it would be extremely beneficial for me to get hold of a good Windows API reference formatted similar to the above and explaining what each function does. I found the online MSDN but was wondering if there's a PDF version of the API? Finally, it's been a long time since I used the site; what's the format for code tags? " " doesn't seem to work. Sorry about the dumb question; I did come across some info on the latter part of my post when I first started using the forums again but can't find it now.
Thanks :)
[Edited by - firefly28 on February 14, 2008 3:19:45 PM]
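Incidentally, the signature above can be exercised without setting up a C++ project at all, e.g. from Python via ctypes (a minimal, Windows-only sketch; the strings are placeholders, but user32's MessageBoxW and its four parameters follow the documented Win32 signature):
import ctypes

MB_OK = 0x00000000  # uType flag for a plain OK button

# hWnd=None, lpText, lpCaption, uType -- matching the signature quoted above
ctypes.windll.user32.MessageBoxW(None, "Hello from the Win32 API", "Demo", MB_OK)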
Do you have a visual basic/c#/c++ ide?
Are you going to program in c or c++?
First: if you've never programmed in the Windows environment, there may be a lot more to do than getting "the basics" out of the way. Windows programming is entirely different from console programming, as it is event-driven instead of being under program flow-control.
In any case, a Visual Studio environment would be invaluable. It will have a tremendous help system (with lots of API info) and good examples.
I would suggest you try some of the windows tutorials before you jump into directx. If you're just beginning in windows programming, you may want to try out MFC (Microsoft Foundation Classes).
I'm not familiar with the various format tags (that's a different subject!).
That book would be ridiculously huge. Whenever I want to learn about a Win32 method, I use the resource you already found: the MSDN. However, I NEVER use the MSDN search since it has horrible accuracy and will turn up many false positives. The best search is to use Google, prepending the method name with 'MSDN'. It always works for me. After a while, you will learn the functionality of each Win32 function by this method.
Quote:
Original post by dmcimini: That book would be ridiculously huge. Whenever I want to learn about a Win32 method, I use the resource you already found: the MSDN. However, I NEVER use the MSDN search since it has horrible accuracy and will turn up many false positives. The best search is to use Google, prepending the method name with 'MSDN'. It always works for me. After a while, you will learn the functionality of each Win32 function by this method.
Heh, I do exactly the same thing. E.g. "MSDN MessageBox" (no quotes). Some links are more accurate than others though.
To the OP: The book is a little old now (although I don't know of any more recent ones), so some things might be out of date. I can't remember if it uses DX8 or DX9, but you should really be using DX9 with the latest SDK rather than the one that comes on the CD with the book; there have been a few changes in DX9 APIs, but almost all with D3DX stuff, and if you compare the notes in the book and comments in the code to the SDK docs it should be fairly obvious where the differences are.
This, unfortunately, is why I stopped buying DirectX books; they go out of date way too fast.
Oh, and no, despite what the book tells you, don't use DirectInput. At least not for keyboard and mouse input.
Ok, I will use the MSDN :) I really wanted a document but what the heck. Also, to answer the previous question and add what I should have had in the thread initially: the plan is to use C++ and DirectX, and I have the Microsoft Visual Studio IDE. I have worked with C++ before with SDL, and I made my program OO (2D Pong).
As mentioned in a previous thread, I don't plan to set my sights too high: only one 2D space background, two 2D spaceships for players 1 and 2, then one static planet that I can use to test basic collisions. Once I get all that done in an acceptable manner I will try to cement the concepts of MP (multiplayer), which is the only element I am 100% clueless about.
I have used the following languages:
C (set up a very basic testbed program to simulate a user who comes into a MUD, then another user enters and they can chat; used sockets, but was going through a guide to help me do this). I have used C for various different things.
C++
Assembly (beginner, but I have read and implemented a fair bit in my console OS)
Java (have created GUI programs in this, and I have implemented sockets for a mock server program that displays a text message to clients)
PHP
MySQL
Visual Basic (used this years ago to create small GUI programs)
Pascal (used many moons ago at college)
I did try DX a few years back and a concept that stumped me was callback functions; however, I think I have a better understanding of things overall this time around. I've just been looking up callback functions prior to my book arriving, and my basic understanding is that a function gets passed a pointer to another function. (I'm sure it's a lot more complex than this, and it's something I've only re-visited today, but I've read it's good for library writers.)
The thing is, at this point I know DirectX is going to be hard either way and I really want to learn it. Because I know DirectX has ugly code, I'm setting my sights very low initially; in fact, before attempting MP I'm just going to try a 2-player game where two 2D ships can blow each other up on a space background, with the ship animations made up of 2 or 3 sprites. The book I'm getting actually takes you through creating an RPG, so even if it is difficult I can glean some info from it for reference.
I don't know if I will manage it, but I'm in a position at present where I can work daily on a project, and after coming from trying to debug a console OS I can appreciate how much the Windows API is doing, whereas in the past I just thought it was unnecessarily ugly and overcomplicated code.
Just to comment on the last post: yeah, I think I will need to make some changes, as a few things are indeed out of date from what I've read. I'm just checking out your DirectInput link at this moment. Thanks for that, the more info the better :)
Quote:
Original post by firefly28: I'm just checking out your DirectInput link at this moment. Thanks for that, the more info the better :)
This is a useful link too. The debug runtimes are extremely useful; I really don't know where I'd be without them.
Awesome Steve :), those should help immensely, thanks!
I can't wait to get this book :) Even though it's dated it should help somewhat, and not having it just now means I'm hauling heaps of useful DX resources into a folder in my favourites!
Finally, I've now fully checked the DirectInput link, and it does put forward very concise reasons for not using it, so I will definitely make sure I don't use it in my code.
Responses Appreciated All
Firefly
The Windows API (whose 32-bit version is usually referred to as Win32) is part of the Windows SDK. An older version of this comes with Visual Studio 2005 Pro, and the latest version comes with all versions of Visual Studio 2008. All of the Windows API documentation can be found in the MSDN library, but the platform SDK comes with documentation that you can browse offline.
Just checking back: I'm installing those debug runtimes right now. A week ago I was doing a guide where in parts of the code the guy uses a function to convert various numbers into degrees, and I was getting confused when he used radians in another part of the code, but I've managed to get through all that and I just got a spinning 3D prism built, so I'm getting there :) Just before moving on to meshes I'm messing about with the early lessons' code, and those debug runtimes will be handy now! I'm glad I found this post again :)
(Still waiting on my book though :( It got sent from the US and there was some botch at customs! [Someone couldn't read the address? Who knows, hehe.] Hopefully I will finally get the book soon.)
|
2017-12-15 06:49:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2750287353992462, "perplexity": 1511.6798355706835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567042.50/warc/CC-MAIN-20171215060102-20171215080102-00465.warc.gz"}
|
https://discuss.codechef.com/questions/19990/knowing-running-time-from-given-algorithm-complexity
|
# knowing running time from given algorithm complexity
Hi guys, let's say we know our algorithm's complexity, so we know how many operations (give or take) our program will do, but how do we know if it will stay within the time limits? Let's say we have an algorithm with complexity N*sqrt(N), and N = 10^5; will this algorithm finish in 1 second? For the purpose of this question let's assume we are using C++. Thanks! asked 08 Aug '13, 22:37 by boochman
Regarding complexity: O(2n) ~ O(1000n). Regarding execution time: O(2n) !~ O(1000n). Thus, we can't answer your question without knowing what you assume to be a "constant time" operation and what is not. Operations whose cost does not depend on the input still take time to execute on a machine. :) answered 09 Aug '13, 22:26
As cyberax says, without knowing how you define a constant-time operation, it's difficult to answer accurately. But you can do some profiling on your own. Write a simple looping program that performs different sorts of computations and submit it on the judge you want to profile. Now apply a binary search on the number of iterations in your program (TLE signifying that you should reduce the number of iterations and WA signifying that you should increase the iterations) and find a closer approximation of the time taken. But make sure that your looping program doesn't get wiped out by compiler optimization. I hope that helps! :) answered 10 Aug '13, 23:25 by sid_gup
I am no expert on this topic. But usually, ~10^6 steps will easily get executed under a second. The usual average value is more than this. answered 08 Aug '13, 22:46 / I'm not sure about it, but I think 10^6 is too easy (08 Aug '13, 23:22) boochman / I even think 10^7 will pass easily, under a second. (Not sure, though.) (08 Aug '13, 23:26)
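A crude way to calibrate this yourself is the sketch below (assumptions: Python, one cheap integer operation per step; Python's constant factors are far larger than C++'s, so treat the result as a pessimistic upper bound). It times roughly N*sqrt(N) trivial operations for N = 10^5:
import math
import time

N = 10 ** 5
ops = N * math.isqrt(N)          # about 3.16 * 10^7 elementary steps

start = time.perf_counter()
total = 0
for i in range(ops):
    total += i & 1               # stand-in for one "constant time" operation
elapsed = time.perf_counter() - start

print(f"{ops} steps took {elapsed:.2f} s")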
|
2018-11-16 01:53:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7399970889091492, "perplexity": 3341.1217043997112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742968.18/warc/CC-MAIN-20181116004432-20181116025720-00007.warc.gz"}
|
https://ftp.aimsciences.org/article/doi/10.3934/proc.2007.2007.212
|
# Relaxation approximation of the Kerr model for the impedance initial-boundary value problem
• The Kerr-Debye model is a relaxation of the nonlinear Kerr model in which the relaxation coefficient is a finite response time of the nonlinear material. We establish the convergence of the Kerr-Debye model to the Kerr model when this relaxation coefficient tends to zero.
Mathematics Subject Classification: 35L50, 35Q60.
Open Access Under a Creative Commons license
|
2022-12-08 16:44:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.245937779545784, "perplexity": 1696.632039054543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00351.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/dcds.1996.2.281
|
# Control of plate vibrations by means of piezoelectric actuators
We consider initial and boundary value problems modelling the vibrations of a plate with a piezoelectric actuator. The simplest model leads to the Bernoulli-Euler plate equation with right-hand side given by a distribution concentrated on an interior curve, multiplied by a real-valued time function representing the voltage applied to the actuator. We prove that, generically with respect to the curve, the plate vibrations can be strongly stabilized and approximately controlled by means of the voltage applied to the actuator.
Mathematics Subject Classification: 35B37, 93C20.
|
2023-03-31 22:34:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5528106093406677, "perplexity": 666.8798570769569}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00567.warc.gz"}
|
https://physics.stackexchange.com/questions/138792/floquet-quasienergy-spectrum-continuous-or-discrete/138874
|
Floquet quasienergy spectrum, continuous or discrete?
I haven't got a good feel for Floquet quasienergy, although it is talked about by many people these days.
Floquet theorem:
Consider a Hamiltonian which is time periodic $$H(t)=H(t+\tau)$$. The Floquet theorem says that there exists a basis of solutions to the Schrödinger equation of the form $$\psi(r,t)=e^{-i\varepsilon t}u(r,t)\ ,$$ where $$u(r,t)$$ is a function periodic in time.
We can rewrite the Schrödinger equation as
$$\mathscr{H}u(r,t)=\left[H(t)-\mathrm{i}\hbar\frac{\partial}{\partial t}\right]u(r,t)=\varepsilon u(r,t)\ ,$$
where the Floquet hamiltonian $$\mathscr{H}$$ can be thought of as a Hermitian operator in the Hilbert space $$\mathcal{R}\otimes\mathcal{T}$$, where $$\mathcal{R}=L_2(\mathbb R^3)$$ is the Hilbert space of square-integrable functions of $$\vec r$$, and $$\mathcal{T}$$ is the Hilbert space of all square-integrable periodic functions with periodicity $$\tau$$. Then the above equation can be thought of as an analogue of the stationary Schrödinger equation, with the real eigenvalue $$\varepsilon$$ defined as the Floquet quasienergy.
My question is, since for the stationary Schrödinger equation we can have both continuous or discrete spectra, how about the Floquet quasienergy?
Another thing is, is this a measurable quantity? If it is, in what sense is it measurable? (I mean, in the stationary case the eigenenergy difference is a gauge-invariant quantity; what about the quasienergy?)
• Evolution equations with time-dependent generators are difficult to treat in a rigorous way. One standard source is the book by Pazy. A reference that seems more tailored on your question is this book. – yuggib Oct 5 '14 at 14:06
In the stationary Schrödinger equation, we can have a continuous or a discrete spectrum. How about Floquet quasienergies?
You can have both. In one sense it is trivial to show this, since any constant hamiltonian is also periodic, but presumably you want some more physical examples, so here's two.
• For a continuous spectrum, start with a nonrelativistic free charged particle and add an oscillating uniform electric field, so the hamiltonian is $$\hat H(t)=\frac12\hat p^2+\hat x\,E_0\cos(\omega t).$$ The cleanest solutions are Volkov states $|\Psi_p(t)⟩$, which are plane waves with canonical momentum $p$ but a kinematic momentum $p+A(t)=p+\tfrac{E_0}{\omega}\sin(\omega t)$ which follows the vector potential of the field, i.e. $$⟨x|\Psi_p(t)⟩=\frac{1}{\sqrt{2\pi}}e^{-\frac i2\int^t(p+A(\tau))^2\mathrm d\tau}e^{i(p+A(t))x}.$$ (Modulo constants and signs, which you should check yourself.) The Volkov states are Floquet states, with quasienergy $$\varepsilon_p=\frac{p^2}{2}+U_p=\frac{p^2}{2}+\frac{E_0^2}{4\omega^2},$$ where $U_p$ is the ponderomotive potential of the field, i.e. the mean energy of the oscillatory motion. They're also a complete set, with $\int |\Psi_p(t)⟩⟨\Psi_p(t)|\mathrm dp=1$ and $⟨\Psi_p(t)|\Psi_{p'}(t)⟩=\delta(p-p')$, which is nice, but it also means that they're not the only possible Floquet basis as any linear combination of $|\Psi_p(t)⟩$ and $|\Psi_{\pm\sqrt{p^2+2n\omega}}(t)⟩$, $n\in\mathbb Z$, is also a Floquet state. So the Floquet manifold is either one big continuum, or multiple overlapping continua, which are equivalent given the usual Floquet-ladder degeneracy.
• For a discrete spectrum, simply take any finite-dimensional initial Hilbert space $\mathcal{H}$ and add any periodic hamiltonian $H(t)=H(t+T)$. Then the quasienergies $\varepsilon$ (or rather, the exponentiated form $e^{i\varepsilon T}$) are the eigenvalues of the one-period propagator $U(t_0+T,t_0)$ for any starting time $t_0$, where the propagator obeys $i\partial_t U(t,t')=H(t)U(t,t')$ and $U(t',t')=1$. Since $U(t_0+T,t_0)$ is an operator on the finite-dimensional $\mathcal H$, it can only have a discrete set of eigenvalues.
I can hear you grumble and say that that's cheating, and that one should take a "natural" discrete-spectrum problem and show that its Floquet quasienergies are still discrete. For some examples of this nature, see e.g. Commun. Math. Phys. 177 no. 2, 327 (1996).
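To make the discrete case concrete, here is a small numerical sketch of my own (a toy driven two-level system, $H(t) = \tfrac{\delta}{2}\sigma_z + \tfrac{A}{2}\cos(\omega t)\,\sigma_x$ with $\hbar=1$, not taken from the cited paper), computing the quasienergies as the eigenphases of the one-period propagator built by time-slicing:
import numpy as np
from scipy.linalg import expm

sigma_z = np.diag([1.0, -1.0])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
delta, A, omega = 1.0, 0.5, 2.0
T = 2.0 * np.pi / omega

# build U(T) by time-slicing the Schroedinger equation
steps = 2000
dt = T / steps
U = np.eye(2, dtype=complex)
for k in range(steps):
    t = (k + 0.5) * dt
    H = 0.5 * delta * sigma_z + 0.5 * A * np.cos(omega * t) * sigma_x
    U = expm(-1j * H * dt) @ U

# eigenvalues are exp(-i * eps * T): a finite, discrete set of quasienergies
eps = -np.angle(np.linalg.eigvals(U)) / T
print("quasienergies (defined mod omega):", np.sort(eps))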
• Thanks for your edit and instructive answer. Do you have an idea how the Floquet quasi-energy can be measured? That is the second part of my question: since E and E+nℏω are the same state, it is a little bit confusing that when you measure the same state, you get two energies, as in the experiment linked in the other answer. Can the Floquet quasi-energies representing the same state be distinguished within the Floquet formalism? I think not. Do you agree? – an offer can't refuse Jan 5 '16 at 9:13
• I'm not completely sure, but the answer is probably "it depends what you mean". In a purely monochromatic case, maybe, but real experiments take finite time so some of the Floquet energy surfaces can couple (example). These are subtle questions and they're not easy to answer, and they're often covered in ambiguities: does the molecule go through a light-induced conical intersection, or does it simply absorb a photon? If you ask a more precise question I may be able to help. – Emilio Pisanty Jan 5 '16 at 18:30
• (If you do, it would be good if you can ask it separately and in a way that Xcheckr's answer can be migrated there. You're asking two completely different questions in the current version.) – Emilio Pisanty Jan 5 '16 at 18:32
You can think of a Floquet energy in a similar way to a Bloch state. In the latter case, because space is periodic, the momentum states are repeated at every reciprocal lattice vector, $\textbf{G}$. For a Floquet state, because time is periodic, energy states are repeated at every $n\hbar \omega$, where $n$ is an integer and $\omega = 2\pi/\tau$ is set by the drive period $\tau$ (in the experiment, the periodicity of the laser field).
Here is an image from the attached paper in case you cannot view it, but I highly recommend reading the below paper if you are interested in Floquet states. You can see (barely) in the image below that the Dirac cone (which was chosen as the system studied here for no particular reason) is repeated at several multiples of $\hbar \omega$ above and below the "actual" Dirac cone at $n=0$. You can see the $n=1$, $n=2$, and $n=-1$ states pretty clearly.
See the paper here:
https://www.sciencemag.org/content/342/6157/453?related-urls=yes&legid=sci;342/6157/453
• I happened to see this paper before. So your answer seems to say the quasi-energy is gauge-invariant in the same sense as the Bloch energy. Is that what you mean? Also, what about my first question? – an offer can't refuse Oct 6 '14 at 5:28
• @luming Yes, the quasi-energy must be gauge-invariant or it wouldn't be measurable! Also, perhaps I don't understand in what sense you mean discrete vs. continuous? It seems to me that the quasi-energies are not very different than Bloch states with different bands, except separated by constant energy steps. – Xcheckr Oct 6 '14 at 13:45
|
2021-05-10 19:56:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8861624598503113, "perplexity": 366.4765783591977}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00123.warc.gz"}
|
https://en.m.wikibooks.org/wiki/GLSL_Programming/GLUT/Lighting_of_Bumpy_Surfaces
|
# GLSL Programming/GLUT/Lighting of Bumpy Surfaces
“The Incredulity of Saint Thomas” by Caravaggio, 1601-1603.
This tutorial covers normal mapping.
It's the first of two tutorials about texturing techniques that go beyond two-dimensional surfaces (or layers of surfaces). In this tutorial, we start with normal mapping, which is a very well established technique to fake the lighting of small bumps and dents — even on coarse polygon meshes. The code of this tutorial is based on the tutorial on smooth specular highlights and the tutorial on textured spheres.
### Perceiving Shapes Based on Lighting
The painting by Caravaggio that is depicted to the left is about the incredulity of Saint Thomas, who did not believe in Christ's resurrection until he put his finger in Christ's side. The furrowed brows of the apostles not only symbolize this incredulity but clearly convey it by means of a common facial expression. However, why do we know that their foreheads are actually furrowed instead of being painted with some light and dark lines? After all, this is just a flat painting. In fact, viewers intuitively make the assumption that these are furrowed instead of painted brows — even though the painting itself allows for both interpretations. The lesson is: bumps on smooth surfaces can often be convincingly conveyed by the lighting alone without any other cues (shadows, occlusions, parallax effects, stereo, etc.).
### Normal Mapping
Normal mapping tries to convey bumps on smooth surfaces (i.e. coarse triangle meshes with interpolated normals) by changing the surface normal vectors according to some virtual bumps. When the lighting is computed with these modified normal vectors, viewers will often perceive the virtual bumps — even though a perfectly flat triangle has been rendered. The illusion can certainly break down (in particular at silhouettes) but in many cases it is very convincing.
More specifically, the normal vectors that represent the virtual bumps are first encoded in a texture image (i.e. a normal map). A fragment shader then looks up these vectors in the texture image and computes the lighting based on them. That's about it. The problem, of course, is the encoding of the normal vectors in a texture image. There are different possibilities and the fragment shader has to be adapted to the specific encoding that was used to generate the normal map.
A typical example for the appearance of an encoded normal map.
### Normal Mapping
We will use the normal map to the left and write a GLSL shader to use it.
Normal maps can be tested and created with Blender (among others); see the description in the Blender 3D: Noob to Pro wikibook.
For this tutorial, you should use a cube mesh instead of the UV sphere that was used in the tutorial on textured spheres. Apart from that you can follow the same steps to assign a material and the texture image to the object. Note that you should specify a default UV Map in the Properties window > Object Data tab. Furthermore, you should specify Coordinates > UV in the Properties window > Textures tab > Mapping.
When decoding the normal information, it would be best to know how the data was encoded. However, there are not so many choices; thus, even if you don't know how the normal map was encoded, a bit of experimentation can often lead to sufficiently good results. First of all, the RGB components are numbers between 0 and 1; however, they usually represent coordinates between -1 and 1 in a local surface coordinate system (since the vector is normalized, none of the coordinates can be greater than +1 or less than -1). Thus, the mapping from RGB components to coordinates of the normal vector $\mathbf{n}=(n_x,n_y,n_z)$ could be:
$n_x = 2R - 1$, $n_y = 2G - 1$, and $n_z = 2B - 1$
However, the $n_z$ coordinate is usually positive (because surface normals are not allowed to point inwards). This can be exploited by using a different mapping for $n_z$:
$n_x = 2R - 1$, $n_y = 2G - 1$, and $n_z = B$
If in doubt, the latter decoding should be chosen because it will never generate surface normals that point inwards. Furthermore, it is often necessary to normalize the resulting vector.
An implementation in a fragment shader that computes the normalized vector $\mathbf{n}=(n_x,n_y,n_z)$ in the variable localCoords could be:
vec4 encodedNormal = texture2D(normalmap, texCoords);
vec3 localCoords = 2.0 * encodedNormal.rgb - vec3(1.0);
Tangent plane to a point on a sphere.
Usually, a local surface coordinate system for each point of the surface is used to specify normal vectors in the normal map. The $z$ axis of this local coordinate system is given by the smooth, interpolated normal vector N, and the $x$-$y$ plane is a tangent plane to the surface, as illustrated in the image to the left. Specifically, the $x$ axis is specified by the tangent attribute T that the 3D engine provides to vertices. Given the $x$ and $z$ axes, the $y$ axis can be computed by a cross product in the vertex shader, e.g. B = T × N. (The letter B refers to the traditional name “binormal” for this vector.)
Note that the normal vector N is transformed with the transpose of the inverse model-view matrix from object space to view space (because it is orthogonal to a surface; see “Applying Matrix Transformations”) while the tangent vector T specifies a direction between points on a surface and is therefore transformed with the model-view matrix. The binormal vector B represents a third class of vectors which are transformed differently. (If you really want to know: the skew-symmetric matrix B corresponding to “B×” is transformed like a quadratic form.) Thus, the best choice is to first transform N and T to view space, and then to compute B in view space using the cross product of the transformed vectors.
Also note that the configuration of these axes depends on the tangent data that is provided, the encoding of the normal map, and the texture coordinates. However, the axes are practically always orthogonal and a bluish tint of the normal map indicates that the blue component is in the direction of the interpolated normal vector.
With the normalized directions T, B, and N in view space, we can easily form a matrix that maps any normal vector n of the normal map from the local surface coordinate system to view space because the columns of such a matrix are just the vectors of the axes; thus, the 3×3 matrix for the mapping of n to view space is:
$$\mathrm{M}_{\text{surface}\to\text{view}} = \begin{bmatrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{bmatrix}$$
These calculations are performed by the vertex shader, for example this way:
attribute vec4 v_coord;
attribute vec3 v_normal;
attribute vec2 v_texcoords;
attribute vec3 v_tangent;
uniform mat4 m, v, p;
uniform mat3 m_3x3_inv_transp;
varying mat3 localSurface2World; // mapping from
// local surface coordinates to world coordinates
varying vec2 texCoords; // texture coordinates
varying vec4 position; // position in world coordinates
void main()
{
mat4 mvp = p*v*m;
position = m * v_coord;
// the signs and whether tangent is in localSurface2World[1] or
// localSurface2World[0] depend on the tangent attribute, texture
// coordinates, and the encoding of the normal map
localSurface2World[0] = normalize(vec3(m * vec4(v_tangent, 0.0)));
localSurface2World[2] = normalize(m_3x3_inv_transp * v_normal);
localSurface2World[1] = normalize(cross(localSurface2World[2], localSurface2World[0]));
texCoords = v_texcoords;
gl_Position = mvp * v_coord;
}
In the fragment shader, we multiply this matrix with n (i.e. localCoords). For example, with this line:
vec3 normalDirection = normalize(localSurface2World * localCoords);
With the new normal vector in view space, we can compute the lighting as in the tutorial on smooth specular highlights.
### Complete Shader Code
The complete shader code (vertex shader followed by fragment shader) simply integrates all the snippets with the per-pixel lighting from the tutorial on smooth specular highlights. First the vertex shader:
attribute vec4 v_coord;
attribute vec3 v_normal;
attribute vec2 v_texcoords;
attribute vec3 v_tangent;
uniform mat4 m, v, p;
uniform mat3 m_3x3_inv_transp;
varying vec4 position; // position of the vertex (and fragment) in world space
varying vec2 texCoords;
varying mat3 localSurface2World; // mapping from local surface coordinates to world coordinates
void main()
{
mat4 mvp = p*v*m;
position = m * v_coord;
// the signs and whether tangent is in localSurface2View[1] or
// localSurface2View[0] depends on the tangent attribute, texture
// coordinates, and the encoding of the normal map
localSurface2World[0] = normalize(vec3(m * vec4(v_tangent, 0.0)));
localSurface2World[2] = normalize(m_3x3_inv_transp * v_normal);
localSurface2World[1] = normalize(cross(localSurface2World[2], localSurface2World[0]));
texCoords = v_texcoords;
gl_Position = mvp * v_coord;
}
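The corresponding fragment shader: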
uniform mat4 m, v, p;
uniform mat4 v_inv;
uniform sampler2D normalmap;
varying vec4 position; // position of the vertex (and fragment) in world space
varying vec2 texCoords; // the texture coordinates
varying mat3 localSurface2World; // mapping from local surface coordinates to world coordinates
struct lightSource
{
vec4 position;
vec4 diffuse;
vec4 specular;
float constantAttenuation, linearAttenuation, quadraticAttenuation;
float spotCutoff, spotExponent;
vec3 spotDirection;
};
lightSource light0 = lightSource(
vec4(0.0, 2.0, -1.0, 1.0),
vec4(1.0, 1.0, 1.0, 1.0),
vec4(1.0, 1.0, 1.0, 1.0),
0.0, 1.0, 0.0,
180.0, 0.0,
vec3(0.0, 0.0, 0.0)
);
vec4 scene_ambient = vec4(0.2, 0.2, 0.2, 1.0);
struct material
{
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
material frontMaterial = material(
vec4(0.2, 0.2, 0.2, 1.0),
vec4(0.920, 0.471, 0.439, 1.0),
vec4(0.870, 0.801, 0.756, 0.5),
50.0
);
void main()
{
vec4 encodedNormal = texture2D(normalmap, texCoords);
vec3 localCoords = 2.0 * encodedNormal.rgb - vec3(1.0);
vec3 normalDirection = normalize(localSurface2World * localCoords);
vec3 viewDirection = normalize(vec3(v_inv * vec4(0.0, 0.0, 0.0, 1.0) - position));
vec3 lightDirection;
float attenuation;
if (0.0 == light0.position.w) // directional light?
{
attenuation = 1.0; // no attenuation
lightDirection = normalize(vec3(light0.position));
}
else // point light or spotlight (or other kind of light)
{
vec3 positionToLightSource = vec3(light0.position - position);
float distance = length(positionToLightSource);
lightDirection = normalize(positionToLightSource);
attenuation = 1.0 / (light0.constantAttenuation
+ light0.linearAttenuation * distance
+ light0.quadraticAttenuation * distance * distance);
if (light0.spotCutoff <= 90.0) // spotlight?
{
float clampedCosine = max(0.0, dot(-lightDirection, light0.spotDirection));
if (clampedCosine < cos(radians(light0.spotCutoff))) // outside of spotlight cone?
{
attenuation = 0.0;
}
else
{
attenuation = attenuation * pow(clampedCosine, light0.spotExponent);
}
}
}
vec3 ambientLighting = vec3(scene_ambient) * vec3(frontMaterial.ambient);
vec3 diffuseReflection = attenuation
* vec3(light0.diffuse) * vec3(frontMaterial.diffuse)
* max(0.0, dot(normalDirection, lightDirection));
vec3 specularReflection;
if (dot(normalDirection, lightDirection) < 0.0) // light source on the wrong side?
{
specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
}
else // light source on the right side
{
specularReflection = attenuation * vec3(light0.specular) * vec3(frontMaterial.specular)
* pow(max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection)), frontMaterial.shininess);
}
gl_FragColor = vec4(ambientLighting + diffuseReflection + specularReflection, 1.0);
}
### Summary
Congratulations! You finished this tutorial! We have looked at:
• How human perception of shapes often relies on lighting.
• What normal mapping is.
• How to decode common normal maps.
• How a fragment shader can decode a normal map and use it for per-pixel lighting.
|
2019-04-24 18:26:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 17, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43069568276405334, "perplexity": 3884.8776103362766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578655155.88/warc/CC-MAIN-20190424174425-20190424200425-00357.warc.gz"}
|
https://testbook.com/question-answer/ravi-and-mohan-together-can-complete-a-task-in-3-d--60474f8c3ce77cd6004f21ee
|
# Ravi and Mohan together can complete a task in 3 days. Ravi alone can complete the same task in 7 days. How many days will Mohan alone take to complete the same task?
This question was previously asked in
SSC CHSL Previous Paper 112 (Held On: 26 Oct 2020 Shift 2)
1. $$5\frac{1}{4}$$ days
2. $$4\frac{1}{5}$$ days
3. 10 days
4. 4 days
Option 1 : $$5\frac{1}{4}$$ days
## Detailed Solution
Given
Ravi and Mohan together can complete the work in 3 days
Ravi alone can complete the work in 7 days
Formula Used
Work = time × efficiency
Calculation
⇒ LCM of (3, 7) = 21
⇒ Let the total work be 21
⇒ Efficiency of Ravi and Mohan together = 21/3 = 7
⇒ Efficiency of Ravi alone = 21/7 = 3
⇒ Efficiency of Mohan = 7 - 3 = 4
⇒ Time taken by Mohan to complete the work alone = 21/4 = $$5\frac{1}{4}$$ days
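As a cross-check, the same answer drops out of the work-rate formulation: $$\frac{1}{t_{Mohan}} = \frac{1}{3} - \frac{1}{7} = \frac{7-3}{21} = \frac{4}{21}$$, so $$t_{Mohan} = \frac{21}{4} = 5\frac{1}{4}$$ days.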
|
2021-09-23 12:38:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.707629919052124, "perplexity": 3599.6721077869774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057421.82/warc/CC-MAIN-20210923104706-20210923134706-00517.warc.gz"}
|
https://matheducators.stackexchange.com/questions/12450/what-do-the-common-core-standards-expect-secondary-students-to-learn-about-logar
|
# What do the Common Core Standards expect secondary students to learn about logarithms or the number $e$?
I've been looking through the Common Core State Standards (http://www.corestandards.org/Math/) and have been surprised to find very little reference to exponential functions and logarithms. Specifically, as far as I can tell, none of the following are included in the Standards:
• Formal properties of logarithms, e.g. $$\log(ab)=\log(a)+\log(b)$$, etc.
• Exponential functions written with the base $$e$$, i.e. functions of the form $$f(x)=Ae^{kx}$$.
• Continuously compounded interest — or in fact anything about compounding interest.
As a matter of fact, the only standards I can find that mention logarithms or the number $$e$$ are the following:
F-IF.7e: Graph exponential and logarithmic functions, showing intercepts and end behavior
F-BF.5: Understand the inverse relationship between exponents and logarithms and use this relationship to solve problems involving logarithms and exponents.
F-LE.4: For exponential models, express as a logarithm the solution to $$a\cdot b^{ct} = d$$ where $$a$$, $$c$$, and $$d$$ are numbers and the base $$b$$ is $$2$$, $$10$$, or $$e$$; evaluate the logarithm using technology.
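(For concreteness, the computation F-LE.4 points at is the standard rearrangement below; this worked line is an illustration, not a quotation from the Standards.)

$$a\cdot b^{ct}=d \;\Longrightarrow\; b^{ct}=\frac{d}{a} \;\Longrightarrow\; t=\frac{\log_b(d/a)}{c}=\frac{\log(d/a)}{c\,\log b}.$$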
I know that the CCSS are not intended to be a comprehensive listing of absolutely everything that students are expected to learn, but it still seems bizarre to me that they do not include basic facts about $$e$$ or the fundamental properties of logarithms. Am I just missing them? Are they in there somewhere?
(Note: I do not intend or hope for this question to prompt an opinion-based discussion about whether the Standards are good, bad, or otherwise. I just want to know what, if anything, the Standards say about exponential and logarithmic functions, other than what I have already listed.)
• There are some mentions in Appendix A. (Search through the document for, e.g., "logarithm" to find example phrasing.) Note that these mentions contain a $+$ ... see Overview #2 (p. 2) for a brief explanation of this symbol. For one more example, check here. – Benjamin Dickman Jun 13 '17 at 5:39
• Yeah, I knew about those other mentions and probably should have included them in my list -- but they are so non-specific that they didn't seem worth including. (Lots of the Standards about functions include boilerplate language along the lines of "...such as polynomials, rational functions, trigonometric functions, exponential functions, and logarithmic functions.") – mweiss Jun 13 '17 at 17:31
• You might find this more general discussion about why logarithms aren't included earlier in high school curricula of interest: matheducators.stackexchange.com/questions/1820/… – James S. Jun 28 '17 at 9:16
• I think one thing to note is that the CCSS really only go through algebra 2, with occasional hints as to what might be included in trig and pre-calc. I imagine you'd go into a lot more depth about logarithms in pre-calc. – James S. Jun 28 '17 at 9:17
The standards that you identify actually do cover the things you assume are not covered. The formal properties of logarithms, for example, are proved using exponents, thus: F-BF.5: Understand the inverse relationship between exponents and logarithms and use this relationship to solve problems involving logarithms and exponents.
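(As an aside, and not quoted from the Standards: the inverse relationship yields the product rule in one line. If $$x=\log_b a$$ and $$y=\log_b c$$, then $$ac=b^{x}b^{y}=b^{x+y}$$, so $$\log_b(ac)=x+y=\log_b a+\log_b c.$$)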
Exponential functions of base e are covered by:
For exponential models, express as a logarithm the solution to $a\cdot b^{ct} = d$ where $a$, $c$, and $d$ are numbers and the base $b$ is $2$, $10$, or $e$; evaluate the logarithm using technology.
And although CCSS doesn't often reference finance based applications, it continually reiterates real-world problems and applications, of which finance is a major category. This makes sense, since CCSS does not spell out every application a teacher might utilize, but rather leaves it open as to what real-world situations are used for each topic.
The CCSS focuses less on procedures and rules, and more on analysis of functions. So there is a lot more attention given to the graphical and analytic properties of logarithms and exponents than the rules for manipulating them.
The CCSS is also intended to be parsimonious rather than comprehensive, in that it does not give a laundry list of topics to "cover" but rather tries to get at the core of what needs to be known.
Finally, remember that CCSS only goes up through algebra two, with an occasional + standard hinting at what might be covered in pre-calc. The idea is to create a basic level of what needs to be learned before going on to college and careers, not to be exhaustive about everything a potential STEM major might want to learn in high school.
• Good answer overall, but this sentence is confusing: "For exponential models, express as a logarithm the solution to a⋅bct=da⋅bct=d where aa, cc, and dd are numbers and the base bb is 22, 1010, or ee". Is there a formatting issue there? (FYI LaTeX code is accepted when you type.) – Brendan W. Sullivan Jan 27 '18 at 14:57
• It comes directly from the original poster’s post; I cut and pasted it. Is there any way to view their post in LaTeX? – James S. Jan 30 '18 at 12:44
• Aha I see that now. If you click "Edit" on OP, you can see the source formatting. You can copy the text from there, then close that tab, and then paste into your answer. Perhaps there is a better solution (and @quid would know) but that's all I can think of. – Brendan W. Sullivan Jan 30 '18 at 14:44
https://web.cs.elte.hu/egres/www/tr-18-05.html
## An Edmonds-Gallai-Type Decomposition for the $j$-Restricted $k$-Matching Problem
### Abstract
Given a non-negative integer $j$ and a positive integer $k$, a $j$-restricted $k$-matching in a simple undirected graph is a $k$-matching such that each of its connected components has at least $j+1$ edges. The maximum non-negative node-weighted $j$-restricted $k$-matching problem was recently studied by Li, who gave a polynomial-time algorithm and a min-max theorem for $0 \leq j < k$, and also proved the NP-hardness of the problem with unit node weights and $2 \leq k \leq j$. In this paper we derive an Edmonds-Gallai-type decomposition theorem for the $j$-restricted $k$-matching problem with $0 \leq j < k$, using the analogous decomposition for $k$-piece packings given by Janata, Loebl and Szabó, and give an alternative proof of the min-max theorem of Li.
Bibtex entry:
@techreport{egres-18-05,
AUTHOR = {Li, Yanjun and Szab{\'o}, J{\'a}cint},
TITLE = {An Edmonds-Gallai-Type Decomposition for the $j$-Restricted $k$-Matching Problem},
NOTE= {{\tt www.cs.elte.hu/egres}},
INSTITUTION = {Egerv{\'a}ry Research Group, Budapest},
YEAR = {2018},
NUMBER = {TR-2018-05}
}
https://thesevenworlds.wordpress.com/tag/science/
## Ol’ Gyroscopes
#### Gyroscope
This most attractive example is in the apparatus collection at Bowdoin College. It has no maker’s name and is about 40 cm high.
The gyroscope was invented in 1852 by the French experimental physicist Léon Foucault (1819-1868) as part of a two-pronged investigation of the rotation of the earth. The better-known demonstration of the Foucault pendulum showed that the plane of oscillation of a freely swinging pendulum rotates with a period that depends on the latitude of its location.
His gyroscope was a rapidly rotating disk with a heavy rim, mounted in low-friction gimbals. As the earth rotated beneath the gyroscope, it would maintain its orientation in space. This proved hard to achieve in practice because frictional forces brought the spinning system to rest before the effect could be observed. The gimbal bearings also introduced unwanted torque. But the principle is well known to all children who move their toy gyroscopes about and observe that the spinning disk stays in the same orientation. …
## Get A Gyroscope
#### How Gyroscopes Work
by M. Brain
Gyroscopes can be very perplexing objects because they move in peculiar ways and even seem to defy gravity. These special properties make gyroscopes extremely important in everything from your bicycle to the advanced navigation system on the space shuttle. A typical airplane uses about a dozen gyroscopes in everything from its compass to its autopilot. The Russian Mir space station used 11 gyroscopes to keep its orientation to the sun, and the Hubble Space Telescope has a batch of navigational gyros as well. Gyroscopic effects are also central to things like yo-yos and Frisbees!
Precession
If you have ever played with toy gyroscopes, you know that they can perform all sorts of interesting tricks. They can balance on a string or a finger; they can resist motion about the spin axis in very odd ways; but the most interesting effect is called precession. This is the gravity-defying part of a gyroscope. …
## Force of Quet
#### Torque
In physics, torque can be thought of informally as “rotational force”. The concept of torque, also called moment or couple, originated with the work of Archimedes on levers. The force applied to a lever, multiplied by its distance from the lever’s fulcrum, is the torque. Torque is measured in units of newton metres, and its symbol is τ.
For example, a force of three newtons applied two metres from the fulcrum exerts the same torque as one newton applied six metres from the fulcrum. This assumes the force is in a direction at right angles to the straight lever. More generally, one may define torque as the cross product:
$\boldsymbol{\tau} = \mathbf{r} \times \mathbf{F}$
where
F is the vector of force.
r is the vector from the axis of rotation to the point on which the force is acting.
The rotational analogues of force, mass and acceleration are torque, moment of inertia and angular acceleration, respectively. …
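The lever example above can be checked directly against the cross-product definition; the specific vectors below are one illustrative way to realize "at right angles to the lever":

```python
import numpy as np

r = np.array([2.0, 0.0, 0.0])   # lever arm: 2 m from the fulcrum, along x
F = np.array([0.0, 3.0, 0.0])   # force: 3 N at right angles, along y

tau = np.cross(r, F)            # torque vector, in newton metres
print(tau)                      # -> [0. 0. 6.], i.e. 6 N·m about the z-axis

# One newton applied six metres from the fulcrum gives the same torque:
print(np.cross([6.0, 0.0, 0.0], [0.0, 1.0, 0.0]))   # -> [0. 0. 6.]
```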
## Prime Encryption
#### Introduction
Every cipher we have worked with up to this point has been what is called a symmetric key cipher, in that the key with which you encipher a plaintext message is the same as the key with which you decipher a ciphertext message. As we have discussed from time to time, this leads to several problems. One of these is that, somehow, two people who want to use such a system must privately and secretly agree on a secret key. This is quite difficult if they are a long distance apart (it requires either a trusted courier or an expensive trip), and is wholly impractical if there is a whole network of people (for example, an army) who need to communicate. Even the sophisticated Enigma machine required secret keys. In fact, it was exactly the key distribution problem that led to the initial successful attacks on the Enigma machine.
However, in the late 1970s, several people came up with a remarkable new way to solve the …
## 1kb mtDNA Loop
Mutations in mtDNA D-loop region of mtDNA in various tissues of Papuan individuals
Johnson Siallagan,1 Agnes Maryuni,1 Jukwati,2 Rosye H. R. Tanjung3 and Yohanis Ngili1, Der Pharmacia Lettre, 2016, 8 (14):73-79
1 Department of Chemistry, Faculty of Mathematics and Natural Science, University of Cenderawasih, Jayapura, Indonesia. 2 Study Program of Chemistry, Faculty of Teacher Training and Education, University of Cenderawasih, Jayapura, Indonesia. 3 Department of Biology, Faculty of Mathematics and Natural Sciences, University of Cenderawasih, Jayapura, Indonesia.
ABSTRACT
The high mutation rate of mtDNA causes differences in the mtDNA nucleotide sequence between individuals (a high degree of polymorphism). The mtDNA contains a noncoding control region known as the displacement loop (D-loop), which has two areas of high variation: hypervariable region I (HVR1) and hypervariable region II (HVR2). But there is no information on whether the nucleotide sequence of the mtDNA D-loop is the same for different cells in a given individual. The purpose of this study was to obtain nucleotide sequence information for the mtDNA D-loop region from different cells of each of five individuals of different ages. The stages of the research include preparation of template mtDNA by cell lysis, and amplification of mtDNA D-loop fragments by the Polymerase Chain Reaction (PCR) using the primers M1 and HV2R. …
## D Loop and Arm
D loop
a structure in replicating circular DNA.
Synonym(s): displacement loop
Farlex Partner Medical Dictionary
A Box – A highly conserved (i.e., the DNA nucleotide sequence is similar among many eukaryotic species) region located between base pairs +10 and +20 "upstream" on the tRNA gene, which has the dual role of encoding functional tRNA and promoting tRNA transcription, and acting as a site of receptive protein binding.
Segen’s Medical Dictionary.
D-loop
A simplified drawing illustrating D-loops forming on the sense strand of DNA isolated during transcription of RNA. The double-helical nature of the DNA–DNA portions is omitted in this drawing.
https://secure.sky-map.org/starview?object_type=1&object_id=1765&object_name=Keun+Nan+Mun+Secunda&locale=DE
# χ And (Keun Nan Mun)
### Related articles
**TRIDENT: An Infrared Differential Imaging Camera Optimized for the Detection of Methanated Substellar Companions**
We describe a near-infrared camera in use at the Canada-France-Hawaii Telescope (CFHT) and at the 1.6 m telescope of the Observatoire du mont Mégantic (OMM). The camera is based on a Hawaii-1 1024 × 1024 HgCdTe array detector. Its main feature is the acquisition of three simultaneous images at three wavelengths across the methane absorption bandhead at 1.6 μm, enabling, in theory, an accurate subtraction of the stellar point-spread function (PSF) and the detection of faint close, methanated companions. The instrument has no coronagraph and features fast data acquisition, yielding high observing efficiency on bright stars. The performance of the instrument is described, and it is illustrated by laboratory tests and CFHT observations of the nearby stars GL 526, υ And, and χ And. TRIDENT can detect (6σ) a methanated companion with ΔH=9.5 at 0.5" separation from the star in 1 hr of observing time. Non-common-path aberrations and amplitude modulation differences between the three optical paths are likely to be the limiting factors preventing further PSF attenuation. Instrument rotation and reference-star subtraction improve the detection limit by factors of 2 and 4, respectively. A PSF noise attenuation model is presented to estimate the effect of non-common-path wave-front differences on PSF subtraction performance. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT), which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.
**Statistical Constraints for Astrometric Binaries with Nonlinear Motion**
Useful constraints on the orbits and mass ratios of astrometric binaries in the Hipparcos catalog are derived from the measured proper motion differences of Hipparcos and Tycho-2 (Δμ), accelerations of proper motions (μ˙), and second derivatives of proper motions (μ̈). It is shown how, in some cases, statistical bounds can be estimated for the masses of the secondary components. Two catalogs of astrometric binaries are generated, one of binaries with significant proper motion differences and the other of binaries with significant accelerations of their proper motions. Mathematical relations between the astrometric observables Δμ, μ˙, and μ̈ and the orbital elements are derived in the appendices. We find a remarkable difference between the distribution of spectral types of stars with large accelerations but small proper motion differences and that of stars with large proper motion differences but insignificant accelerations. The spectral type distribution for the former sample of binaries is the same as the general distribution of all stars in the Hipparcos catalog, whereas the latter sample is clearly dominated by solar-type stars, with an obvious dearth of blue stars. We point out that the latter set includes mostly binaries with long periods (longer than about 6 yr).
**Astrometric orbits of SB^9 stars**
Hipparcos Intermediate Astrometric Data (IAD) have been used to derive astrometric orbital elements for spectroscopic binaries from the newly released Ninth Catalogue of Spectroscopic Binary Orbits (SB^9). This endeavour is justified by the fact that (i) the astrometric orbital motion is often difficult to detect without prior knowledge of the spectroscopic orbital elements, and (ii) such knowledge was not available at the time of the construction of the Hipparcos Catalogue for the spectroscopic binaries which were recently added to the SB^9 catalogue. Among the 1374 binaries from SB^9 which have an HIP entry (excluding binaries with visual companions, or DMSA/C in the Double and Multiple Stars Annex), 282 have detectable orbital astrometric motion (at the 5% significance level). Among those, only 70 have astrometric orbital elements that are reliably determined (according to specific statistical tests), for the first time for 20 systems. This represents an 8.5% increase in the number of astrometric systems with known orbital elements (the Double and Multiple Systems Annex contains 235 of those DMSA/O systems). The detection of the astrometric orbital motion when the Hipparcos IAD are supplemented by the spectroscopic orbital elements is close to 100% for binaries with only one visible component, provided that the period is in the 50-1000 d range and the parallax is >5 mas. This result is an interesting testbed to guide the choice of algorithms and statistical tests to be used in the search for astrometric binaries during the forthcoming ESA Gaia mission. Finally, orbital inclinations provided by the present analysis have been used to derive several astrophysical quantities. For instance, 29 among the 70 systems with reliable astrometric orbital elements involve main sequence stars for which the companion mass could be derived. Some interesting conclusions may be drawn from this new set of stellar masses, like the enigmatic nature of the companion to the Hyades F dwarf HIP 20935. This system has a mass ratio of 0.98 but the companion remains elusive.
**Synthetic Lick Indices and Detection of α-enhanced Stars. II. F, G, and K Stars in the -1.0 < [Fe/H] < +0.50 Range**
We present an analysis of 402 F, G, and K solar neighborhood stars, with accurate estimates of [Fe/H] in the range -1.0 to +0.5 dex, aimed at the detection of α-enhanced stars and at the investigation of their kinematical properties. The analysis is based on the comparison of 571 sets of spectral indices in the Lick/IDS system, coming from four different observational data sets, with synthetic indices computed with solar-scaled abundances and with α-element enhancement. We use selected combinations of indices to single out α-enhanced stars without requiring previous knowledge of their main atmospheric parameters. By applying this approach to the total data set, we obtain a list of 60 bona fide α-enhanced stars and of 146 stars with solar-scaled abundances. The properties of the detected α-enhanced and solar-scaled abundance stars with respect to their [Fe/H] values and kinematics are presented. A clear kinematic distinction between solar-scaled and α-enhanced stars was found, although a one-to-one correspondence to "thin disk" and "thick disk" components cannot be supported with the present data.
**Searching for Faint Companions with the TRIDENT Differential Simultaneous Imaging Camera**
We present the first results obtained at CFHT with the TRIDENT infrared camera, dedicated to the detection of faint companions close to bright nearby stars. Its main feature is the acquisition of three simultaneous images in three wavelengths (simultaneous differential imaging) across the methane absorption bandhead at 1.6 microns, which enables a precise subtraction of the primary star's PSF while keeping the companion signal. Gl 229 and 55 Cnc observations are presented to demonstrate TRIDENT subtraction performance. It is shown that a faint companion with an H magnitude difference of 10 magnitudes would be detected at 0.5 arcsec from the primary.
**Reprocessing the Hipparcos Intermediate Astrometric Data of spectroscopic binaries. II. Systems with a giant component**
By reanalyzing the Hipparcos Intermediate Astrometric Data of a large sample of spectroscopic binaries containing a giant, we obtain a sample of 29 systems fulfilling a carefully derived set of constraints and hence for which we can derive an accurate orbital solution. Of these, one is a double-lined spectroscopic binary and six were not listed in the DMSA/O section of the catalogue. Using our solutions, we derive the masses of the components in these systems and statistically analyze them. We also briefly discuss each system individually. Based on observations from the Hipparcos astrometric satellite operated by the European Space Agency (ESA 1997) and on data collected with the Simbad database.
**The Rotation of Binary Systems with Evolved Components**
In the present study we analyze the behavior of the rotational velocity, v sin i, for a large sample of 134 spectroscopic binary systems with a giant star component of luminosity class III, along the spectral region from middle F to middle K. The distribution of v sin i as a function of color index B-V seems to follow the same behavior as their single counterparts, with a sudden decline around G0 III. Blueward of this spectral type, namely, for binary systems with a giant F-type component, one sees a trend for a large spread in the rotational velocities, from a few to at least 40 km s-1. Along the G and K spectral regions there are a considerable number of binary systems with moderate to moderately high rotation rates. This reflects the effects of synchronization between rotation and orbital motions. These rotators have orbital periods shorter than about 250 days and circular or nearly circular orbits. Except for these synchronized systems, the large majority of binary systems with a giant component of spectral type later than G0 III are composed of slow rotators.
**Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statistics**
The Catalogue, available at the Centre de Données Stellaires de Strasbourg, consists of 13 573 records concerning the results obtained from different methods for 7778 stars, reported in the literature. The following data are listed for each star: identifications, apparent magnitude, spectral type, apparent diameter in arcsec, absolute radius in solar units, method of determination, reference, remarks. Comments and statistics obtained from CADARS are given. The Catalogue is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521
**K-Band Calibration of the Red Clump Luminosity**
The average near-infrared (K-band) luminosity of 238 Hipparcos red clump giants is derived and then used to measure the distance to the Galactic center. These Hipparcos red clump giants have been previously employed as I-band standard candles. The advantage of the K-band is a decreased sensitivity to reddening and perhaps a reduced systematic dependence on metallicity. In order to investigate the latter, and also to refer our calibration to a known metallicity zero point, we restrict our sample of red clump calibrators to those with abundances derived from high-resolution spectroscopic data. The mean metallicity of the sample is [Fe/H]=-0.18 dex (σ=0.17 dex). The data are consistent with no correlation between M_K and [Fe/H] and only weakly constrain the slope of this relation. The luminosity function of the sample peaks at M_K=-1.61±0.03 mag. Next, we assemble published optical and near-infrared photometry for ~20 red clump giants in a Baade's window field with a mean metallicity of [Fe/H]=-0.17±0.09 dex, which is nearly identical to that of the Hipparcos red clump. Assuming that the average (V-I)_0 and (V-K)_0 colors of these two red clumps are the same, the extinctions in the Baade's window field are found to be A_V=1.56, A_I=0.87, and A_K=0.15, in agreement with previous estimates. We derive the distance to the Galactic center: (m-M)_0=14.58±0.11 mag, or R=8.24±0.42 kpc. The uncertainty in this distance measurement is dominated by the small number of Baade's window red clump giants examined here.
**Speckle Interferometry of New and Problem HIPPARCOS Binaries**
The ESA Hipparcos satellite made measurements of over 12,000 double stars and discovered 3406 new systems. In addition to these, 4706 entries in the Hipparcos Catalogue correspond to double star solutions that did not provide the classical parameters of separation and position angle (ρ, θ) but were the so-called problem stars, flagged "G", "O", "V", or "X" (field H59 of the main catalog). An additional subset of 6981 entries were treated as single objects but classified by Hipparcos as "suspected nonsingle" (flag "S" in field H61), thus yielding a total of 11,687 "problem stars". Of the many ground-based techniques for the study of double stars, probably the one with the greatest potential for exploration of these new and problem Hipparcos binaries is speckle interferometry. Results are presented from an inspection of 848 new and problem Hipparcos binaries, using both archival and new speckle observations obtained with the USNO and CHARA speckle cameras.
**A catalog of rotational and radial velocities for evolved stars**
Rotational and radial velocities have been measured for about 2000 evolved stars of luminosity classes IV, III, II and Ib covering the spectral region F, G and K. The survey was carried out with the CORAVEL spectrometer. The precision for the radial velocities is better than 0.30 km s-1, whereas for the rotational velocity measurements the uncertainties are typically 1.0 km s-1 for subgiants and giants and 2.0 km s-1 for class II giants and Ib supergiants. These data will add constraints to studies of the rotational behaviour of evolved stars as well as solid information concerning the presence of external rotational brakes, tidal interactions in evolved binary systems and the link between rotation, chemical abundance and stellar activity. In this paper we present the rotational velocity v sin i and the mean radial velocity for the stars of luminosity classes IV, III and II. Based on observations collected at the Haute-Provence Observatory, Saint-Michel, France and at the European Southern Observatory, La Silla, Chile. Table 5 is also available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
**Catalogs of temperatures and [Fe/H] averages for evolved G and K stars**
A catalog of mean values of [Fe/H] for evolved G and K stars is described. The zero point for the catalog entries has been established by using differential analyses. Literature sources for those entries are included in the catalog. The mean values are given with rms errors and numbers of degrees of freedom, and a simple example of the use of these statistical data is given. For a number of the stars with entries in the catalog, temperatures have been determined. A separate catalog containing those data is briefly described. Catalog only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
**Spectroscopic binary orbits from photoelectric radial velocities. Paper 140: Chi Andromedae**
Not Available
**The ROSAT all-sky survey catalogue of optically bright late-type giants and supergiants**
We present X-ray data for all late-type (A, F, G, K, M) giants and supergiants (luminosity classes I to III-IV) listed in the Bright Star Catalogue that have been detected in the ROSAT all-sky survey. Altogether, our catalogue contains 450 entries of X-ray emitting evolved late-type stars, which corresponds to an average detection rate of about 11.7 percent. The selection of the sample stars, the data analysis, the criteria for an accepted match between star and X-ray source, and the determination of X-ray fluxes are described. Catalogue only available at CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html
**A catalogue of [Fe/H] determinations: 1996 edition**
A fifth edition of the Catalogue of [Fe/H] determinations is presented herewith. It contains 5946 determinations for 3247 stars, including 751 stars in 84 associations, clusters or galaxies. The literature is complete up to December 1995. The 700 bibliographical references correspond to [Fe/H] determinations obtained from high resolution spectroscopic observations and detailed analyses, most of them carried out with the help of model atmospheres. The Catalogue is made up of three formatted files: File 1: field stars; File 2: stars in galactic associations and clusters, and stars in SMC, LMC, M33; File 3: numbered list of bibliographical references. The three files are only available in electronic form at the Centre de Données Stellaires in Strasbourg, via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), or via http://cdsweb.u-strasbg.fr/Abstract.html
**Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtitle: Radial velocities: The Wilson-Evans-Batten catalogue**
We give a common version of the two catalogues of Mean Radial Velocities by Wilson (1963) and Evans (1978) to which we have added the catalogue of spectroscopic binary systems (Batten et al. 1989). For each star, when possible, we give: 1) an acronym to enter SIMBAD (Set of Identifications, Measurements and Bibliography for Astronomical Data) of the CDS (Centre de Données Astronomiques de Strasbourg); 2) the number HIC of the HIPPARCOS catalogue (Turon 1992); 3) the CCDM number (Catalogue des Composantes des étoiles Doubles et Multiples) by Dommanget & Nys (1994). For the cluster stars, a precise study has been done on the identification numbers. Numerous remarks point out the problems we have had to deal with.
**On the link between rotation and coronal activity in evolved stars**
We analyse the behaviour of coronal activity as a function of rotation for a large sample of single and binary evolved stars for which we have obtained CORAVEL high precision rotational velocities. This study shows that tidal effects play a direct role in determining the X-ray activity level in binary evolved stars. The circularisation of the orbit is a necessary property for enhanced coronal activity in evolved binary stars.
**Improved Mean Positions and Proper Motions for the 995 FK4 Sup Stars not Included in the FK5 Extension**
Not Available
**A measurement of the primordial helium abundance using Mu Cassiopeiae**
Speckle interferometric observations of the Population II astrometric binary star Mu Cas have been made at four epochs with a direct imaging CCD system. Using the available orbital data on the system, the masses of the stars have been found to be 0.728 ± 0.049 solar mass and 0.171 ± 0.008 solar mass. Application of the theoretical mass-luminosity law to the primary yields a helium abundance of 0.23 ± 0.05 by mass for a metal abundance of Z = 0.0021, assuming a system age of 13 billion years.
**X-ray activity as statistical age indicator - The disk G-K giants**
For a sample of late-type disk giant stars, the dependence of coronal emission on age as defined by metallicity and kinematics indicators has been studied. It is found that the mean level of X-ray emission for stars with strong metallic lines and/or small peculiar velocities is larger by about one order of magnitude than the mean level of emission for stars with weak lines and/or high peculiar velocities. Hence, it is suggested that the X-ray activity can be used as a statistical age indicator for late-type giants, as well as the classical metallicity or kinematics indicators. It is found that the spread in metallicity typical of the Galactic disk accounts for less than 50 percent of the observed difference in X-ray emission. To explain the observations it is argued that other effects should be invoked, such as changes in the efficiency of the stellar magnetic dynamo or the influence of metallicity itself on the coronal heating processes.
**Velocity dispersions and mean abundances for Roman's G5-K1 spectroscopic groups**
The velocity dispersions and U-V distributions of Roman's (1950, 1952) four spectroscopic groups (weak CN, weak line, strong line, and 4150) are compared with those of groups based only on Fe/H ratio. It is shown that the velocity-dispersion gradient for the Roman spectroscopic groups is greater than for the comparison group for all three velocity components and for the mean orbital eccentricity, with a clearly defined minimum for the strong-line stars and a sharp upturn for the 4150 stars. The results suggest that Roman's assignment to distinct spectroscopic groups results in more homogeneous groups than binning on the basis of metallicity.
**Rotation and transition layer emission in cool giants**
Gray (1981, 1982) found that field giants with T_eff less than about 5500 K experience a steep decrease in rotational velocities coupled with a decrease in transition layer emission. This decrease may be attributable to fast magnetic braking or to redistribution of angular momentum for rapidly increasing depths of the convection zones if these rotate with depth-independent specific angular momentum. Additional arguments in favor of the latter interpretation are presented. The increase of N/C abundances due to deep mixing occurs at the same point as the decrease in v sin i. On the other hand, the ratios of the C IV to C II emission line fluxes decrease at this point, indicating smaller contributions of MHD wave heating. The X-ray fluxes decrease at nearly the same T_eff. Thus, no observations are found which would indicate larger magnetic activity which could lead to fast magnetic braking. Theory predicts a rapid increase in the convection zone depth at the T_eff where the decrease in v sin i is observed. This can explain the observed phenomena.
**A critical appraisal of published values of (Fe/H) for K II-IV stars**
'Primary' (Fe/H) averages are presented for 373 evolved K stars of luminosity classes II-IV and (Fe/H) values between -0.9 and +0.21 dex. The data define a 'consensus' zero point with a precision of ±0.018 dex and have rms errors per datum which are typically 0.08-0.16 dex. The primary data base makes recalibration possible for the large (Fe/H) catalogs of Hansen and Kjaergaard (1971) and Brown et al. (1989). A set of (Fe/H) standard stars and a new DDO calibration are given which have rms of 0.07 dex or less for the standard star data. For normal K giants, CN-based values of (Fe/H) turn out to be more precise than many high-dispersion results. Some zero-point errors in the latter are also found and new examples of continuum-placement problems appear. Thus high-dispersion results are not invariably superior to photometric metallicities. A review of high-dispersion and related work on supermetallicity in K III-IV stars is also given.
**Ca II H and K measurements made at Mount Wilson Observatory, 1966-1983**
Summaries are presented of the photoelectric measurements of stellar Ca II H and K line intensity made at Mount Wilson Observatory during the years 1966-1983. These results are derived from 65,263 individual observations of 1296 stars. For each star, for each observing season, the maximum, minimum, mean, and variation of the instrumental H and K index 'S' are given, as well as a measurement of the accuracy of observation. A total of 3110 seasonal summaries are reported. Factors which affect the ability to detect stellar activity variations and accurately measure their amplitudes, such as the accuracy of the H and K measurements and scattered light contamination, are discussed. Relations are given which facilitate intercomparison of 'S' values with residual intensities derived from ordinary spectrophotometry, and for converting measurements to absolute fluxes.
**High-resolution spectroscopic survey of 671 GK giants. I - Stellar atmosphere parameters and abundances**
A high-resolution spectroscopic survey of 671 G and K field giants is described. Broad-band Johnson colors have been calibrated against recent, accurate effective temperature, T_eff, measurements for stars in the range 3900-6000 K. A table of polynomial coefficients for 10 color-T_eff relations is presented. Stellar atmosphere parameters, including T_eff, log g, Fe/H, and microturbulent velocity, are computed for each star, using the high-resolution spectra and various published photometric catalogs. For each star, elemental abundances for a variety of species have been computed using an LTE spectrum synthesis program and the adopted atmosphere parameters.
**Einstein Observatory magnitude-limited X-ray survey of late-type giant and supergiant stars**
Results are presented of an extensive X-ray survey of 380 giant and supergiant stars of spectral types from F to M, carried out with the Einstein Observatory. It was found that the observed F giants or subgiants (slightly evolved stars with a mass M less than about 2 solar masses) are X-ray emitters at the same level as main-sequence stars of similar spectral type. The G giants show a range of emissions more than 3 orders of magnitude wide; some single G giants exist with X-ray luminosities comparable to RS CVn systems, while some nearby large G giants have upper limits on the X-ray emission below typical solar values. The K giants have an observed X-ray emission level significantly lower than the F and G giants. None of the 29 M giants were detected, except for one spectroscopic binary.
**Chromospheric activity in evolved stars - The rotation-activity connection and the binary-single dichotomy**
A tabulation of measured values of the Ca II H and K (S) index is transformed to the original Mount Wilson definition of the index. The tabulation includes main-sequence, evolved, single, and tidally coupled (RS CVn) binary stars. The (S) indices are analyzed against Wilson's (1976) I(HK) intensity estimates, showing that Wilson's estimates are only a two-state indicator. Ca II H and K fluxes are computed and calibrated with published values of rotation periods. It is found that the single and binary stars are consistent with a single relationship between rotation and Ca II excess emission flux.
**Catalogue of the energy distribution data in spectra of stars in the uniform spectrophotometric system**
Not Available
**Energy Distribution Data in the Spectra of 72 Stars in the Region Lambda 3200A to 7600A**
Not Available
**Binary stars unresolved by speckle interferometry. III**
The KPNO's 4-m telescope was used in 1975-1981 to determine the epochs of 1164 speckle observations for 469 unresolved, known or suspected binary stars. The data, presented in tabular form, encompass visual binaries with eccentric orbits, occultation binaries, astrometric binaries, Hyades stars of known or suspected duplicity, and many long-period spectroscopic binaries.
#### Observation and Astrometry data
• Constellation: Andromeda
• Right ascension: 01h39m21.00s
• Declination: +44°23'10.0"
• Apparent magnitude: 4.98
• Distance: 74.294 parsecs
• Proper motion RA: -23.8
• Proper motion Dec: 11.8
• B-T magnitude: 6.106
• V-T magnitude: 5.106
Catalogs and designations:
• Proper Names: Keun Nan Mun
• Bayer: χ And
• Flamsteed: 52 And
• HD 1989: HD 10072
• TYCHO-2 2000: TYC 2826-2183-1
• USNO-A2.0: USNO-A2 1275-00982752
• BSC 1991: HR 469
• HIP: HIP 7719
https://www.sawaal.com/problems-on-numbers-questions-and-answers/the-product-of-two-numbers-is-9375-and-the-quotient-when-the-larger-one-is-divided-by-the-smaller-is_2172
Q:
# The product of two numbers is 9375 and the quotient, when the larger one is divided by the smaller, is 15. The sum of the numbers is :
A) 100 B) 200 C) 300 D) 400
Explanation:
Let the numbers be x and y.
Then, xy = 9375 and x/y = 15.
Dividing the two relations: $$\frac{xy}{x/y} = y^{2} = \frac{9375}{15} = 625$$
=> y = 25
=> x = 15y = 15 × 25 = 375.
Sum of the numbers = 375 + 25 = 400.
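As an added verification of the algebra (not part of the site's explanation), the same steps in Python:

```python
# xy = 9375 and x/y = 15  =>  (xy) / (x/y) = y^2 = 9375 / 15 = 625
y = int((9375 / 15) ** 0.5)   # y = 25
x = 15 * y                    # x = 375
print(x * y == 9375, x + y)   # -> True 400
```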
Q:
If a 10-digit number 7220x558y2 is divisible by 88, then the value of (5x + 5y) can be :
A) 35 B) 25 C) 15 D) 10
Explanation:
Q:
If a 10-digit number 1230x558y2 is divisible by 88, then the value of (5x + 5y) is:
A) 40 B) 20 C) 50 D) 30
Explanation:
Q:
If an 8-digit number 30x558y2 is divisible by 88, then the value of (6x +6y) is:
A) 30 B) 35 C) 42 D) 66
Explanation:
Q:
If the seven digit number 54x29y6 (x > y)is divisible by 72, what is the value of (2x + 3y)?
A) 38 B) 13 C) 32 D) 23
Explanation:
Q:
Which among the following numbers is exactly divisible by 7, 11 and 13 ?
A) 14982 B) 15004 C) 14993 D) 15015
Explanation:
Q:
Which among the following numbersis exactly divisible by 11, 13 and 7?
(a) 624613
(b) 624624
(c) 624635
(d) 624646
A) (b) B) (c) C) (a) D) (d)
Explanation:
Q:
The ten-digit number 2x600000y8 is exactly divisible by 24. If x ≠ 0 and y ≠ 0, then the least value of (x + y) is equal to:
A) 5 B) 8 C) 9 D) 2
Explanation:
Q:
The 10-digit number 79x00001y6 is exactly divisible by 88. What is the value of (x + 3)?
A) 5 B) 9 C) 6 D) 7
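The explanations for these divisibility questions are not shown above. As a generic, added aid (not the site's own solutions), each of them can be settled by brute force over the unknown digits, shown here for the first 88-divisibility question:

```python
# Brute-force the digits x, y that make 7220x558y2 divisible by 88
# (an illustrative helper; the same pattern works for the other questions).
solutions = [
    (x, y)
    for x in range(10)
    for y in range(10)
    if int(f"7220{x}558{y}2") % 88 == 0
]
print(solutions)                                       # candidate (x, y) pairs
print(sorted({5 * x + 5 * y for x, y in solutions}))   # possible 5x + 5y values
```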
http://www.physicsforums.com/showthread.php?p=2587836
# The Vielbein postulate
by haushofer
Tags: postulate, vielbein
Sci Advisor P: 888 Hi, I have a question on the vielbein postulate. By this I mean $\nabla_{\mu}e_{\nu}^a = \partial_{\mu}e_{\nu}^a - \Gamma_{\mu\nu}^{\rho}e_{\rho}^a + \omega_{\mu}{}^{a}{}_{b}e_{\nu}^b \equiv D_{\mu}e_{\nu}^a - \Gamma_{\mu\nu}^{\rho}e_{\rho}^a = 0$. Someone like Carroll derives this from rewriting the covariant derivative of a vector field X in a coordinate basis and in a general basis, so in that sense it's a statement that the index-free object $\nabla X$ doesn't care about being described by a coordinate basis or a general basis, right? He explicitly says, "Note that this is always true; we did not need to assume anything about the connection in order to derive it." So covariance (the freedom to write any tensor in any basis you like) would then automatically imply the vielbein postulate. Somehow, I don't feel comfortable with this. In GR, saying that the metric is "covariantly constant", $\nabla_{\rho}g_{\mu\nu}=0$, enables us to express the Levi-Civita connection in terms of the metric, which I'll call the metric postulate. We can do the same thing with the vielbeins by saying that the curvature of the vielbein disappears, $R_{\mu\nu}(e_{\rho}^a)=0$. But doesn't the vielbein postulate already imply the metric postulate? So I'm a little puzzled by the precise relation between the metric postulate and the vielbein postulate, and I'm wondering if the vielbein postulate follows from covariance. I of course understand that in some sense the vielbein postulate is just a way of putting constraints on the vielbein, and that antisymmetrizing this constraint gives you information about the torsion, but can someone shed some light on this?
Emeritus Sci Advisor PF Gold P: 9,248 It would help if you posted some definitions and part of the derivation that you don't like. (You don't have to define the connection, covariant derivative or the Christoffel symbol, but at least explain the e and the omega, and what you meant by rewrite in a coordinate basis and a general basis).
Sci Advisor P: 8,374 According to Wald (3.4.16), the antisymmetry of the Christoffel symbols implies torsion-freeness, whereas the antisymmetry of the connection one-forms implies metric compatibility.
P: 888
Quote by Fredrik It would help if you posted some definitions and part of the derivation that you don't like. (You don't have to define the connection, covariant derivative or the Christoffel symbol, but at least explain the e and the omega, and what you meant by rewrite in a coordinate basis and a general basis).
Ah, ok, sorry. The e is the vielbein $e_{\mu}^a$ with inverse $e^{\mu}_a$ satisfying
$g_{\mu\nu} = e_{\mu}^a e_{\nu}^b \eta_{ab}$
and the omega is the spinconnection which can be defined by
$\nabla_{\mu}X^a = \partial_{\mu}X^a + \omega_{\mu}{}^{a}{}_{b}X^b$
By a "general basis" I meant a "non-coordinate basis",
$\hat{e}_{a} = e_a^{\mu}\partial_{\mu}$
I'll take a look at Wald, but I think I already start to see things here. :) The point is that a lot of people seem to postulate the vielbein "postulate" as a constraint, but as I now see it, it's really a consequence of covariance.
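For reference (an editorial addition, not part of the thread): the implication asked about above is one line, granted the usual assumption that the spin connection is antisymmetric in its frame indices, $\omega_{\rho ab}=-\omega_{\rho ba}$, so that $\nabla_{\rho}\eta_{ab}=0$:

$$\nabla_{\rho}g_{\mu\nu}=\nabla_{\rho}\left(e_{\mu}^a e_{\nu}^b \eta_{ab}\right)=\left(\nabla_{\rho}e_{\mu}^a\right)e_{\nu}^b\eta_{ab}+e_{\mu}^a\left(\nabla_{\rho}e_{\nu}^b\right)\eta_{ab}+e_{\mu}^a e_{\nu}^b\,\nabla_{\rho}\eta_{ab}=0,$$

with the first two terms vanishing by the vielbein postulate, so the vielbein postulate (plus antisymmetry of $\omega$) does imply the metric postulate.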
https://cs.stackexchange.com/questions/48514/the-interleaved-transition-system-for-2-independent-and-concurrent-transition-sy
# The Interleaved Transition System for 2 Independent and Concurrent Transition Systems
This question is about Model Checking for Software Formal Verification
How do you model the joint behavior of 2 independent and concurrent transition systems?
Specifically, given the two independent and concurrent Transition Systems TS1 and TS2 (left to right; figure omitted).
A tutor proposed a resulting Interleaving Transition System ITS (figure omitted).
I understand how all the shown states and transitions of this ITS were obtained. However, why is there no transition from state (l1,q1) to (l3,q2), as would be expected if both TS1 and TS2 transitioned on action a?
• The definition of the interleaving operator (as opposed to e.g. the interface parallel operator in CSP) is that each component can take $a$- or $b$-transitions on its own - there is no synchronization. – Klaus Draeger Oct 21 '15 at 13:12
• @KlausDraeger, thanks for your reply; I think I get it now: the interleaving operator is not synchronized by definition. In that case, can we say that the interleaving operator does not model the joint behavior of the two systems--since it does not account for synchronization? – eyeezzi Oct 22 '15 at 1:56
• Only if you assume that systems are supposed to synchronize on all shared symbols. The point of having interleaving (and, as in CSP, parametrizing the parallel composition operator with a set of symbols on which to synchronize) is to allow you to represent a greater variety of models of interaction. – Klaus Draeger Oct 22 '15 at 8:52
By definition, the interleaving operator is not synchronized, so any transition in $TS1\ |||\ TS2$ corresponds to a transition in a single component while the other one remains in whatever state it is.
More generally, in process algebras like CSP, the parallel composition operator $||$ can be parameterized with a set of symbols on which to synchronize - for example, you could consider $TS1\ |[\{a\}]|\ TS2$, which has the same states and $b$-transitions as $TS1\ |||\ TS2$, but only two (synchronized) $a$-transitions $(l_1,q_1)\to(l_3,q_2)$ and $(l_2,q_1)\to(l_1,q_2)$. In particular, $|||$ is the special case $|[\emptyset]|$.
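By way of illustration, here is a minimal Python sketch of the unsynchronized interleaving construction. The concrete transition relations below are assumptions loosely modeled on the systems in the question (the original figures are not available), so treat them as hypothetical:

```python
from itertools import product

def interleave(ts1, ts2):
    """Interleaving composition TS1 ||| TS2: each step moves one component only."""
    trans = set()
    for (s1, s2) in product(ts1["states"], ts2["states"]):
        for (p, a, q) in ts1["trans"]:
            if p == s1:
                trans.add(((s1, s2), a, (q, s2)))   # TS1 moves, TS2 stays put
        for (p, a, q) in ts2["trans"]:
            if p == s2:
                trans.add(((s1, s2), a, (s1, q)))   # TS2 moves, TS1 stays put
    return trans

# Hypothetical components, loosely modeled on the figures in the question.
ts1 = {"states": {"l1", "l2", "l3"}, "trans": {("l1", "a", "l3"), ("l2", "a", "l1")}}
ts2 = {"states": {"q1", "q2"}, "trans": {("q1", "a", "q2")}}

for t in sorted(interleave(ts1, ts2)):
    print(t)   # note: no combined step (l1,q1) -a-> (l3,q2) ever appears
```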
http://en.wikipedia.org/wiki/Grubbs'_test_for_outliers
# Grubbs' test for outliers
Grubbs' test (named after Frank E. Grubbs), also known as the maximum normed residual test or extreme studentized deviate test, is a statistical test used to detect outliers in a univariate data set assumed to come from a normally distributed population.
## Definition
Grubbs' test is based on the assumption of normality. That is, one should first verify that the data can be reasonably approximated by a normal distribution before applying the Grubbs' test.[1]
Grubbs' test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or less since it frequently tags most of the points as outliers.
Grubbs' test is defined for the hypothesis:
H0: There are no outliers in the data set
Ha: There is at least one outlier in the data set
The Grubbs' test statistic is defined as:
$G = \frac{\displaystyle\max_{i=1,\ldots, N}\left \vert Y_i - \bar{Y}\right\vert}{s}$
with $\overline{Y}$ and s denoting the sample mean and standard deviation, respectively. The Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation.
This is the two-sided version of the test. The Grubbs test can also be defined as a one-sided test. To test whether the minimum value is an outlier, the test statistic is
$G = \frac{\bar{Y}-Y_\min}{s}$
with $Y_\min$ denoting the minimum value. To test whether the maximum value is an outlier, the test statistic is
$G = \frac{Y_\max - \bar{Y}}{s}$
with $Y_\max$ denoting the maximum value.
For the two-sided test, the hypothesis of no outliers is rejected at significance level α if
$G > \frac{N-1}{\sqrt{N}} \sqrt{\frac{t_{\alpha/(2N),N-2}^2}{N - 2 + t_{\alpha/(2N),N-2}^2}}$
with $t_{\alpha/(2N),N-2}$ denoting the upper critical value of the t-distribution with $N-2$ degrees of freedom and a significance level of $\alpha/(2N)$. For the one-sided tests, replace $\alpha/(2N)$ with $\alpha/N$.
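Translated directly from the formulas above, a minimal Python sketch of the two-sided test (assuming SciPy for the t-distribution quantile; the function name and sample data are illustrative):

```python
import numpy as np
from scipy import stats

def grubbs_two_sided(y, alpha=0.05):
    """Two-sided Grubbs' test; returns (G, G_crit, reject_H0)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    G = np.max(np.abs(y - y.mean())) / y.std(ddof=1)    # test statistic
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2   # squared upper t critical value
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    return G, G_crit, G > G_crit

# Illustrative sample with one planted outlier (N > 6, as advised above).
print(grubbs_two_sided([2.1, 2.3, 2.2, 2.4, 2.3, 2.2, 2.1, 9.0]))
```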
## Related techniques
Several graphical techniques can, and should, be used to detect outliers. A simple run sequence plot, a box plot, or a histogram should show any obviously outlying points. A normal probability plot may also be useful.
https://warwick.ac.uk/fac/sci/maths/research/events/seminars/areas/algebra/21-22/
# 2021-22
### In Term 2 seminars will be held on Mondays at 17:00 sometimes in B3.02 and via MSTeams (join the Warwick Algebra Seminar Team).
#### Term 2:
17th January (B3.02 and online): Dave Benson (Aberdeen)
Title: Blocks with normal defect groups.
Abstract: Let G be a finite group and k a field of characteristic p. Suppose that G has a block with normal defect group and abelian p’-inertial quotient. Then I shall describe the structure of the block as a quantum deformation of the group algebra of the defect group. This is joint work with Radha Kessar and Markus Linckelmann.
24th January (online): Vincent Knibbeler (Loughborough)
Title: Automorphic Lie algebras and modular forms
Abstract: We revisit twisted loop algebras and the classical Onsager algebra that appeared in statistical mechanics. Automorphic Lie algebras are generalisations of these two examples. In recent work we describe the automorphic Lie algebras consisting of holomorphic maps from the complex upper half plane to a complex finite dimensional Lie algebra, which are equivariant with respect to the modular group SL(2,Z). We obtain an analogue of a classical theorem of V.G.Kac on twisted loop algebras and a realisation of the Onsager algebra in terms of matrix-valued modular forms. The talk is based on joint work with Sara Lombardo and Alexander Veselov.
7th February (B3.02 and online): Nick Gill (Open University)
Title: Subgroup lattices in PSL(2,q)
Abstract: I will report on some recent work with my PhD student, Scott Hudson. We consider the action of G=PSL(2,q) on the cosets of a subgroup H. We are interested in various properties of the associated "stabilizer lattice", i.e. the lattice of subgroups that crop up as intersections of conjugates of H. The results that I will present lead naturally to conjectures about such lattices in other simple groups of Lie type.
14th February (online): Emily Hall (Bristol)
Title: Almost Elusive Groups
Abstract: Let G be a transitive permutation group acting on a finite set X with |X|>1. A derangement in G is an element of G that has no fixed points on X, and as a consequence of the orbit-counting lemma we know that such elements always exist in G. But what happens if we seek derangements with special properties i.e specific order? In this talk I will discuss this question and introduce the notion of Almost elusive groups. I will provide motivation behind the concept of Almost elusive groups and discuss key concepts behind the classification of these groups in the primitive setting.
28th February: Gareth Tracey (Birmingham)
Title: How many subgroups are there in a finite group?
Abstract: Counting the number of subgroups in a finite group has numerous applications, ranging from enumerating certain classes of finite graphs (up to isomorphism), to counting how many isomorphism classes of finite groups there are of a given order. In this talk, I will discuss the history behind the question; why it is important; and what we currently know.
14th March: Martin Liebeck (Imperial)
Title: Orbits of compact linear groups
Abstract: For a compact subgroup G of GL(n,R), define the vector closure of G to be the largest subgroup of GL(n,R) that has the same orbits on vectors as G. Subgroups G that are equal to their vector closures are particularly interesting, as these are isometry groups of norms. I will present some examples and results on such vector closures.
#### Term 1 (seminars were held on Tuesdays at 15:00 on Zoom jointly with Birmingham):
16th November: Lucia Morotti (Hannover University).
Title: Decomposition of spin representations of symmetric groups in characteristic 2
Abstract: Any representation of a double cover $\tilde{S}_n$ of a symmetric group can also be viewed as a representation of $S_n$ when reduced to characteristic 2. However not much is known about the corresponding decomposition matrices. For example, while decomposition numbers of Specht modules indexed by 2-parts partitions are known, the decomposition numbers of spin irreducible modules indexed by 2-parts partitions are still mostly unknown, with in most cases only multiplicities of maximal composition factors (under a certain ordering of the modular irreducible representations) being known.
In this talk I will characterise irreducible representations that appear when reducing 2-parts spin representations to characteristic 2 and describe part of the corresponding rows of the decomposition matrices.
30th November: Dávid Szabó (Rényi Institute, Budapest).
Title: Class 2 nilpotent and twisted Heisenberg groups
Abstract: Every finitely generated nilpotent group G of class at most 2 can be obtained from 2-generated such groups using central and subdirect products. As a corollary, G embeds into a generalisation of the 3 x 3 Heisenberg matrix group with entries coming from suitable abelian groups depending on G. In this talk, we present the key ideas of these statements and briefly mention how they emerged from investigating the so-called Jordan property of various transformation groups.
7th December: Barbara Baumeister (Bielefeld University).
Title: The dual approach to Coxeter and Artin groups
Abstract: Independently Brady and Watt as well as Bessis, Digne and Michel started to study Coxeter systems (W,S) and Artin groups by replacing the simple system S by the set of all reflections. In particular this provides new presentations for the Artin groups of spherical type. I will introduce into this fascinating world. I also will present a slight modification of the new concept.
14th December: Ralf Koehl (Giessen University).
Title: Statistical topological data analysis: some musings about networks and applications
Abstract: I will start with an overview how linear algebra (in particular eigenvalue techniques) help with the understanding of networks. Then I mention random walks (which for me is a combination of linear algebra with limit arguments). Then I go to the core topic of the talk: persistent homology, starting with plenty of examples. Then I mention how a group-theorist can end up working with networks. And finally I explain how a pure mathematician can train themselves for applying methods from network theory by studying properties of Riemannian manifolds via approximations.
https://en.wikibooks.org/wiki/Physics_Course/Types_of_Waves/Light_Waves
# Physics Course/Types of Waves/Light Waves
## Light Wave
Light waves come from many sources.
## Electromagnetic Radiation as Light Wave Characteristics
Electromagnetic radiation such as light travels in vacuum at the speed of light, $C = 3 \times 10^8$ m/s. Since the speed of a wave equals the product of its wavelength and frequency,
V = λ f = C,
if either the wavelength or the frequency is known, the other quantity can be found by
${\displaystyle \lambda ={\frac {C}{f}}}$ or ${\displaystyle f={\frac {C}{\lambda }}}$ .
Electromagnetic waves span a very wide range of frequencies and are divided into spectral bands.
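For example, a quick check of this relation in Python (an illustrative sketch; the 500 THz input is simply a typical visible-light frequency):

```python
C = 3e8            # speed of light in vacuum, m/s

def wavelength(f):
    """Wavelength (m) of an electromagnetic wave of frequency f (Hz)."""
    return C / f

print(wavelength(500e12))   # ~6e-7 m = 600 nm, in the visible range
```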
## Light Wave Phenomenons
When a light wave travels through a medium and meets an obstacle larger than its wavelength, the light wave can undergo the following effects:
### Reflection
The light wave is reflected back into the medium it came from and can interfere with other waves in that medium.
### Refraction
Only part of the light wave passes through, and it bends as its speed changes in the new medium.
### Diffraction
Light passing through slits or narrow openings produces a pattern of light and dark fringes.
### Dispersion
Light passing through a prism separates into its component colors: red, orange, yellow, green, blue and violet.
### Interference
When two light waves interfere, either constructively or destructively, they produce interference effects such as bright light, dim light or no light.
https://www.kimsereylam.com/angular/2017/06/27/angular-cli.html
# Angular Cli
Jun 27th, 2017 - written by Kimserey Lam.
In order to facilitate the creation of a new Angular project, it is possible to use the Angular CLI. Angular CLI is a CLI providing functionality to bootstrap, upgrade and serve your Angular app. Today we will see how we can use Angular CLI to improve our workflow.
1. Bootstrap a new project
2. Creating new components
3. Serve the application
## 1. Bootstrap a new project
Start first by installing Angular CLI with the following command:
npm install -g @angular/cli
After the installation, the CLI should be available globally via the ng command. You can try it with ng help. To bootstrap a project, we use ng new my-new-project. We can also specify some options: here we skip the test generation (tests are generated by default), and ask for inline styles and inline templates.
ng new my-new-project --skip-tests --inline-style --inline-template
This creates the simplest Angular app. We have the main module app.module.ts and the component app.component.ts. In order to build the project we can use ng build.
## 2. Creating new components
AngularCLI also helps in creating all the boilerplate needed for creating components.
ng g component my-component --flat -is -it
Here we specify -is for inline style and -it for inline template; both are short forms of the --inline-xxx options. Besides components, we can also use this command to create directives, services, pipes and guards.
## 3. Serve the application
We can then try out the app using ng serve. The app is served by default on localhost on port 4200.
** NG Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200 **
Our application will run and the source code will be watched, so every change in the code triggers a recompilation and an update of the browser.
# Conclusion
Angular CLI makes life easier for developing Angular applications. It also handles test generation and execution, which I haven't covered. A full list of commands can be found by running ng help. Hope you liked this post as much as I enjoyed writing it. See you next time!
https://ham.stackexchange.com/questions/17166/heat-shrink-as-dielectric-for-bazooka-balun
# Heat-shrink as dielectric for bazooka balun?
I'm intending to make a sleeve balun (aka bazooka) for 2m, but rather than keep the lossy coax jacket as the dielectric (I have some RG58), I was considering removing it and replacing it with heat-shrink tubing.
I haven't been able to find anything here or via google, and am hopeful that some folks here might have some experience, insight, or resource pointers.
Otherwise I'm just going to try it both ways and measure.
• I really like this question...I've also searched around a bit for assembly details of a 2m sleeve balun or choke and haven't really found anything useful other than the fact that a "sleeve choke" is also a martial arts move, but it probably has pretty limited use in ham radio.
– user14945
Aug 19 '20 at 23:35
• Maybe someone with experience will chime in here, but if nobody does, then I hope you will present your findings as an answer to your own question. Aug 20 '20 at 0:09
• Looking like I'll be doing an experiment in the next few days! I'll be sure to post the results here. Aug 20 '20 at 11:31
The CIA's spies solved this problem long ago. See https://www.cryptomuseum.com/covert/bugs/ec/sleevex/index.htm You need the relative dielectric constant between the coax shield and the sleeve to equal that of the cable; this keeps the velocity factor the same. You also want to keep the impedance the same, which requires quite a thick layer of polyethylene tube in place of the PVC on the outside of the shield. There is a fine calculator at https://www.pasternack.com/t-calculator-coax-cutoff.aspx Take 2.3 as the relative dielectric constant of PE.
There is an article covering every aspect of sleeve baluns at http://www.w8ji.com/sleeve_baluns.htm
If you do not need flexibility, try this http://www.w6nbc.com/articles/2009-07QSTcoaxialdipole.pdf which uses air as dielectric.
• Oooh! Thanks for the calc link, and the cryptomuseum link is very cool and informative! Oct 16 '20 at 12:19
For a bazooka balun, you want the $$Z_0$$ of the transmission line to be as high as possible. A thin layer of heat shrink is not ideal.
If you must use a sleeve balun for mechanical reasons, rather use a 30 mm diameter pipe with the coax inside it.
The impedance of a short circuited transmission line is
$$Z_{SC} =jZ_0\tan(\beta x)$$ where $$\beta=2\pi/\lambda$$
At a quarter wave this is an open circuit, but at any frequency away from a quarter wave, the impedance drops quickly and the line $$Z_0$$ matters a lot.
If the frequency changes by 10%, (or you get the velocity factor of the line wrong by 10%, very likely if you don't know the dielectric well, or measure it), then the impedance is $$j6Z_0$$. If your line $$Z_0$$ is $$25\space\Omega$$, that's only $$150\space\Omega$$, not a very good choking impedance. If you make a $$120\space\Omega$$ two wire line with a second piece of coax taped against the feed, it could be almost $$1000\space\Omega$$ at the same frequency offset.
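A quick numeric check of those figures (my own sketch, not part of the answer): at 10% below the quarter-wave frequency, $$\beta x = 0.9 \cdot \pi/2$$ and $$\tan(\beta x) \approx 6.3$$.

```python
import math

def z_sc(z0, beta_x):
    """Magnitude of the short-circuited line impedance |j*Z0*tan(beta*x)|."""
    return abs(z0 * math.tan(beta_x))

beta_x = 0.9 * math.pi / 2          # 10% below the quarter-wave frequency
print(z_sc(25.0, beta_x))           # ~158 ohms -> poor choke
print(z_sc(120.0, beta_x))          # ~758 ohms -> much better choke
```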
So I suggest a kind of pitch fork of two parallel wires, a quarter wave long, one each side for symmetry, spaced about one coax diameter apart. Braids soldered together at the short circuit side, don't worry about the inners. No need to remove the jackets.
Like this, if you're using the outside of the bazooka as a dipole element (diameters and widths exaggerated a lot):
Or like this if you need a balun for a balanced antenna like a dipole:
Finally, you don't know the RF properties of the plastic and glue in the heat shrink. You can compare it to the coax jacket by putting a piece of both in a microwave oven, with a small glass of water on the far side. Cook them for a bit and see which plastic gets hot first. A good dielectric like PTFE or PE won't get hot at all.
It's a great idea to build and measure these things. The way to test for unbalanced currents is to mount the antenna say 2 or 3 metres high on a wooden pole, and watch the VSWR display while you run your hand down coax. If the graph jumps around then there are currents on the coax. Be sure to evaluate the whole frequency range you're interested in. At slightly higher frequencies, I remember seeing the trace dancing around in most of the graph, and being stable in the middle where the balun was working.
• Thanks @tomnexus! My project requires a sleeve balun. Re the relevant part of your answer, "For a bazooka balun, you want the Zo of the transmission line to be as high as possible. A thin layer of heat shrink is not ideal." Can you expand on that part? Why is the heat shrink not ideal vs the jacket of the coax? Aug 20 '20 at 11:28
• Heatshrink OR jacket is a bad idea, because of the low Z. You can choose between them based on their loss, I can't say which will be better but you can try the experiment above. If you use any dielectric you need to measure the balun length to get it right, with a VNA or dip meter or something, measuring won't be good enough. Air dielectric like a larger diameter metal pipe is easier to guess the length. Aug 22 '20 at 0:51
http://www.zora.uzh.ch/id/eprint/129848/
# Vector-Boson Fusion Higgs Production at Three Loops in QCD
Dreyer, Frédéric A; Karlberg, Alexander (2016). Vector-Boson Fusion Higgs Production at Three Loops in QCD. Physical Review Letters, 117(7):072001.
## Abstract
We calculate the next-to-next-to-next-to-leading-order (${\mathrm{N}}^{3}\mathrm{LO}$) QCD corrections to inclusive vector-boson fusion Higgs production at proton colliders, in the limit in which there is no color exchange between the hadronic systems associated with the two colliding protons. We also provide differential cross sections for the Higgs transverse momentum and rapidity distributions. We find that the corrections are at the 1‰-2‰ level, well within the scale uncertainty of the next-to-next-to-leading-order calculation. The associated scale uncertainty of the ${\mathrm{N}}^{3}\mathrm{LO}$ calculation is typically found to be below the 2‰ level. We also consider theoretical uncertainties due to missing higher order parton distribution functions, and provide an estimate of their importance.
Item Type: Journal Article, refereed, original work
Communities & Collections: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Language: English
Date: 2016
Deposited On: 30 Dec 2016 07:24
Last Modified: 02 Feb 2018 11:11
Publisher: American Physical Society
ISSN: 0031-9007
OA Status: Hybrid
Publisher DOI: https://doi.org/10.1103/PhysRevLett.117.072001
arXiv ID: 1606.00840v1
https://tex.stackexchange.com/questions/200409/biblatex-chicago-etoolbox-error-invalid-boolean-expression
# biblatex-chicago etoolbox error: Invalid boolean Expression
I am using bibtex as the backend for my biblatex bibliography but there seems to be some clash between it and the etoolbox package that I don't understand. This is on a fresh install of TeXlive 2014. The error only appears if the following conditions are met:
1: BibTeX is the backend
2: I am using biblatex-chicago (there's no tag for that - sorry!)
3: A bib entry is cited twice in the same paragraph
A minimal example follows (in which I have embedded the bibliography in the file for the sake of keeping it simple - doing it this way didn't change the error).
\documentclass{article}
\usepackage{filecontents}
\begin{filecontents}{biblio.bib}
@book{smith_john_1970,
edition = {1st},
title = {Help},
publisher = {A. N. Other},
author = {Smith, John},
year = {1970},
}
\end{filecontents}
\usepackage[backend=bibtex]{biblatex-chicago}
\begin{document}
\cite[732]{smith_john_1970} and \cite[732]{smith_john_1970}
\end{document}
• Welcome to TeX.SX! Where does etoolbox come in within your MWE? – user31729 Sep 9 '14 at 23:10
• @ChristianHupfer The error I get is ! Package etoolbox Error: Invalid boolean expression. I get no error when the backend is Biber. – egreg Sep 9 '14 at 23:14
• @egreg: Yes, I confirm that... I have not looked into the biblatex-chicago file so far. – user31729 Sep 9 '14 at 23:28
• Thank you for taking the time to try the example. Yes biber works (which is why I listed bibtex as a condition) but it would be a real headache for me to make that switch (for unrelated reasons). – Kevin Sep 9 '14 at 23:38
• Since the 'notes' style is not yet fully dependent on biber, you should file a bug report. The developer has always been quite responsive in times past. (Note, however, that biber seems poised to become a requirement for biblatex-chicago. Note also that biber is quite good at converting to and from BibTeX-compliant bibliographies, which may prove useful to you.) – jon Sep 10 '14 at 2:30
If one does some digging in chicago-notes.cbx, in the definition for the cite bibmacro one finds
\ifboolexpr{
test {\iffieldundef{shorthand}}%
or
(
togl {blx@skipbiblist}%
and
togl {cms@inheritshhand}%
and
not test {\iffieldundef{crossref}}%
)
}
And indeed that seems to be the boolean expression that causes etoolbox to get the hiccups. All but one of these conditions are completely fine, the problem arises from togl {blx@skipbiblist}.
skipbiblist is - like its friends skipbib and skiplab - a Biber-only option (unfortunately, skipbiblist is quite new and not documented in the docs; it is an extension of skiplos and works in similar fashion to its "friends", see p. 62 §3.1.3.2 Type/Entry Options of the biblatex documentation)
If you do not use Biber, but BibTeX or BibTeX8, the file biblatex1.sty is loaded (with Biber it would be biblatex2.sty), biblatex1.sty, however does not have a definition for blx@skipbiblist (it does provide one for blx@skipbib, blx@skiplos and blx@skiplab though - so it rather seems like a biblatex1 bug for me) and therefore the conditional is invalid for etoolbox, because blx@skipbiblist does not exist if you do not use Biber.
# But I want a fix now!
There are two fixes for this issue
## 1. Use Biber
Just use Biber as your backend, enjoy all its functionalities and capabilities and get rid of this little bug; you merely have to pass backend=biber as loading-time option to biblatex/biblatex-chicago (and obviously have Biber installed and run it instead of BibTeX).
That is the preferred solution, see for example page 1 (!) of the biblatex-chicago documentation:
I also strongly encourage all users who haven’t already done so to switch to Biber as their backend; it has long been a requirement for the author-date styles, but it is now becoming indispensable for accessing all the features of the notes & bibliography style, as well.
Or
## 2. Provide the missing toggle yourself
Just put

\makeatletter
\providetoggle{blx@skipbiblist}
\makeatother

in your preamble after loading biblatex-chicago (the \makeatletter/\makeatother pair is needed because the toggle name contains an @, see the comments).

MWE

\documentclass{article}
\usepackage[notes,backend=bibtex]{biblatex-chicago}
\makeatletter
\providetoggle{blx@skipbiblist}
\makeatother
• I have filed a bug report biblatex bug tracker #267: skipbiblist vs skiplos, in the meantime you can use one of the two fixes above. – moewe Sep 10 '14 at 6:10
• You have to put a \makeatletter before and \makeatotherafter the \providetoggle. – Sveinung Sep 10 '14 at 7:54
blx@skiplos has now been internally renamed to blx@skipbiblist (e9f4f3d) so with the upcoming version of biblatex that issue should be resolved. – moewe Sep 10 '14 at 10:12
• @Sveinung because at some point the argument must be placed between \csname ... \endcsname. They build the command name from the whole (expanded) argument, also with catcode 12 tokens... – clemens Sep 11 '14 at 5:59
http://clay6.com/qa/28226/find-the-first-5-terms-of-the-sequence-whose-general-term-is-t-n-1-5-
# Find the first $5$ terms of the sequence whose general term is $t_n=(-1)^{n-1}.\:5^{n+1}$
$25, 125, 625, 3125, 15625$
$25, -125, 625, 3125, 15625$
$25, 125, 625, -3125, 15625$
$25, -125, 625, -3125, 15625$
Given: $t_n=(-1)^{n-1}.\:5^{n+1}$
By putting $n=1,2,\ldots,5$ we get the first 5 terms as
$t_1=(-1)^{1-1}.\:5^{1+1}=25$
$t_2=(-1)^{2-1}.\:5^{2+1}=-125$
$t_3=(-1)^{3-1}.\:5^{3+1}=625$
$t_4=(-1)^{4-1}.\:5^{4+1}=-3125$
$t_5=(-1)^{5-1}.\:5^{5+1}=15625$
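The same computation in a line of Python (a sketch for checking the arithmetic):

```python
terms = [(-1) ** (n - 1) * 5 ** (n + 1) for n in range(1, 6)]
print(terms)   # [25, -125, 625, -3125, 15625]
```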
https://www.groundai.com/project/analysis-and-synthesis-of-attractive-quantum-markovian-dynamics/
# Analysis and synthesis of attractive quantum Markovian dynamics
Francesco Ticozzi and Lorenza Viola F. Ticozzi is with the Dipartimento di Ingegneria dell’Informazione, Università di Padova, via Gradenigo 6/B, 35131 Padova, Italy (ticozzi@dei.unipd.it).L. Viola is with the Department of Physics and Astronomy, Dartmouth College, 6127 Wilder Laboratory, Hanover, NH 03755, USA (Lorenza.Viola@Dartmouth.edu).
###### Abstract
We propose a general framework for investigating a large class of stabilization problems in Markovian quantum systems. Building on the notions of invariant and attractive quantum subsystem, we characterize attractive subspaces by exploring the structure of the invariant sets for the dynamics. Our general analysis results are exploited to assess the ability of open-loop Hamiltonian and output-feedback control strategies to synthesize Markovian generators which stabilize a target subsystem, subspace, or pure-state. In particular, we provide an algebraic characterization of the manifold of stabilizable pure states in arbitrary finite-dimensional Markovian systems, that leads to a constructive strategy for designing the relevant controllers. Implications for stabilization of entangled pure states are addressed by example.
Keywords: Quantum control; Quantum dynamical semigroups; Quantum subsystems.
## 1 Introduction
Stabilization problems have a growing significance for a variety of quantum control applications, ranging from state preparation of optical, atomic, and nano-mechanical systems to the generation of noise-protected realizations of quantum information in realistic devices [1, 2]. Dynamical systems undergoing Markovian evolution [3, 4] are relevant from the standpoint of typifying irreversible quantum dynamics and present distinctive control challenges [5]. In particular, open-loop quantum-engineering and (approximate) stabilization methods based on dynamical decoupling cease to be viable in the Markovian regime [6, 7]. It is our goal in this work to show how a wide class of Markovian stabilization problems can nevertheless be effectively treated within a general framework, provided by invariant and attractive quantum subsystems.
After providing the relevant technical background, we proceed by establishing a first analysis result that fully characterizes the attractive subspaces for a given generator. This is done by analyzing the structure induced by the generator in the system's Hilbert space, and by invoking the Krasovskii-LaSalle invariance principle. We next explore the application of the result to stabilization problems for Markovian Hamiltonian and output-feedback control. Our approach leads to a complete characterization of the stabilizable pure states, subspaces, and subsystems, as well as to constructive design strategies for the control parameters. Some partial results in this sense have been presented in [8] and in the conference paper [9]. We also refer to the journal article [8] for a more detailed discussion of the connection between invariant, attractive and noiseless subsystems, along with a thorough analysis of model robustness issues which shall not be our focus here.
## 2 Preliminaries and background
### 2.1 Quantum Markov processes
Consider a separable Hilbert space $\mathcal{H}$ over the complex field $\mathbb{C}$. Let $\mathcal{B}(\mathcal{H})$ represent the set of linear bounded operators on $\mathcal{H}$, with $\mathfrak{h}(\mathcal{H})$ denoting the real subspace of Hermitian operators, and $I$, $0$ being the identity and the zero operator, respectively. Throughout our analysis, we consider a finite-dimensional quantum system $Q$: following the standard quantum statistical mechanics formalism [10], we associate to $Q$ a complex, finite-dimensional $\mathcal{H}$. Our (possibly uncertain) knowledge of the state of $Q$ is condensed in a density operator $\rho$ on $\mathcal{H}$, with $\rho \geq 0$ and $\operatorname{trace}(\rho) = 1$. Density operators form a convex set $\mathcal{D}(\mathcal{H}) \subset \mathfrak{h}(\mathcal{H})$, with one-dimensional projectors corresponding to extreme points (pure states, $\rho_{|\psi\rangle} = |\psi\rangle\langle\psi|$). Observable quantities are represented by Hermitian operators in $\mathfrak{h}(\mathcal{H})$, and expectation values are computed by using the trace functional: $\mathbb{E}_\rho(X) = \operatorname{trace}(\rho X)$. If $Q$ is the composite system obtained from two distinguishable quantum systems $Q_1$ and $Q_2$, the corresponding mathematical description is carried out in the tensor product space $\mathcal{H}_{12} = \mathcal{H}_1 \otimes \mathcal{H}_2$, observables and density operators being associated with Hermitian and positive-semidefinite, normalized operators on $\mathcal{H}_{12}$, respectively. The partial trace over $\mathcal{H}_2$ is the unique linear operator $\operatorname{trace}_2(\cdot): \mathcal{B}(\mathcal{H}_{12}) \rightarrow \mathcal{B}(\mathcal{H}_1)$ ensuring that for every $X_1 \in \mathcal{B}(\mathcal{H}_1)$, $X_2 \in \mathcal{B}(\mathcal{H}_2)$, $\operatorname{trace}_2(X_1 \otimes X_2) = X_1 \operatorname{trace}(X_2)$. Partial trace is used to compute marginal states and partial expectations on multipartite systems.
In the presence of either intended or unwanted couplings (such as with a measurement apparatus, or with a surrounding quantum environment), the evolution of a subsystem of interest is no longer unitary and reversible, and the general formalism of open quantum systems is required [11, 3, 4]. A wide class of open quantum systems obeys Markovian dynamics [3, 12, 13, 4]. Let $I$ denote the physical quantum system of interest, with associated Hilbert space $\mathcal{H}_I$. Assume that we have no access or control over the state of the system's environment, and that the dynamics in $\mathcal{D}(\mathcal{H}_I)$ is continuous in time and described at each instant $t$ by a Trace-Preserving Completely Positive (TPCP) linear map $\mathcal{T}_t(\cdot)$ [14]. If a forward composition law is also assumed, we obtain a quantum Markov process, or Quantum Dynamical Semigroup (QDS):
###### Definition 1 (Qds)
A quantum dynamical semigroup is a one-parameter family of TPCP maps that satisfies:
• $\mathcal{T}_0 = \mathcal{I}$, $\mathcal{T}_{t+s} = \mathcal{T}_t \circ \mathcal{T}_s$ for all $t, s \geq 0$;
• $\operatorname{trace}(\mathcal{T}_t(\rho)X)$ is a continuous function of $t$, for every $\rho \in \mathcal{D}(\mathcal{H}_I)$, $X \in \mathcal{B}(\mathcal{H}_I)$.
Due to the trace- and positivity-preserving assumptions, a QDS is a semigroup of contractions. As proven in [12, 15], the Hille-Yosida generator for the semigroup exists and can be cast in the following canonical form:
$$\dot\rho(t) = \mathcal{L}(\rho(t)) = -\frac{i}{\hbar}\,[H, \rho(t)] + \sum_{k=1}^{p} \gamma_k\, \mathcal{D}(L_k, \rho(t)) = -\frac{i}{\hbar}\,[H, \rho(t)] + \sum_{k=1}^{p} \gamma_k \left( L_k \rho(t) L_k^\dagger - \frac{1}{2}\{L_k^\dagger L_k, \rho(t)\} \right), \tag{2.1}$$
with $\{\gamma_k\}$ denoting the spectrum of the (positive-semidefinite) matrix of coefficients of the generator. The effective Hamiltonian $H$ and the noise operators $\{L_k\}$ (also known as "Lindblad operators") completely specify the dynamics, including the effect of the Markovian environment. In general, $H$ is equal to the Hamiltonian for the isolated, free evolution of the system, $H_0$, plus a correction induced by the coupling to the environment (aka "Lamb shift"). The non-Hamiltonian terms in (2.1) account for the non-unitary character of the dynamics, specified by the noise operators $\{L_k\}$.
In principle, the exact form of the generator of a QDS may be rigorously derived from a Hamiltonian model for the joint system-environment dynamics under appropriate limiting conditions (the so-called "singular coupling limit" or the "weak coupling limit," respectively [3, 4]). In most physical situations, however, an analytical derivation is unfeasible, since the full microscopic Hamiltonian describing the system-environment interaction is unavailable. A Markovian generator of the form (2.1) is then postulated on a phenomenological basis. In practice, it is often the case that knowledge of the noise effect may be assumed, allowing to specify the Markovian generator by directly assigning a set of noise operators $\{L_k\}$ (not necessarily orthogonal or complete) in (2.1), and the corresponding noise strengths $\gamma_k$. Each of the noise operators $L_k$ may be associated to a distinct noise channel, by which information irreversibly leaks from the system to the environment.
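For concreteness, here is a minimal NumPy sketch of evaluating the canonical generator (2.1) (my own illustration, with ℏ = 1 and an arbitrary single-qubit example; not taken from the paper):

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, gammas):
    """Canonical QDS generator: L(rho) = -i[H, rho] + sum_k gamma_k * D(L_k, rho)."""
    out = -1j * (H @ rho - rho @ H)
    for L, g in zip(Ls, gammas):
        LdL = L.conj().T @ L
        out += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

# Single qubit: free Hamiltonian along z, one decay channel
sz = np.diag([1.0, -1.0]).astype(complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator (basis convention varies)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
print(lindblad_rhs(rho, 0.5 * sz, [sm], [0.1]))  # traceless, as expected of a TPCP generator
```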
### 2.2 Quantum subsystems: Invariance and attractivity
Quantum subsystems are the basic building block for describing composite systems in quantum mechanics [10], and provide a general framework for scalable quantum information engineering in physical systems. In fact, the so-called subsystem principle [2, 1, 16] states that any "faithful" representation of information in a quantum system requires the specification of a subsystem carrying the desired information. Many of the control tasks considered in this paper are motivated by the need for strategies to create and maintain quantum information in open quantum systems. A definition of quantum subsystem suitable to our scopes is the following:
###### Definition 2 (Quantum subsystem)
A quantum subsystem $S$ of a system $I$ defined on $\mathcal{H}_I$ is a quantum system whose state space is a tensor factor $\mathcal{H}_S$ of a subspace $\mathcal{H}_{SF}$ of $\mathcal{H}_I$,
$$\mathcal{H}_I = \mathcal{H}_{SF} \oplus \mathcal{H}_R = (\mathcal{H}_S \otimes \mathcal{H}_F) \oplus \mathcal{H}_R, \tag{2}$$
for some co-factor $\mathcal{H}_F$ and remainder space $\mathcal{H}_R$. The set of linear operators on $S$, $\mathcal{B}(\mathcal{H}_S)$, has the same statistical properties as, and is isomorphic to, the (associative) subalgebra of $\mathcal{B}(\mathcal{H}_I)$ of operators of the form $X_S \otimes I_F \oplus 0_R$.
Let $n = \dim(\mathcal{H}_S)$, $f = \dim(\mathcal{H}_F)$, $r = \dim(\mathcal{H}_R)$, and let $\{|\phi_j^S\rangle\}_{j=1}^{n}$, $\{|\phi_k^F\rangle\}_{k=1}^{f}$, $\{|\phi_l^R\rangle\}_{l=1}^{r}$ be orthonormal bases for $\mathcal{H}_S$, $\mathcal{H}_F$, $\mathcal{H}_R$, respectively. The decomposition (2) is then naturally associated with the following basis for $\mathcal{H}_I$:
$$\{|\varphi_m\rangle\} = \{|\phi_j^S\rangle \otimes |\phi_k^F\rangle\}_{j,k=1}^{n,f} \cup \{|\phi_l^R\rangle\}_{l=1}^{r}.$$
This induces a block structure for matrices acting on $\mathcal{H}_I$:
$$X = \begin{pmatrix} X_{SF} & X_P \\ X_Q & X_R \end{pmatrix}, \tag{3}$$
where, in general, $X_{SF} \in \mathcal{B}(\mathcal{H}_{SF})$, $X_P \in \mathcal{B}(\mathcal{H}_R, \mathcal{H}_{SF})$, $X_Q \in \mathcal{B}(\mathcal{H}_{SF}, \mathcal{H}_R)$ and $X_R \in \mathcal{B}(\mathcal{H}_R)$. We denote by $\Pi_{SF}$ the projector onto $\mathcal{H}_{SF}$, that is, $\Pi_{SF}: \mathcal{H}_I \rightarrow \mathcal{H}_{SF}$.
In this paper, we study Markov dynamics of a quantum system with a given decomposition of the associated Hilbert space of the form (2), with respect to the quantum subsystem associated to $\mathcal{H}_S$. By describing the dynamics in the Schrödinger picture, with evolving states and time-invariant observables, the first step is to specify whether the system has been properly initialized in a state which faithfully represents a state of the subsystem $S$, and what is the structure of such states.
###### Definition 3 (State initialization)
The system $I$ with state $\rho$ is initialized in $\mathcal{H}_S$ with state $\rho_S$ if the blocks of $\rho$ satisfy:
$$\text{(i) } \rho_{SF} = \rho_S \otimes \rho_F \text{ for some } \rho_F \in \mathcal{D}(\mathcal{H}_F); \qquad \text{(ii) } \rho_P = 0,\; \rho_R = 0.$$
We denote by $\mathcal{I}_S(\mathcal{H}_I)$ the set of states that satisfy (i)-(ii) for some $\rho_F$.
Condition (ii) guarantees that $\rho_S$ is a valid (normalized) state of $S$, while condition (i) ensures that measurements or dynamics affecting the factor $\mathcal{H}_F$ have no effect on the state in $\mathcal{H}_S$.
We now proceed to characterize in which sense, and under which conditions, a quantum subsystem may be defined as invariant. Recall that a set $\mathcal{W}$ is said to be invariant for a dynamical system if the trajectories that start in $\mathcal{W}$ remain in $\mathcal{W}$ for all $t \geq 0$. (For clarity, let us also recall other standard dynamical systems notions relevant in our context: given a suitable norm for the state manifold, we call invariant, stationary, or equilibrium state any $\bar\rho$ such that $\mathcal{T}_t(\bar\rho) = \bar\rho$ for every $t \geq 0$. An equilibrium state is said to be stable if for every $\varepsilon > 0$ there exists $\delta > 0$ such that, if $\|\rho_0 - \bar\rho\| \leq \delta$, then any trajectory starting from $\rho_0$ does not leave the ball of radius $\varepsilon$ centered in $\bar\rho$. A state is said to be (globally) attractive if the trajectories from any initial condition converge to it.) In view of Definition 3, the natural definition considering dynamics in the state space may be phrased as follows:
###### Definition 4 (Invariance)
Let $I$ evolve under a family of TPCP maps $\{\mathcal{T}_t\}_{t \geq 0}$. $S$ is an invariant subsystem if $\mathcal{I}_S(\mathcal{H}_I)$ is an invariant subset of $\mathcal{D}(\mathcal{H}_I)$.
In explicit form, as given in [8], this means that, for every $\rho_S \in \mathcal{D}(\mathcal{H}_S)$ and $\rho_F \in \mathcal{D}(\mathcal{H}_F)$, the state of $I$ obeys
$$\mathcal{T}_t\!\begin{pmatrix} \rho_S \otimes \rho_F & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} \mathcal{T}^S_t(\rho_S) \otimes \mathcal{T}^F_t(\rho_F) & 0 \\ 0 & 0 \end{pmatrix}, \quad t \geq 0, \tag{4}$$
where, for every $t \geq 0$, $\mathcal{T}^S_t$ and $\mathcal{T}^F_t$ are TPCP maps on $\mathcal{D}(\mathcal{H}_S)$ and $\mathcal{D}(\mathcal{H}_F)$, respectively, not depending on the initial state. For Markovian evolution of $I$, $\{\mathcal{T}^S_t\}$ and $\{\mathcal{T}^F_t\}$ are required to be QDSs on their respective domains.
We next recall a characterization of dynamical models able to ensure invariance for a fixed subsystem, based on appropriately constraining the block-structure of the matrix representation of the operators specifying the Markovian generator. We refer to [8] for the proofs.
###### Lemma 1 (Markovian invariance)
Assume that $\mathcal{H}_I = (\mathcal{H}_S \otimes \mathcal{H}_F) \oplus \mathcal{H}_R$, and let $H$, $\{L_k\}$ be the Hamiltonian and the error generators of a Markovian QDS as in (2.1). Then $\mathcal{H}_{SF}$ supports an invariant subsystem iff $\forall k$:
$$L_k = \begin{pmatrix} L_{S,k} \otimes L_{F,k} & L_{P,k} \\ 0 & L_{R,k} \end{pmatrix}, \qquad iH_P - \frac{1}{2}\sum_k (L_{S,k}^\dagger \otimes L_{F,k}^\dagger) L_{P,k} = 0, \qquad H_{SF} = H_S \otimes I_F + I_S \otimes H_F, \tag{5}$$
where for each either or (or both).
One may require $S$ to have dynamics independent from the evolution affecting $\mathcal{H}_F$ and $\mathcal{H}_R$ also in the case where the state is not initialized in the sense of Definition 3. If conditions (i)-(ii) are not satisfied, one may still define an unnormalized reduced state for the subsystem:
$$\tilde\rho_S = \operatorname{trace}_F(\Pi_{SF}\, \rho\, \Pi_{SF}^\dagger), \qquad \operatorname{trace}(\tilde\rho_S) \leq 1.$$
This allows for entangled mixed states to be supported on $\mathcal{H}_{SF}$, as well as for the blocks $\rho_P$, $\rho_Q$, $\rho_R$ to differ from zero. Similar to the case of so-called "initialization-free" subsystems considered in [17, 8], an additional constraint on the Lindblad operators is required to ensure independent reduced dynamics in this case. That is, with respect to the matrix block-decomposition above, it must be $L_{P,k} = 0$ for every $k$. Such a constraint decouples the evolution of the $SF$-block of the state from the rest, rendering both $\mathcal{H}_{SF}$ and $\mathcal{H}_R$ separately invariant.
This imposes tighter conditions on the noise operators, which may be hard to ensure in reality and, from a control perspective, leave less room for Hamiltonian compensation as examined in Section 4.1. In order to address situations where such extra constraints cannot be met, as well as a question which is interesting on its own, we explore conditions for a subsystem to be attractive:
###### Definition 5 (Attractive subsystem)
Assume that $\mathcal{H}_I = (\mathcal{H}_S \otimes \mathcal{H}_F) \oplus \mathcal{H}_R$. Then $\mathcal{H}_{SF}$ supports an attractive subsystem with respect to a family of TPCP maps $\{\mathcal{T}_t\}_{t \geq 0}$ if the following condition is asymptotically obeyed:
$$\lim_{t \rightarrow \infty}\left( \mathcal{T}_t(\rho) - \begin{pmatrix} \bar\rho_S(t) \otimes \bar\rho_F(t) & 0 \\ 0 & 0 \end{pmatrix} \right) = 0, \tag{6}$$
where
$$\bar\rho_S(t) = \operatorname{trace}_F\!\left[\Pi_{SF}\, \mathcal{T}_t(\rho)\, \Pi_{SF}^\dagger\right], \qquad \bar\rho_F(t) = \operatorname{trace}_S\!\left[\Pi_{SF}\, \mathcal{T}_t(\rho)\, \Pi_{SF}^\dagger\right].$$
This implies that every trajectory in $\mathcal{D}(\mathcal{H}_I)$ converges to $\mathcal{I}_S(\mathcal{H}_I)$. Thus, an attractive subsystem may be thought of as a subsystem that "self-initializes" in the long-time limit, by reabsorbing initialization errors. Although such a desirable behavior only emerges asymptotically, for QDSs one can see that convergence is exponential, as long as the relevant eigenvalues of $\mathcal{L}$ have strictly negative real part.
We conclude this section by recalling two partial results on attractive subsystems which we established in [8]. The first is a negative result, which shows, in particular, how the possibility of “initialization-free” and attractive behavior are mutually exclusive.
###### Proposition 1
Assume that $\mathcal{H}_I = (\mathcal{H}_S \otimes \mathcal{H}_F) \oplus \mathcal{H}_R$, and let $H$, $\{L_k\}$ be the Hamiltonian and the error generators as in (2.1), respectively. Let $\mathcal{H}_{SF}$ support an invariant subsystem. If $L_{P,k} = 0$ for every $k$, then the subsystem is not attractive.
Note that the conditions of the above Proposition are obeyed, in particular, if $L_k = L_k^\dagger$ for every $k$. As a consequence, attractivity is never possible for the class of unital ($I$-preserving) Markovian QDSs with purely self-adjoint $L_k$'s. Still, even if the condition $L_{P,k} = 0$ holds, attractive subsystems may exist in the pure-factor case, where $\dim(\mathcal{H}_F) = 1$. Sufficient conditions are provided by the following:
###### Proposition 2
Assume that $\mathcal{H}_I = (\mathcal{H}_S \otimes \mathcal{H}_F) \oplus \mathcal{H}_R$, and let $\mathcal{H}_{SF}$ be invariant under a QDS with generator of the form
$$\mathcal{L} = \mathcal{L}_S \otimes \mathcal{I}_F + \mathcal{I}_S \otimes \mathcal{L}_F.$$
If $\mathcal{L}_F$ has a unique attractive state $\bar\rho_F$, then $S$ is attractive.
Interesting linear-algebraic conditions for determining whether a generator has a unique attractive state (though not necessarily pure) are presented in [18, 19].
## 3 Characterizing attractive Markovian dynamics
We begin by presenting new necessary and sufficient conditions for attractivity of a subspace, which will provide the basis for the synthesis results in the next sections. Notice that if $\mathcal{H}_{SF}$ supports an attractive subsystem, the entire set of states with support on $\mathcal{H}_{SF}$ is attractive. Once this is verified, the dynamics confined to the invariant subspace (that supports a pure subsystem) may be studied with the aid of the results recalled in the previous section. The following Lemma will be used in the proof of the main result, but is also interesting on its own. We denote with $\operatorname{supp}(\rho)$ the support of $\rho$, i.e., the orthogonal complement of its kernel.
###### Lemma 2
Let $\mathcal{W}$ be an invariant subset of $\mathcal{D}(\mathcal{H}_I)$ for the QDS dynamics generated by (2.1), and define:
$$\mathcal{H}_W = \operatorname{supp}(\mathcal{W}) = \bigcup_{\rho \in \mathcal{W}} \operatorname{supp}(\rho).$$
Then $\mathcal{H}_W$ is invariant.
Proof. Let $\overline{\mathcal{W}}$ be the convex hull of $\mathcal{W}$. Thus, every element $\hat\rho$ of $\overline{\mathcal{W}}$ may be expressed as $\hat\rho = \sum_k p_k \rho_k$, where $\rho_k \in \mathcal{W}$, $p_k > 0$ and $\sum_k p_k = 1$. By using linearity of the dynamics,
$$\mathcal{T}_t(\hat\rho) = \sum_k p_k\, \mathcal{T}_t(\rho_k) = \sum_k p_k\, \rho_k', \quad \forall t \geq 0,$$
with $\rho_k' \in \mathcal{W}$. Hence $\overline{\mathcal{W}}$ is invariant. Furthermore, from the definition of $\mathcal{H}_W$, there exists a $\bar\rho \in \overline{\mathcal{W}}$ such that $\operatorname{supp}(\bar\rho) = \mathcal{H}_W$. Consider the decomposition $\mathcal{H}_I = \mathcal{H}_W \oplus \mathcal{H}_W^\perp$ and the corresponding matrix partitioning:
$$X = \begin{pmatrix} X_W & X_P \\ X_Q & X_R \end{pmatrix}.$$
With respect to this partition, the block $\bar\rho_W$ of $\bar\rho$ is full-rank, while the remaining blocks of $\bar\rho$ are zero. The trajectory is contained in $\overline{\mathcal{W}}$ only if:
$$\frac{d}{dt}\bar\rho = \begin{pmatrix} \mathcal{L}_W(\bar\rho_W) & 0 \\ 0 & 0 \end{pmatrix},$$
so that, upon computing explicitly the generator blocks, we must impose:
$$\begin{cases} \bar\rho_W \left( iH_P - \frac{1}{2}\sum_k L_{W,k}^\dagger L_{P,k} \right) = 0, \\ -\frac{1}{2}\sum_k \{ L_{Q,k}^\dagger L_{Q,k},\, \bar\rho_W \} = 0. \end{cases}$$
Since $\bar\rho_W$ is full-rank and positive, it must be:
$$\begin{cases} iH_P - \frac{1}{2}\sum_k L_{W,k}^\dagger L_{P,k} = 0, \\ L_{Q,k} = 0, \quad \forall k. \end{cases}$$
Comparing with the conditions given in Corollary 1, we infer that $\mathcal{H}_W$ is invariant, hence we conclude.
We are now in a position to prove our main result:
###### Theorem 1 (Subspace attractivity)
Let $\mathcal{H}_I = \mathcal{H}_S \oplus \mathcal{H}_R$, and assume that $\mathcal{H}_S$ is an invariant subspace for the QDS dynamics generated by (2.1). Define:
$$\mathcal{H}_{R'} = \bigcap_{k=1}^{p} \ker(L_{P,k}), \tag{7}$$
with the matrix blocks $L_{P,k}$ representing linear operators from $\mathcal{H}_R$ to $\mathcal{H}_S$. Then $\mathcal{H}_S$ is an attractive subspace iff $\mathcal{H}_{R'}$ does not support any invariant subsystem.
Proof. Clearly, if $\mathcal{H}_{R'}$ supports an invariant set, then $\mathcal{H}_S$ cannot be attractive, since the dynamics of states in that set is confined to $\mathcal{H}_{R'}$. To prove the other implication, we shall prove that if $\mathcal{H}_{R'}$ does not support an invariant set, then $\mathcal{H}_S$ is attractive. Consider the non-negative, linear functional $V(\rho) = \operatorname{trace}(\rho_R)$. It is zero iff $\rho$ has support contained in $\mathcal{H}_S$, i.e., for perfectly initialized states. By LaSalle's invariance principle (see e.g. [20]), every trajectory will converge to the largest invariant subset contained in the set:
$$\mathcal{Z} = \{ \rho \in \mathcal{D}(\mathcal{H}_I) \;|\; \dot V(\rho) = 0 \}.$$
Explicit calculation of the blocks of the generator (see also [8]) yields:
$$\dot V(\rho) = \operatorname{trace}\big(\mathcal{L}(\rho)_R\big) = -\sum_k \gamma_k\, \operatorname{trace}\big( L_{P,k}^\dagger L_{P,k}\, \rho_R \big).$$
By the cyclic property of the trace, the last term is equivalent to the trace of $\sum_k \gamma_k\, L_{P,k}\, \rho_R\, L_{P,k}^\dagger$, which is a sum of positive operators, and thus can be zero iff each term is zero. Being the $\gamma_k$'s fixed, this can hold iff $\rho_R$ has support contained in $\mathcal{H}_{R'}$, defined as above. Thus, the support of any state in $\mathcal{Z}$ is contained in $\mathcal{H}_S \oplus \mathcal{H}_{R'}$. Call $\mathcal{H}_Z$ the support of the maximal invariant set in $\mathcal{Z}$. By Lemma 2, $\mathcal{H}_Z$ is invariant. Recalling that by hypothesis $\mathcal{H}_S$ is itself an invariant subset contained in $\mathcal{Z}$, it must be $\mathcal{H}_Z = \mathcal{H}_S \oplus \mathcal{H}_T$, with $\mathcal{H}_T \subseteq \mathcal{H}_{R'}$. We next prove that $\mathcal{H}_T$ is non-trivial iff $\mathcal{H}_{R'}$ supports an invariant set. (For the scope of this proof, the "if" implication would suffice, but since the converse arises naturally, we prove both.) Consider a state $\hat\rho$ in the maximal invariant set that has non-trivial support on $\mathcal{H}_T$. If no such state exists, the maximal invariant set has support only on $\mathcal{H}_S$. If such a state exists, let
$$\hat\rho' = \frac{\Pi_{R'}\, \hat\rho\, \Pi_{R'}}{\operatorname{trace}(\Pi_{R'}\, \hat\rho)},$$
where $\Pi_{R'}$ is the orthogonal projector on $\mathcal{H}_{R'}$. Since $\hat\rho'$ has support only on $\mathcal{H}_{R'}$, its trajectory is confined to the invariant set; by observing that the dynamics and the partial traces are continuous, we can conclude that the trajectory must have support only on some subspace of $\mathcal{H}_{R'}$, endowing $\mathcal{H}_{R'}$ with an invariant set, and, by the Lemma above, with the invariant subsystem associated to it. We conclude by observing that if $\mathcal{H}_{R'}$ does not support an invariant set, then $\mathcal{H}_T$ is trivial, hence $\mathcal{H}_S$ is attractive.
In spite of its non-constructive nature, the power of this characterization will be apparent in the proofs of the results concerning active stabilization of states and subspaces by Hamiltonian control in the next section. We observe here that Lemma 2 lends itself to the following useful specialization:
###### Proposition 3
If $\bar\rho$ is an invariant state for the QDS dynamics generated by (2.1) and $\mathcal{H}_W = \operatorname{supp}(\bar\rho)$, then $\mathcal{H}_W$ is invariant. Conversely, if $\mathcal{H}_W$ supports an invariant subset, it contains at least an invariant state.
Proof. The first implication follows from Lemma 2 above. If $\mathcal{H}_W$ supports an invariant subset, then by the same Lemma it supports an invariant subsystem, and the density operators with support on $\mathcal{H}_W$ form a convex, compact set that evolves according to a (reduced) QDS. Hence, it must admit at least an invariant state [3].
This result provides us with an explicit criterion for verifying whether a subspace contains an invariant subset: it will suffice to check whether it supports an invariant state. Invariant (or "fixed") states may be found by analyzing the structure of the kernel of the generator. An efficient algorithm for generic TPCP maps has been recently presented in [16].
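Numerically, one simple (if not the most efficient) way to look for such fixed states is to vectorize the generator and compute the kernel of the resulting matrix. Here is a sketch under the same conventions as the earlier snippet (my own illustration, not the algorithm of [16]):

```python
import numpy as np
from scipy.linalg import null_space

def liouvillian_matrix(H, Ls, gammas):
    """Matrix of L acting on column-stacked rho, via vec(AXB) = (B.T kron A) vec(X)."""
    d = H.shape[0]
    I = np.eye(d, dtype=complex)
    M = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for L, g in zip(Ls, gammas):
        LdL = L.conj().T @ L
        M += g * (np.kron(L.conj(), L)
                  - 0.5 * np.kron(I, LdL)
                  - 0.5 * np.kron(LdL.T, I))
    return M

def invariant_states(H, Ls, gammas, tol=1e-10):
    d = H.shape[0]
    kernel = null_space(liouvillian_matrix(H, Ls, gammas), rcond=tol)
    # Each kernel vector, reshaped column-major, spans the fixed-point set;
    # invariant density operators live in the Hermitian, positive part of this span.
    return [kernel[:, j].reshape(d, d, order="F") for j in range(kernel.shape[1])]
```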
## 4 Engineering attractive Markovian dynamics
In this section, we illustrate the relevance of the theoretical framework developed thus far to a wide class of Markovian stabilization problems associated with the task of making a desired (fixed) quantum subsystem invariant or attractive. Interestingly, these problems may be regarded as instances of Markovian reservoir engineering, which has long been investigated on a phenomenological basis by the physics community in the context of both decoherence mitigation and the quantum-classical transition, see e.g. [21, 22, 23].
In the special yet relevant case of synthesizing attractive dynamics with respect to an intended pure state, our results fully characterize the manifold of pure states that may be "prepared" given a reference dissipative dynamics using either open-loop Hamiltonian or feedback control resources. As discussed in [8], provided a sufficient level of accuracy in tuning the relevant control parameters may be ensured, the "direct" Markovian feedback considered here has the important advantage of substantially relaxing implementation constraints in comparison with "Bayesian" feedback techniques requiring real-time state estimation update [24, 25].
### 4.1 Open-loop Hamiltonian control
We begin by exploring what can be achieved by considering only open-loop Hamiltonian control, specifically, the application of time-independent Hamiltonians to the dynamical generator. This allows us to consider generators involving, in general, multiple noise operators, and yields interesting characterizations of the possibilities offered by this class of controls for stabilization problems, complementing previous work from a controllability perspective [26, 27]. The results established below will also be of key importance in the proofs of the theorems on closed-loop stabilization. Lastly, a separate presentation will serve to clarify the different scopes and limitations of the two classes of control strategies. As a direct consequence of the Markovian invariance theorem, we have the following:
###### Corollary 1 (Open-loop invariant subspaces)
Let $\mathcal{H}_I = \mathcal{H}_S \oplus \mathcal{H}_R$. Then $\mathcal{H}_S$ can be made invariant by open-loop Hamiltonian control iff $L_{Q,k} = 0$ for every $k$.
Proof. By specializing Corollary 1, $\mathcal{H}_S$ supports an invariant subsystem iff:
$$L_{Q,k} = 0, \quad \forall k, \tag{8}$$
$$iH_P - \frac{1}{2}\sum_k L_{S,k}^\dagger L_{P,k} = 0. \tag{9}$$
The only condition that is affected by a change of Hamiltonian is (9), which however can always be satisfied by an appropriate choice of control Hamiltonian. This leaves us with condition (8) alone.
The above result makes it possible to enforce invariant subspaces for the controlled dynamics by solely using Hamiltonian resources, without directly modifying the non-unitary part. The ability of open-loop Hamiltonian control to induce stronger attractivity properties is characterized in the following:
###### Theorem 2 (Open-loop attractive subspaces)
Let $\mathcal{H}_I = \mathcal{H}_S \oplus \mathcal{H}_R$ and assume that $\mathcal{H}_S$ supports an invariant subsystem. Then $\mathcal{H}_S$ can be made attractive by open-loop Hamiltonian control iff $\mathcal{H}_R$ is not invariant.
Proof. If $\mathcal{H}_R$ supports an invariant subsystem, then by Corollary 1 it must be $L_{P,k} = 0$ for every $k$. Since $\mathcal{H}_S$ is invariant, this implies that each $L_k$ is block-diagonal. Any Hamiltonian control perturbation that preserves invariance on $\mathcal{H}_S$ must satisfy this condition, hence preserve invariance on $\mathcal{H}_R$ too, thus $\mathcal{H}_S$ cannot be rendered attractive. If the whole $\mathcal{H}_R$ does not support an invariant subsystem, we can devise an iterative procedure that builds up a control Hamiltonian such that $\mathcal{H}_S$ becomes attractive. Theorem 1 states that if there is no invariant subsystem supported in $\mathcal{H}_{R'}$ (defined in (7)), then $\mathcal{H}_S$ is attractive. If there is an invariant subsystem with support $\mathcal{H}_T \subseteq \mathcal{H}_{R'}$, let us consider the following Hilbert space decomposition:
$$\mathcal{H}_I = \mathcal{H}_S \oplus \mathcal{H}_T \oplus \mathcal{H}_Z.$$
After imposing the invariance conditions on $\mathcal{H}_S$ and $\mathcal{H}_T$, the associated block-decomposition of the Lindblad operators and Hamiltonian turns out to be of the form:
$$L_k = \begin{pmatrix} L_{S,k} & 0 & L_{P',k} \\ 0 & L_{T,k} & L_{P'',k} \\ 0 & 0 & L_{Z,k} \end{pmatrix},$$
subject to the conditions:
$$iH_{P'} - \frac{1}{2}\sum_k L_{S,k}^\dagger L_{P',k} = 0, \qquad iH_{P''} - \frac{1}{2}\sum_k L_{T,k}^\dagger L_{P'',k} = 0.$$
One sees that the most general Hamiltonian perturbation that preserves the invariance of $\mathcal{H}_S$ has the form:
$$H_c = \begin{pmatrix} H_1 & 0 & 0 \\ 0 & H_2 & H_M \\ 0 & H_M^\dagger & H_3 \end{pmatrix}.$$
Consider a control Hamiltonian such that the block $H_M$ has full column-rank, while the remaining blocks are arbitrary or still to be determined. If this leaves no subspace on which the conditions in Corollary 1 can be satisfied, then $\mathcal{H}_T$ cannot support any invariant subsystem. Conversely, choosing an $H_M$ as above, by dimension comparison $H_M$ may have a non-trivial left kernel, and thus there may exist a subspace that supports an invariant subsystem whose dimension is strictly less than the dimension of $\mathcal{H}_T$. We can then iterate the reasoning with a new, refined decomposition. With this decomposition, the generator matrices exhibit the same block structure as above, and we can exploit the freedom of choice on the block $H_M$ to further reduce the dimension of the invariant set. At each iteration, the procedure either stops, rendering $\mathcal{H}_S$ attractive, or decreases the dimension of the invariant set by at least one; it thus ends in a finite number of steps.
Remarkably, the proof of the above theorem, combined with a strategy to find invariant subspaces, provides a constructive procedure to build a constant Hamiltonian that makes the desired invariant subspace attractive whenever the Theorem's hypotheses are satisfied.
### 4.2 Markovian feedback control
The potential of Hamiltonian compensation for controlling Markovian evolutions is clearly limited by the impossibility to directly modify the noise action. For our scopes, open-loop control is then mostly devoted to connecting subspaces that are already invariant, and to adjusting the generator parameters so that the interplay between Hamiltonian and dissipative contributions (as in Eq. (5)) can stabilize the desired subspace or subsystem.
A way to overcome these limitations is offered by closed-loop control strategies. Measurement-based feedback control requires the ability both to effectively monitor the environment and to condition the target evolution upon the measurement record. Feedback strategies have been considered since the beginning of the quantum control field [28], and successfully employed in a wide variety of settings (see e.g. [29, 30, 24, 31, 32]).
We focus on a measurement scheme which mimics optical homodyne detection for field-quadrature measurements, whereby the target system (e.g. an atomic cloud trapped in an optical cavity) is indirectly monitored via measurements of the outgoing laser field quadrature [29, 33]. The conditional dynamics of the state is stochastic, driven by the fluctuations one observes in the measurement. Considering a suitable infinitesimal feedback operator determined by a feedback Hamiltonian $F$, and taking the expectation with respect to the noise trajectories, one obtains the Wiseman-Milburn Markovian Feedback Master equation (FME) [29, 30]:
$$\dot\rho_t=-\frac{i}{\hbar}\Big[H+\frac{1}{2}(FM+M^\dagger F),\,\rho_t\Big]+\mathcal{D}(M-iF,\rho_t).\qquad(10)$$
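In other words, the controlled generator is again of Lindblad form, with effective Hamiltonian $H+\frac{1}{2}(FM+M^\dagger F)$ and effective noise operator $M-iF$. The assembly is a one-liner (a sketch with $\hbar=1$; the output pair can be fed to the `liouvillian` helper above):

```python
def fme_generator(H, M, F):
    """Effective (Hamiltonian, noise operator) pair of the FME (10), hbar = 1."""
    H_eff = H + 0.5 * (F @ M + M.conj().T @ F)
    L_eff = M - 1j * F
    return H_eff, L_eff
```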
The feedback state-stabilization problem for Markovian dynamics has been extensively studied for a single two-level system (qubit) [34, 35]. The standard approach is to design a Markovian feedback loop by assigning both the measurement and feedback operators, and to treat the measurement strength and the feedback gain as the relevant control parameters. Throughout the following section, we will assume more freedom, by considering, for a fixed measurement operator $M$, both the feedback Hamiltonian $F$ and a constant Hamiltonian correction as tunable controls.
###### Definition 6 (CHC)
A controlled FME of the form (10) supports complete Hamiltonian control (CHC) if (i) arbitrary feedback Hamiltonians $F$ may be enacted; (ii) arbitrary constant control perturbations $H_c$ may be added to the free Hamiltonian $H$.
This leads to both new insights and constructive control protocols, both for systems where the noise operator is a generalized angular momentum-type observable and for generic finite-dimensional systems. Physically, the CHC assumption must be carefully scrutinized on a case-by-case basis, since constraints tying the form of the Hamiltonian to the Lindblad operator may emerge, notably in the above-mentioned weak-coupling-limit derivations of Markovian models [3].
We now address the general subspace-stabilization problem for controlled Markovian dynamics described by FMEs. A characterization of the subspaces supporting stabilizable subsystems is provided by the following:
###### Theorem 3 (Feedback attractive subspaces)
Let $\mathcal{H}_I=\mathcal{H}_S\oplus\mathcal{H}_R$, with $\Pi_S$ being the orthogonal projection on $\mathcal{H}_S$. Assume CHC capabilities. Then, for any measurement operator $M$, there exist a feedback Hamiltonian $F$ and a Hamiltonian compensation $H_c$ that make the subsystem supported by $\mathcal{H}_S$ attractive for the FME (10) iff
$$[\Pi_S,\,(M+M^\dagger)]\neq 0.\qquad(11)$$
Proof. Write $M=M_H+iM_A$, with both $M_H$ and $M_A$ being Hermitian, thus $M+M^\dagger=2M_H$. Condition (11) holds iff $M_H$ is not block-diagonal when partitioned according to the chosen decomposition. If $M_H$ is block-diagonal, then, by Corollary 1, enforcing invariance of the subsystem supported by $\mathcal{H}_S$ requires that $F_P=iM_Q^\dagger$, so that $\tilde M_Q=0$. But then it must also be $\tilde M_P=M_P+M_Q^\dagger=2(M_H)_P=0$, so that $\mathcal{H}_R$ supports an invariant subsystem. Since this holds for any choice of $F$ and $H_c$ preserving invariance, by Theorem 2 it follows that $\mathcal{H}_S$ cannot be made attractive by Hamiltonian control. On the other hand, if $M_H$ is not block-diagonal, we can always find $F$ in such a way that $\tilde M=M-iF$ is upper block-triangular, by choosing $F_P=iM_Q^\dagger$. With $\tilde M$ as the new noise operator, we now have to devise a control Hamiltonian with a block $H_P$ that makes $\mathcal{H}_S$ invariant (this is always possible by Corollary 1, since $\tilde M_Q=0$), and a block $H_R$ constructed following the procedure in the proof of Theorem 2.
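The feedback choice used in the proof is explicit in block form: taking $F_P=iM_Q^\dagger$ (with, say, vanishing diagonal blocks) cancels the lower-left block of $\tilde M=M-iF$, while $\tilde M_P=M_P+M_Q^\dagger=2(M_H)_P$ survives exactly when (11) holds. A sketch, with $d_S=\dim\mathcal{H}_S$ (our helper name):

```python
import numpy as np

def upper_triangularizing_feedback(M, d_S):
    """Hermitian F with F_P = i*M_Q^dag, so that (M - iF)_Q = 0
    and (M - iF)_P = M_P + M_Q^dag."""
    n = M.shape[0]
    F = np.zeros((n, n), dtype=complex)
    M_Q = M[d_S:, :d_S]
    F[:d_S, d_S:] = 1j * M_Q.conj().T
    F[d_S:, :d_S] = -1j * M_Q
    return F
```

One checks directly that $F$ is Hermitian and that `(M - 1j*F)[d_S:, :d_S]` vanishes.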
The following specialization to pure states, i.e. one-dimensional subspaces, is immediate:
###### Corollary 2
Assume CHC. For any measurement operator $M$, there exist a feedback Hamiltonian $F$ and a Hamiltonian compensation $H_c$ able to stabilize an arbitrary desired pure state $\rho_d$ for the FME (10) iff
$$[\rho_d,\,(M+M^\dagger)]\neq 0.\qquad(12)$$
The proof of Theorem 3 provides a constructive algorithm for designing the feedback and correction Hamiltonians needed for the stabilization task. In particular, our analysis recovers the qubit stabilization results of [34] recalled before. For example, the states that are not stabilizable within the control assumptions of [34] are the ones commuting with the Hermitian part of $M$, that is, those for which $[\rho_d,M+M^\dagger]=0$. In the Bloch representation, the latter correspond precisely to the equatorial points.
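Condition (12) is a one-line numerical test. For illustration (our choice of detection operator, not prescribed by the text), take homodyne-type monitoring of a lowering operator, so that $M+M^\dagger=\sigma_x$ and the states commuting with $\sigma_x$ are flagged as non-stabilizable:

```python
import numpy as np

def pure_state_stabilizable(rho_d, M, tol=1e-12):
    """Condition (12): [rho_d, M + M^dag] != 0."""
    MH = M + M.conj().T
    return np.linalg.norm(rho_d @ MH - MH @ rho_d) > tol

sm = np.array([[0, 0], [1, 0]], complex)            # lowering operator, M + M^dag = sigma_x
ground = np.diag([1.0, 0.0]).astype(complex)        # |0><0|
plus_x = 0.5 * np.array([[1, 1], [1, 1]], complex)  # commutes with sigma_x
print(pure_state_stabilizable(ground, sm))   # True
print(pure_state_stabilizable(plus_x, sm))   # False
```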
As a corollary of Theorem 3 and Proposition 2, we present necessary and sufficient conditions for engineering a generic attractive quantum subsystem (with a non-trivial co-factor). We start with a Lemma, which is a straightforward specialization of Proposition 5 in [8]:
###### Lemma 3
Assume that $\mathcal{H}_I=(\mathcal{H}_S\otimes\mathcal{H}_F)\oplus\mathcal{H}_R$ ($\dim\mathcal{H}_F\geq 2$), and consider a QDS of the form (5). If the QDS admits at least two invariant states, then the subsystem $\mathcal{S}$ is not attractive.
###### Theorem 4 (Feedback attractive subsystems)
Let $\mathcal{H}_I=(\mathcal{H}_S\otimes\mathcal{H}_F)\oplus\mathcal{H}_R$, with $\dim(\mathcal{H}_F)\geq 2$, and assume CHC capabilities. Then for any measurement operator $M$, with Hermitian part $M_H$, there exist a feedback Hamiltonian $F$ and a Hamiltonian compensation $H_c$ that make the subsystem $\mathcal{S}$ (supported by $\mathcal{H}_{SF}=\mathcal{H}_S\otimes\mathcal{H}_F$, with $\Pi_{SF}$ the corresponding orthogonal projection) attractive for the FME (10) iff the following conditions hold:
1. $[\Pi_{SF},\,M_H]\neq 0,\qquad(13)$
2. $\Pi_{SF}M_H\Pi_{SF}^\dagger=I_S\otimes C_F\ \ \text{or}\ \ C_S\otimes I_F,\qquad(14)$
3. $\Pi_{SF}M_H\Pi_{SF}^\dagger\neq\lambda I_{SF},\quad\forall\lambda\in\mathbb{C}.\qquad(15)$
Proof. By Theorem 3, condition (13) is necessary and sufficient to render $\mathcal{H}_{SF}$ attractive, which is a necessary condition for attractivity of $\mathcal{S}$: in fact, if this is not the case, by Theorem 1 there would exist an invariant subsystem whose support is contained in $\mathcal{H}_R$. To ensure invariance of $\mathcal{S}$, by Corollary 1, the $SF$-block of the noise operator $\tilde M=M-iF$ has to satisfy $\tilde M_{SF}=I_S\otimes \tilde M_F$ or $\tilde M_{SF}=\tilde M_S\otimes I_F$ (or both). Thus, both the Hermitian and anti-Hermitian parts of $\tilde M_{SF}$ must have the same structure. The Hermitian part of $\tilde M_{SF}$ is equal to the Hermitian part of $\Pi_{SF}M\Pi_{SF}^\dagger$, whereby it follows that (14) is necessary for invariance of $\mathcal{S}$. Assume $\Pi_{SF}M_H\Pi_{SF}^\dagger=I_S\otimes C_F$ (the other case may be treated in a similar way, by interchanging the roles of $S$ and $F$ in what follows). If (15) is not satisfied, then $\tilde M_{SF}$ must be unitarily similar to a diagonal matrix for any choice of $F$ that ensures invariance of $\mathcal{S}$. Hence, the dynamics restricted to $\mathcal{H}_{SF}$ admits at least two different stationary states ($\dim(\mathcal{H}_F)\geq 2$ by hypothesis). By Lemma 3, we conclude that $\mathcal{S}$ cannot be attractive. Conversely, if i) holds, following the proof of Theorem 3, we can devise a Hamiltonian correction and a feedback Hamiltonian for which $\mathcal{H}_{SF}$ is attractive. Since the $SF$-block of the feedback is irrelevant to this stage, it may be further chosen to render a pure state of the co-factor attractive for the reduced dynamics. Assume ii) and iii), with $C_F$ different from a scalar matrix (again, to treat the other case, it suffices to switch the appropriate subscripts in what follows). Then there exists a one-dimensional projector $\rho_F$ such that $[\rho_F,C_F]\neq 0$, and by Corollary 2 we can find feedback and correction blocks that render it attractive. By choosing the Hamiltonian control so that these blocks are enacted on $\mathcal{H}_{SF}$, the stated conditions are also sufficient for the existence of attractivity-ensuring controls.
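Conditions (14) and (15) can be tested numerically by comparing the compressed operator $A=\Pi_{SF}M_H\Pi_{SF}^\dagger$ with the tensor forms reconstructed from its partial traces. A sketch (our helper; $d_S=\dim\mathcal{H}_S$, $d_F=\dim\mathcal{H}_F$):

```python
import numpy as np

def check_tensor_form(A, d_S, d_F, tol=1e-9):
    """Test A == I_S (x) C_F or A == C_S (x) I_F, and A != lambda*I."""
    B = A.reshape(d_S, d_F, d_S, d_F)
    C_F = np.einsum('iaib->ab', B) / d_S    # partial trace over S
    C_S = np.einsum('iaja->ij', B) / d_F    # partial trace over F
    form_F = np.allclose(A, np.kron(np.eye(d_S), C_F), atol=tol)
    form_S = np.allclose(A, np.kron(C_S, np.eye(d_F)), atol=tol)
    d = d_S * d_F
    scalar = np.allclose(A, (np.trace(A) / d) * np.eye(d), atol=tol)
    return (form_F or form_S) and not scalar
```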
## 5 Applications
The following examples will serve to exemplify the application of our stabilization results to prototypical finite-dimensional control systems, which are also of direct relevance to quantum information devices. Different scenarios may arise depending on whether the target system is (or is regarded as) indecomposable, or explicit reference to a decomposition into subsystems is made.
### 5.1 Single systems
Example 1: Consider a single qubit on $\mathcal{H}_I\simeq\mathbb{C}^2$, with uncontrolled dynamics specified by the FME (10), with $H=0$ and $M=\sigma_x$. Assume we wish to stabilize $\rho_d=|0\rangle\langle 0|$. Since $[\rho_d,M+M^\dagger]\neq 0$, this is possible. Following the procedure in the above proof, consider $F=-\sigma_y$, so that
$$L=M-iF=\sigma_x+i\sigma_y=2\,|0\rangle\langle 1|,$$
and $H+\frac{1}{2}(FM+M^\dagger F)=0$, since $\{\sigma_x,\sigma_y\}=0$. Substituting in the FME (10), one obtains the desired result, as can also be directly verified by using Proposition 7 in [8].
Assume, more generally, that it is possible to continuously monitor an arbitrary single-spin observable. Since the choice of the reference frame for the spin axis is conventional, by suitably adjusting the relative orientation of the measurement apparatus and the sample, it is in principle possible to prepare and stabilize any desired pure state with a similar control strategy.
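A quick numerical illustration of Example 1, reusing the `fme_generator` and `liouvillian` sketches above (all operator choices as in the example; $\hbar=1$):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)

H_eff, L = fme_generator(np.zeros((2, 2), complex), sx, -sy)  # L = 2|0><1|
Lv = liouvillian(H_eff, [L])

rho0 = 0.5 * np.eye(2, dtype=complex)            # maximally mixed initial state
vec_t = expm(5.0 * Lv) @ rho0.reshape(-1, order="F")
print(vec_t.reshape(2, 2, order="F").round(6))   # ~ |0><0|
```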
Example 2: Consider a three-level system (a qutrit), whose Hilbert space carries a spin-1 representation of the spin angular momentum observables $J_x,J_y,J_z$. Without loss of generality, we may choose a basis in $\mathcal{H}_I$ such that the desired pure state to be stabilized is $\rho_d=\mathrm{diag}(1,0,0)$, and by CHC we may also ensure that the free Hamiltonian is compensated to zero. In analogy with Example 1, a natural strategy is to continuously monitor a non-diagonal spin observable, for instance:
$$J_x=\frac{\hbar}{\sqrt{2}}\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix}.$$
Since $[\rho_d,J_x]\neq 0$, the state is stabilizable. Choosing the feedback Hamiltonian as
$$F=-J_y=-\frac{\hbar}{\sqrt{2}}\begin{pmatrix}0&-i&0\\i&0&-i\\0&i&0\end{pmatrix},$$
yields
$$L=J_x+iJ_y=\sqrt{2}\,\hbar\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}.$$
Unlike the qubit case, $\{J_x,J_y\}\neq 0$, thus a Hamiltonian compensation $H_c=\frac{1}{2}\{J_x,J_y\}$ is needed to ensure that the total Hamiltonian $H_c+\frac{1}{2}(FM+M^\dagger F)$ vanishes. With these choices, it is easy to see that $\mathcal{H}_R=\mathrm{span}\{|1\rangle,|2\rangle\}$ does not support any invariant subsystem, hence $\rho_d$ is attractive.
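The qutrit construction can be verified along the same lines (again reusing the earlier sketches; the compensation is the anticommutator term derived above):

```python
import numpy as np

hbar = 1.0
Jx = (hbar / np.sqrt(2)) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], complex)
Jy = (hbar / np.sqrt(2)) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], complex)

Hc = 0.5 * (Jx @ Jy + Jy @ Jx)             # compensation, {Jx,Jy}/2
H_eff, L = fme_generator(Hc, Jx, -Jy)      # total Hamiltonian cancels to 0
P_S = np.diag([1.0, 0.0, 0.0]).astype(complex)

print(np.linalg.norm(H_eff))                   # ~ 0
print(kernel_supported_in(H_eff, [L], P_S))    # expected: True
```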
Provided that a similar structure of the observables is ensured, the previous examples naturally extend to generic $n$-level systems, as formally established in [8] by using Lyapunov techniques.
### 5.2 Bipartite systems
If a multipartite structure is specified on $\mathcal{H}_I$, it is both conceptually and practically important to understand whether stabilization of physically relevant classes of states (including non-classical entangled states) is achievable with control resources that respect appropriate operational constraints, such as locality. We focus here on the simplest setting offered by bipartite qubit systems, with emphasis on Markovian-feedback preparation of entangled states, which has also been recently analyzed within a quantum filtering approach in [36].
Example 3: Consider a two-qubit system defined on a Hilbert space $\mathcal{H}_I=\mathcal{H}_a\otimes\mathcal{H}_b$, with a preferred basis $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$ (e.g., the computational basis in quantum information applications). The control task is to engineer a QDS generator that stabilizes the maximally entangled “cat state”:
$$\rho_d=\frac{1}{2}(|00\rangle+|11\rangle)(\langle 00|+\langle 11|).$$
In order to employ the synthesis techniques developed above, we consider a change of basis such that in the new representation $\rho_d=\mathrm{diag}(1,0,0,0)$. A particularly natural choice is to consider the so-called Bell basis:
$$\mathcal{B}=\Big\{\tfrac{1}{\sqrt{2}}(|00\rangle+|11\rangle),\ \tfrac{1}{\sqrt{2}}(|00\rangle-|11\rangle),\ \tfrac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\ \tfrac{1}{\sqrt{2}}(|01\rangle-|10\rangle)\Big\}.$$
Let $U$ be the unitary matrix realizing the change of basis. In the Bell basis, which we use to build our controller, we consider the Hilbert space decomposition $\mathcal{H}_I=\mathcal{H}_S\oplus\mathcal{H}_R$, where $\mathcal{H}_S=\mathrm{span}\{\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)\}$ and $\mathcal{H}_R=\mathcal{H}_S^\perp$, and the associated matrix block decomposition.
Let us consider $M=\sigma_z\otimes I$ in the canonical basis. It is easy to verify that $[\Pi_S,(M+M^\dagger)]\neq 0$. In the Bell basis, $M'=UMU^\dagger=I\otimes\sigma_x$, and $\rho'_d=U\rho_dU^\dagger=\mathrm{diag}(1,0,0,0)$. If, in this basis, we are able to implement the feedback Hamiltonian $F'=-I\otimes\sigma_y$ (where now the tensor product should simply be meant as a matrix operation), we render $\mathcal{H}_S$ invariant, yet obtaining $\tilde M=M'-iF'=2(I\otimes\sigma_+)$ with $\tilde M_P\neq 0$, where $\sigma_+=\frac{1}{2}(\sigma_x+i\sigma_y)$. Direct computation yields $F=\sigma_y\otimes\sigma_x$ back in the computational basis. With this choice, using the definitions in the proof of Theorem 2, we have:
$$\mathcal{H}_{R'}=\mathrm{span}\Big\{\tfrac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\ \tfrac{1}{\sqrt{2}}(|01\rangle-|10\rangle)\Big\},$$
and $\mathcal{H}_{R'}$ is itself invariant. Hence, we need to produce a control Hamiltonian to “destabilize” it. By inspection, we find that $\mathcal{H}_{R'}$ contains a proper subspace, $\mathrm{span}\{\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle)\}$, that supports an invariant and attractive state for the dynamics reduced to $\mathcal{H}_{R'}$. To “connect” this state to the attractive domain of $\rho_d$, we need a non-trivial Hamiltonian coupling between $\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle)$ and $\frac{1}{\sqrt{2}}(|00\rangle-|11\rangle)$. This may be obtained by a control Hamiltonian $H_c=\sigma_y\otimes I+I\otimes\sigma_y$ in the standard basis, which completes the specification of the control strategy that renders $\rho_d$ the unique attractive state for the dynamics. Notice that both the measurement and the Hamiltonian compensation can be implemented locally, which may be advantageous in practice.
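As a sanity check of the operator choices above (local measurement $\sigma_z\otimes I$, feedback $\sigma_y\otimes\sigma_x$, local compensation $\sigma_y\otimes I+I\otimes\sigma_y$), one can verify numerically that the cat state spans the kernel of the controlled Liouvillian, again reusing the earlier sketches:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

M  = np.kron(sz, I2)                      # local measurement
F  = np.kron(sy, sx)                      # feedback, computational basis
Hc = np.kron(sy, I2) + np.kron(I2, sy)    # local Hamiltonian compensation

H_eff, L = fme_generator(Hc, M, F)
phi = np.zeros(4, complex)
phi[0] = phi[3] = 1 / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
P_S = np.outer(phi, phi.conj())

print(kernel_supported_in(H_eff, [L], P_S))   # expected: True
```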
This example suggests how our results, obtained under CHC assumptions, may be used to explore compatibility with existing control constraints. A further illustration comes from the following example.
Example 4: Consider again the above two-qubit system, but now imagine that we can only implement “non-selective” measurement and control Hamiltonians, i.e., operators that commute with the operation that swaps the two qubit states. It is then natural to restrict attention to the dynamics in the three-dimensional subspace generated by the triplet states, which correspond to the eigenvalue $2\hbar^2$ ($j=1$) of the total spin angular momentum $\mathbf{J}^2$ [10]:
$$\mathcal{H}_{J=1}=\mathrm{span}\Big\{|00\rangle,\ \tfrac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\ |11\rangle\Big\}.$$
Notice that $\mathcal{H}_{J=1}$ corresponds to the fixed subspace with respect to the swap operation.
Our goal is to engineer a FME such that the maximally entangled state $\rho_d=\frac{1}{2}(|01\rangle+|10\rangle)(\langle 01|+\langle 10|)$ is attractive for the dynamics restricted to $\mathcal{H}_{J=1}$. Consider a collective measurement of spin along the $x$-axis, described by $M=J_x=\frac{\hbar}{2}(\sigma_x\otimes I+I\otimes\sigma_x)$, restricted to $\mathcal{H}_{J=1}$. Upon reordering the triplet vectors so that in the new (primed) basis the target state comes first and the $z$-projection ranges over $(0,+1,-1)$, we have:
$$J'_x=\frac{\hbar}{\sqrt{2}}\begin{pmatrix}0&1&1\\1&0&0\\1&0&0\end{pmatrix}.$$
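A direct numerical construction of the reordered, restricted observable (a sketch with $\hbar=1$; basis order $\{\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle),|00\rangle,|11\rangle\}$ as above):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
I2 = np.eye(2, dtype=complex)
Jx = 0.5 * (np.kron(sx, I2) + np.kron(I2, sx))   # collective spin-x, hbar = 1

e = np.eye(4, dtype=complex)                     # |00>, |01>, |10>, |11>
triplet = np.column_stack([(e[1] + e[2]) / np.sqrt(2), e[0], e[3]])

Jx_triplet = triplet.conj().T @ Jx @ triplet
print(np.round(np.sqrt(2) * Jx_triplet, 6))      # [[0,1,1],[1,0,0],[1,0,0]]
```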