url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/chapter-5-relationships-within-triangles-5-4-medians-and-altitudes-practice-and-problem-solving-exercises-page-313/25 | ## Geometry: Common Core (15th Edition)
Published by Prentice Hall
# Chapter 5 - Relationships Within Triangles - 5-4 Medians and Altitudes - Practice and Problem-Solving Exercises - Page 313: 25
#### Answer
$\overline{BD}$
#### Work Step by Step
An altitude is a perpendicular line segment drawn from the vertex of a triangle to a side opposite the vertex. An altitude for $\triangle ABC$ is $\overline{BD}$ because it is a perpendicular line segment that originates at vertex $B$ and contacts $\overline{AC}$, which is the side opposite vertex $B$.
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2020-09-30 21:51:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3280019164085388, "perplexity": 1381.4227563065976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00549.warc.gz"} |
https://math.stackexchange.com/questions/3377481/midterm-pigeon-hole-question | # Midterm pigeon hole question
A midterm exam consists of $$5$$ problems. Students who solve two of those problems correctly get a passing grade. There are $$32$$ students in the class, and only $$8$$ students passed. Prove that one of the problems was solved correctly by at most $$12$$ students.
So $$16$$ questions were solved correctly, so at least one problem was solved at least $$3$$ times by the pigeon hole principle. Where do the $$12$$ students come from? I mean can't the $$24$$ students solve $$0$$ questions or do I assume that at least each student solve one question? Thanks.
• The assumption that $16$ questions were solved correctly is incorrect. What if some people only got $1$ question correct? They still would've failed. – Landuros Oct 2 '19 at 2:04
• 8 students passed and they need to solve two correctly, so 16 questions must be correct or am I misunderstanding? – Jimmy Ceh Oct 2 '19 at 2:06
• You need to consider the worst case scenario. E.g. If all 8 students who passed answered all 5 problems correctly, can we still find a problem that was solved correctly by at most 12 students? – Calvin Lin Oct 2 '19 at 2:15
• If the conclusion is false, what is the smallest number of correct answers that could have been given? – saulspatz Oct 2 '19 at 2:21
Suppose not. Then all problems were solved by atleast $$13$$ students and therefore, the class solved atleast $$65$$ problems.
But only $$8$$ students passed, hence maximum number of problems that were done correctly are $$8\times 5+24=64$$, we have reached the desired contradiction.
Based on what you wrote, it is not true that "only 16 questions were solved correctly". What we know is that "at least 16 questions were solved correctly".
It might be possible that $$8\times5 = 40$$ questions were solved correctly, like if those who passed got all questions correct.
Hint: Consider the $$32-8=24$$ people who failed. Can we guarantee that there is a problem that is solved by at most 4 people?
Corollary: That problem is solved by at most $$4+8=12$$ people.
• What if the 24 people solved 0 questions. Then wouldn't one problem be solved by at most 8 students? – Jimmy Ceh Oct 2 '19 at 2:49
• So what? There being a problem solved by at most 8 students does not contradict the fact that there is a problem solved by at most 12 students. – Calvin Lin Oct 2 '19 at 2:50
• Do you understand my comment of "You need to consider the worst case scenario"? Currently, you are proving the statement for the best case scenario. – Calvin Lin Oct 2 '19 at 2:51 | 2021-05-14 10:10:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5949463248252869, "perplexity": 359.53698830122613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990449.41/warc/CC-MAIN-20210514091252-20210514121252-00544.warc.gz"} |
https://www.oecd-ilibrary.org/sites/eco_surveys-usa-2016-3-en/index.html?itemId=/content/component/eco_surveys-usa-2016-3-en | # Assessment and recommendations
Seven years after the financial crisis, the United States is making a comeback. The US economic recovery, while modest by historical standards, has been one of the strongest in the OECD, thanks to robust monetary policy support and an early fiscal expansion. Many private-sector jobs have been created, pushing unemployment down to its pre-crisis level, thereby providing consumers with higher income and improving their confidence. Further economic growth at a pace near 2% a year is likely in the short term, while a new recession is a low-probability prospect in the current environment. But a number of long-term challenges remain unresolved. In particular, the slowdown of productivity growth already apparent since the mid-2000s has continued in recent years. Faster productivity growth – supported by well-designed investments in innovation, infrastructure, skills and inclusiveness – would help to address future challenges such as rising income inequality, population ageing and fiscal sustainability. Against this background, this report focuses on:
• How to support a sustainable expansion by using fiscal and structural policies, so as to lighten the burden on monetary policy and to facilitate a normalisation of interest rates;
• How to boost productivity growth by bolstering competitive forces on market incumbents, combined with well-designed investments in innovation, skills, infrastructure and environmental protection.
• How to make growth more inclusive by enabling the acquisition of appropriate skills, eliminating obstacles to employment and enabling individuals to fulfil their potential.
## After the recovery, growth is likely to remain moderate
Output has recovered, albeit more slowly than in previous expansions (Figure 1). The slow speed of the recovery reflects the severity and depth of the financial crisis, fiscal consolidation, the exit of baby boomers from the labour market, weaknesses in key OECD economies, and, more recently, world trade stagnation induced by the slowdown of China and lower demand from oil-exporting countries.
While activity is, on average, well above pre-crisis peaks, the revival does not prevail everywhere. The recovery has been particularly robust in some locations, but activity remains low in other areas. Some industries have performed strongly (software, telecommunications, pharmaceutical products), whilst growth in many other areas and industries remains mired in the doldrums. The diversity in economic outcomes is reflected in income inequality, which continues to increase.
The recovery has been sustained mainly by mutually-reinforcing gains in employment, income and household spending. Declines in energy prices – which began when oil and natural gas became available from unconventional sources – have boosted household purchasing power, providing an additional lift to consumption. However, the impetus from these influences is unlikely to be sustained without a meaningful pickup in real wage growth. Meanwhile, business fixed investment has expanded steadily in comparison to the rest of the OECD, reflecting the strong recovery of business output, although booming conditions in the domestic energy sector that prevailed through late 2014 have come to a sudden halt.
Weak global demand and a stronger dollar have created powerful headwinds for firms exposed to international competition. The effective exchange rate has appreciated sharply since mid-2014 in real effective terms, thus exerting a drag on exports (Figure 3). Steps to expand international trade treaties could support greater US and global demand over time. The recently concluded Trans Pacific Partnership is expected to lift US real incomes by around ½ per cent of GDP by 2030 when it is fully implemented (Petri and Plummer, 2016), and negotiations for a Transatlantic Trade and Investment Partnership with the European Union and a number of other negotiations are ongoing.
Economic growth is projected to continue at an annual pace of about 2% in 2016 and 2017 (Table 1; Figure 2, Panel B). Fiscal policy is assumed to have a neutral impact after several years of budget consolidation. Monetary conditions are assumed to remain highly accommodative, even though the Federal Reserve is no longer expanding its balance sheet and has begun to gradually raise interest rates from very low levels. A new recession is an unlikely prospect in the near term on the basis of existing information (Box 1). Nonetheless, low-probability but extreme events (Box 2) should not be overlooked by policymakers. With monetary policy levers persistently set at highly accommodative settings to achieve mediocre growth, the scope for policy to respond aggressively to adverse shocks is limited.
Table 1. Macroeconomic indicators and projections
Annual percentage change, volume (2009 prices)
2012
2013
2014
2015
2016
2017
Current prices (USD billion)
Gross domestic product (GDP)
16 155
1.5
2.4
2.4
1.8
2.2
Private consumption
11 051
1.7
2.7
3.1
2.7
2.1
Government consumption
2 544
-2.5
-0.5
0.4
0.5
0.8
Gross fixed capital formation
3 064
2.4
4.1
3.7
2.4
4.5
Housing
442
9.5
1.8
8.9
10.1
7.5
2 008
3.0
6.2
2.8
-0.1
4.1
Government
614
-4.8
-1.1
2.3
4.1
2.9
Final domestic demand
16 659
1.2
2.5
2.8
2.3
2.4
Stockbuilding1
62
0.1
0.1
0.2
-0.3
0.0
Total domestic demand
16 721
1.2
2.5
3.0
2.0
2.4
Exports of goods and services
2 198
2.8
3.4
1.1
0.4
3.5
Imports of goods and services
2 764
1.1
3.8
4.9
1.9
4.3
Net exports1
- 566
0.2
-0.2
-0.7
-0.2
-0.2
Other indicators (growth rates, unless specified)
Potential GDP
1.7
1.7
1.7
1.6
1.5
Output gap2
-3.5
-2.8
-2.0
-1.8
-1.2
Employment
1.0
1.6
1.7
2.1
1.5
Unemployment rate
7.4
6.2
5.3
5.0
4.7
GDP deflator
1.6
1.6
1.0
1.4
2.1
Consumer price index
1.5
1.6
0.1
1.1
2.0
Core consumer prices
1.5
1.5
1.3
1.7
1.8
Household saving ratio, net3
4.8
4.8
5.1
5.2
4.5
-4.2
-4.3
-4.2
Current account balance4
-2.3
-2.2
-2.7
-2.5
-2.5
General government fiscal balance4
-5.5
-5.1
-4.4
-4.3
-3.7
Underlying government primary fiscal balance2
-1.6
-1.1
-0.8
-0.6
-0.3
General government gross debt 4
111.4
111.7
113.6
114.2
114.2
General government net debt4
87.7
87.4
88.5
90.0
90.0
Three-month money market rate, average
0.3
0.3
0.5
0.9
1.4
Ten-year government bond yield, average
2.4
2.5
2.1
2.2
3.0
Memorandum items
Federal budget surplus/deficit4
-4.1
-2.8
-2.5
Federal debt held by the public4
72.6
74.4
73.7
1. Contribution to changes in real GDP.
2. As a percentage of potential GDP.
3. As a percentage of household disposable income.
4. As a percentage of GDP.
Source: OECD (2016), OECD Economic Outlook 99 database and The White House: Office of Management and Budget.
Box 1. Recession risks appear limited
Cross-country empirical studies on economic resilience using historical data from the OECD have highlighted a constellation of variables that have each been associated with past cyclical downturns (Hermansen and Röhn 2015; Röhn et al., 2015). Broadly speaking, these models associate the probability of recession to individual indicators of potential imbalances in domestic asset and credit markets, as well as in global markets and international trade. However, when the indicators identified by these studies are used to examine the US economy on its own, the predictive content of each variable in isolation is quite poor, with numerous false positives and false negatives. To downplay the noise from each indicator separately and focus on their collective signaling content, we used principal component analysis to extract three factors that appear to have some predictive power for the last three of the last five NBER-defined US recessions (Figure 4) within sample.
Figure 6(left panel) shows estimates of the recession probability at horizons of 2, 4, 8, and 12 quarters using models estimated with quarterly data for these three components over the entire time span from 1975 to 2015. These models show elevated recession probabilities around the time of most downturns but are still subject to errors – such as failing to predict the 2001 downturn. Estimates from recent quarters suggest that the vulnerability to recession has risen of late but is well below previous episodes. Nonetheless, models estimated using historical data are at risk of over fitting. Figure 4 (right panel) shows the same model applied out of sample in real time. The predictive performance deteriorates somewhat relative to the in-sample estimates, often indicating a high probability of a downturn with a delay after the event or even after the recovery had begun. Not surprisingly, the real-time models also point to very little likelihood that a recession is imminent.
Box 2. Low-probability vulnerabilities
Vulnerability
Possible outcome
An intensification of geo-political tensions and threats of terrorist activity
Heightened insecurity could undermine consumer confidence. Addressing potential threats would likely require substantial public spending and may disrupt economic activity, notably through tighter border controls.
A retreat from internationalism
A broad retreat from internationalism may give rise to increased protectionist behavior, leading trade to shrink and jeopardising economic growth.
Financial market meltdown
Exposure of systemically-important financial institutions to major shocks emanating from domestic financial markets or abroad could ultimately require the authorities to intervene to ensure financial market stability and could result in another recession.
Intensified weather variability and storm activity
Coastal areas are already heavily exposed to sometimes devastating storm damage. Extreme natural disasters may have long-term negative effects on local economies (e.g., Katrina) and require large responses in disaster relief, putting a strain on State and federal fiscal positions.
Political gridlock
A return to past difficulties in forging consensus on the budget and economic policy more broadly may result in gridlock. Risks of default on federal debt or underfunding of essential activities could result in sharp shocks to the economy and financial sector.
Private-sector job creation has been the most welcome aspect of the recovery (Figure 5). The unemployment rate has come down substantially and long-term unemployment has decreased further than in other countries. Labour-market participation has also begun to recover recently, although it remains on a declining trend due to the retirement of baby-boomers.
Notwithstanding low unemployment, inflation is expected to remain stubbornly low, partly due to transitory downward pressure from the recent appreciation in the dollar and falling energy prices, but also due to the flattening of the Phillips curve (Figure 6, Box 3). Measures of core inflation are higher, but still below the Federal Reserve’s inflation 2% target for PCE price inflation. Indicators signal little to no risk of an emerging inflationary spiral, with measures of inflation expectations showing hints of tailing downward. Nominal wage growth remains slow, although there are signs of modest upward pressure.
Box 3. Real-time uncertainty in assessing inflationary pressures
The magnitude of the gap between output and potential GDP (the output gap) can be difficult to assess in real time for many countries – including the United States. These difficulties are evident in Figure 7, which compares estimates of the output gap across OECD Economic Outlooks from 2004 to 2016. Indeed, OECD Economic Outlooks initially suggested that the magnitude of the US output gap was fairly modest in the years leading up to the Great Recession in 2008/2009. The existence of an inflationary gap prior to the crisis only became substantial years later, in retrospect. This experience underscores the substantial uncertainties involved in assessing the current output gap.
### An exit from unconventional monetary policy has started
The expectation of Federal Reserve’s FOMC members is that inflation will rise slowly toward the target. The central bank has stopped adding to its balance sheet through bond purchases and started the process of normalising interest rates. Further increases in interest rates would be warranted in line with inflation becoming more consistent with the Fed’s inflation target, though at a pace so as not to jeopardise the recovery. As the target is symmetric, inflation could run temporarily higher than 2%.
The exit from unconventional monetary policy could be facilitated by fiscal policy taking a greater role in supporting domestic demand through well-targeted public investment. Structural policies designed to boost productivity growth and the size of the labour force would also facilitate the normalisation of monetary policy by raising potential output growth and the neutral interest rate. These policies would create space of monetary policy reacting to adverse shocks, and they would reduce the risk of hitting the lower bound.
### Preserving financial stability requires introducing macro-prudential tools
Large global US banks have mostly recovered from the crisis. While US banks overall are less well capitalised than those of many other OECD countries, when measuring capital and adequacy using risk-based capital metrics (Tier 1 risk-based capital ratios) (Figure 8, Panel A), the large global US banks are about as well capitalised as similarly large and complex banks from other OECD countries and are more highly capitalised than peers when measuring capital adequacy using a globally consistent leverage ratio measure, which controls for differences in risk-weighted assets (see comparison of Basel III leverage ratios)(Figure 8, Panel B). In addition, while the concentration of financial activity in a handful of large, global banks has increased compared to the pre-crisis years, the overall share of assets held by the six largest banks has been declining since 2010. Moreover, since 2010, the largest and most complex banks have shed assets, reduced reliance on less stable sources of funds, and significantly strengthened their capital and liquidity buffers, which has reduced risk at these banks. The authorities have been working to mitigate risks, particularly for the large, complex banks (Table 2). These steps include rules to improve funding resilience, restrict financial interconnectedness and improve the ability of regulators to resolve these firms. Work is ongoing on introducing counter-cyclical capital buffers. In addition, robust and dynamic stress testing increases the vigilance of the authorities with respect to financial stability. Finally, the Financial Stability Oversight Council was introduced with a mandate to assess and respond to systemic financial stability threats. Nonetheless, the fragmented nature of the financial regulatory system remains unaddressed which may complicate taking necessary macro-prudential policy measures (Kohn, 2014). Another possible weakness are limits on the Federal Reserve to act as a lender of last resort outside the banking sector. Against this background, it is warranted to reduce fragmentation amongst regulators and ensure substantial capital buffers, particularly in banks that are too big to fail, while macro-prudential policy remains underdeveloped.
Table 2. Past OECD recommendations on monetary and financial policy
Recommendation
Actions taken since the 2014 Survey
Gradually reduce and ultimately remove monetary accommodation as the economy approaches full employment and inflation returns to the Fed’s 2% target.
The process of raising policy rates began in December 2015, though policy remains appropriately accommodative.
Continue to roll out macro-prudential policy tools, including those associated with the Dodd-Frank Act and those addressing vulnerabilities in wholesale funding, repo market and money-market mutual funds.
Capital requirements for systemically important banks are substantially higher than before the crisis, stress tests have been implemented to reveal vulnerabilities, and regulations require systematically important institutions to form “living wills” to avoid a disorderly unwinding in the case of failure. New rules on securitisation and money market funds as well as enhanced transparency apply to the shadow banking sector.
Reform the housing finance system to ensure access to mortgage credit by creditworthy homebuyers while providing better guarantees of financial stability and avoiding again exposing taxpayers to costly bailouts.
Several housing finance reform proposals have been made, but none progressed past the committee stage in Congress.
Leave the securitisation of mortgages to the private sector. This would entail privatising the Government Sponsored Enterprises, cutting off their access to preferential lending facilities with the federal government, subjecting them to the same regulation and supervision as other issuers of mortgage-backed securities, and dividing these entities into smaller companies that are not too big to fail.
Fannie Mae and Freddie Mac remain under government stewardship. The Senate Banking Committee passed in May 2014 a bipartisan proposal (“Johnson-Crappo GSE reform”) seeking to reform the housing finance system, create greater competition and reduce taxpayer risk, while ensuring affordable fair access to all creditworthy homebuyers. The proposal has not gone beyond the committee stage.
The housing market is showing signs of recovery. Residential house prices have increased and are exceeding pre-crisis levels in nominal terms in a handful of areas. However, price-to-rent ratios remain below the pre-crisis peak (Figure 9). In addition, loan write-offs and household spending restraint have helped put household balance sheets in a stronger position overall than prior to the crisis (Figure 10). Mortgage debt growth remains subdued, in part because government sponsored enterprises (GSEs) Fannie Mae and Freddie Mac have taken measures to bolster risk sharing with the mortgage originators when they purchase loans. Their regulator, the Federal Housing Finance Agency, has also imposed tighter prudential standards for the loans they can purchase. A number of reforms to the GSEs have been proposed, though none have made it into legislation (Table 2).
### The federal deficit has declined, making space for higher public investments
After having peaked at 10.5% of GDP in 2009, the general government budget deficit narrowed to 4.4% in 2015, reflecting both the improving economy and a period of sustained and substantial consolidation since 2011. Almost all of this consolidation occurred at the federal level, with the federal deficit falling from a peak of 9.75% of GDP to only 2.5% in fiscal year 2015 (Figure 11). Given current concerns about growth prospects and inequality, more supportive fiscal policy is appropriate. Measures to support firm creation, skill formation, innovation and infrastructure provision would likely help productivity (Auerbach and Gorodnichenko, 2013; Abiad et al., 2013; Delong and Summers 2012). The President’s budget proposal for fiscal year 2017 presents a package of measures intended to raise spending on infrastructure and other areas, while increasing tax revenues, including by limiting the value of regressive tax expenditures (CBO, 2016). If additional fiscal policy support were co-ordinated internationally, the multiplier effect on GDP would be substantially larger.
From 2013 to 2015, ongoing political brinkmanship resulted in a government shutdown and episodes of bond market volatility. Recently, Congress and the Administration reached an agreement that reduced short-term uncertainty. Congress suspended the federal debt ceiling until March 2017 and approved the Bipartisan Budget Act of 2015 that fully funded the government during 2016. Further demonstration of such bipartisanship would be beneficial, enhancing financial stability and helping progress towards long-term fiscal sustainability.
Public investment, such as temporary infrastructure spending, would increase the federal deficit in the short term, but need not have a detrimental impact on the projected trajectory of the public debt-GDP ratio if it is high quality and therefore enhances long-term productivity. Increased long-term spending commitment, such as education and training, would need to be funded by higher revenue, such as reducing regressive tax exemptions and introducing green taxes, as recommended in previous OECD Economic Surveys and suggested in the Administration’s proposed 2017 Budget (Box 4). Even though healthcare spending has slowed down recently (Box 5), it remains a long-term concern that needs to be addressed, including with the implementation of an excise tax on high-cost health insurance plans, which has been delayed (Table 3). The CBO projects that under current law the federal budget deficit will increase from around 3% to almost 5% of GDP from 2016 to 2026, gradually pushing up the total amount of federal debt held by the public by about 10 percentage points to 86% of GDP (Figure 13). In the absence of fiscal policy changes, debt to GDP would be on an exponential path in the longer term. Building on the CBO baseline, OECD projections suggest that somewhat slower healthcare spending growth (i.e. assuming that more of the slowdown is structural) would still place the debt-to-GDP ratio on an upward path. By contrast, an acceleration of labour productivity growth from the assumed 1.4% annually to 2% annually (the historical norm) would push down the federal debt-to-GDP ratio to 75% by 2026, assuming lower health-care spending.
Box 4. The proposed FY2017 budget
The FY2017 Budget proposal would provide a boost to spending with accompanying revenue measures that would reduce the budget deficit and federal debt held by the public in comparison with projections based on current law. By the end of the projections in 2026, federal debt would reach 77.4% of GDP rather than 85.6% of GDP under current law (CBO, 2016). The proposal builds on the bipartisan budget agreement, adhering to the discretionary levels provided for 2017 and prevents the return to sequestration thereafter, while also putting forward paid-for mandatory investments to underpin economic growth in the future and support innovation.
On the revenue side, an estimated $2.9 trillion of deficit reduction over 10 years comes from taxes, immigration reforms, and other proposals. The Budget proposes a number of reforms that would modernise the business tax code to make it fairer and more efficient by closing tax loopholes and reforming tax expenditures, including by reducing tax benefits for high-income households. A tax on oil would also be introduced. On the spending side, the proposed budget supports infrastructure and innovation. Investments in Building a 21st Century Transportation System amounting to$320 billion over 10 years are intended to support a multi-agency initiative to build a clean transportation system. Overall, the 21st Century Clean Transportation Plan would increase American investments in clean transportation infrastructure by roughly 50 % above current levels. The budget also calls for $32 billion per year over 10 years to support innovative programs that make communities more livable and sustainable. The Budget proposes a number of initiatives to improve access to high-quality early childhood education, which has been supported in past Economic Surveys. Notably, the budget would provide funding to expand access to high-quality care to more than 1.1 million additional children under age four by 2026. In addition, the Budget proposes to help States implement changes required by the new bipartisan Child Care and Development Block Grant Act of 2014 and for competitive pilot projects to help build a supply of high-quality child care in rural areas and during non-traditional hours. The Preschool for All initiative would give all four-year olds from low- and moderate-income families access to high-quality pre-school. The budget also proposes to make college education more affordable and encourage completion. Finally, the Budget includes roughly$375 billion of health savings that grow over time and builds on the Affordable Care Act with further incentives to improve quality and control health care cost growth.
Box 5. Potential lessons from healthcare spending in OECD countries
Disentangling cyclical drivers in healthcare spending from the broader trend is particularly difficult for a single country. Examining common trends in spending across many countries may help separate the roles of cyclical effects and policy measures. Previous OECD studies (Lorenzoni et al., 2014) show that the slowdown of healthcare spending in the United States is broadly consistent with patterns in a number of other OECD countries.
One way to assess the common trend in healthcare expenditure growth is to estimate aggregate effects from using a cross-country panel regression, whilst controlling for country-specific fixed effects. The red line in Panel A of Figure 12 shows yearly aggregate effects from such a regression, estimated using available data from the OECD’s Health Spending Accounts (HSA) for 21 countries from 1996 to 2013. The dependent variable in this regression is the annual growth rate in per-capita healthcare expenditures, which is converted to purchasing power units using the price deflator for actual individual consumption (which adds in-kind government benefits to private consumption). This plot suggests that a spending deceleration gradually took hold in the early-2000s and then intensified around the time of the financial crisis. Insights about the sources of this deceleration can be gained by decomposing spending growth into annual contributions from the quantity of healthcare consumed per insured person, the price of healthcare relative to the consumption deflator, and the proportion of the population covered by insurance. Since these contributions jointly account for overall per-capita spending growth, aggregate effects from regressions that include an identical set of controls will cumulate to each year’s overall aggregate effect. The decomposition shown by the bars in Panel A suggests that the gradual deceleration prior to the crisis was driven about equally by slowing in both the relative price and quantity of healthcare consumed, whereas the sharp post-crisis deceleration was mainly reflected in quantities. Decompositions from Lorenzoni et al. (2016) provide additional insights, showing that the spending deceleration to date is most evident in publicly-financed spending, which gradually slowed before leveling after of the crisis; by comparison, privately-financed spending growth ebbs steadily from the early 2000s onward. By function, the slowdown is most apparent in pharmaceuticals, with government-financed spending on curative and rehabilitative care category playing a secondary role.
The key question for many of these OECD countries – including the United States – is how much of the steep post-crisis falloff in healthcare spending growth is cyclical. To assess how cyclicality has contributed to the cross-country downtrend, we estimated separate sets of aggregate effects for overall per-capita spending growth using the same basic specification, with and without annual measures of the each country’s economic slack (measured using the unemployment gap from the OECD’s Fall 2015 Economic Outlook). The aggregate effects shown in Figure 12. Panel B suggest that the widening of slack after the crisis explains only some of the slowdown in the cross-country trend of healthcare spending.
Table 3. Past OECD recommendations on fiscal policy
Recommendation
Actions taken since the 2014 Survey
Fiscal policy needs to remain cautious and prepared to take actions to ensure longer-term sustainability.
There have been no large changes in fiscal policy.
Act towards rapid international agreement and take measures to prevent base erosion and profit shifting (BEPS).
The United States participated in the OECD/G20 Base Erosion and Profit Shifting (BEPS) Project, endorsed by the G20 Leaders in November 2015.
Increase reliance on consumption taxation.
No action taken
Make the personal tax system more redistributive by restricting regressive income tax expenditures.
The President’s proposed FY2017 Budget has measures to limit regressive tax expenditures, reform capital income taxation, and reconcile different tax bases.
Replace the health tax exclusion (i.e., the exclusion from taxable personal income and payroll tax of compensation paid in the form of health insurance cover) with subsidies that do not encourage overly-generous health plans (subject to minimum standards of coverage).
The 2010 Affordable Care Act included an excise tax that will be levied on high-cost health insurance plans starting in 2018, but now delayed to 2020. The Administration is continuing to develop and implement regulations on the tax on high-cost health insurance plans, the so called “Cadillac tax”.
Speed up the phased increase in the retirement age at which full social security benefits are paid from 65 to 67. Link the retirement age to active life expectancy thereafter. Reduce the replacement rate for higher earners and raise the Social Security tax cap.
No action taken. Recent research has revealed that life expectancy for low-income pensioners has remained static, undermining the case for an automatic link between average life expectancy and the retirement age.
## Achieving stronger long-term growth
Well-designed investments and structural policies would help to boost productivity and therefore long-term growth of living standards (present section). This would not be enough, however, to make growth more inclusive, which requires adequate social policies (next section).
### Investing in infrastructure
Public infrastructure has not kept pace with the economy (Figure 14). The marked slowdown in the growth of public investment has contributed to the deterioration in quality of existing infrastructure (Figure 15), as well as growing problems of congestion. Improving infrastructure provision would not only improve productivity and reduce congestion, but could also help to contain urban sprawl and environmental degradation. Low current interest rates make such investments even more desirable (Elmendorf and Sheiner, 2016).
Shortfalls in public infrastructure are notable in road transportation. The CEA (2014) reports estimates that traffic congestion imposes annual costs of $120 billion on households and around$30 billion on businesses. The main federal funding source for road transport, the Highway Trust Fund has required repeated injections from general revenue as the nominal (per-gallon) gasoline tax that was intended to fund road transport infrastructure has not been adjusted since 1993. In December 2015, the Fixing America’s Surface Transportation Act secured funding from general revenue until 2020. Better use of taxation, distance-based charges and congestion charges could help to address the funding needs in and tighten the links between road use (captured by fuel consumption) and congestion, accidents and pavement damage.
The CBO (2015) estimates that raising fuel taxes by roughly 10 cents per gallon to around 30 cents per gallon would cover spending commitments. Taxes on road use could also address externalities more effectively, for example by targeting heavy trucks, which account for just 4% of road users but represent almost one-quarter of the costs, mainly through damage to the road pavement (Austin, 2015). In this spirit, the Administration has proposed a $10 per barrel oil fee to fund infrastructure. As it becomes more expensive to build around congestion, implementing user tolls in the most heavily congested areas would help reduce congestion while providing funding to support needed expansion and improvement of the transport network. State and local governments make most decisions regarding infrastructure provision. New analysis shows co-ordination problems arise when projects require several governments to act together (Glocker and Ahrend, 2016). Under-provision can emerge when co-ordination is needed for infrastructure and service provision (such as mass transit), making cars indispensable in many cities. As a result, single passenger commutes by car, commute times and greenhouse gas emissions are often higher when compared with other cities. Furthermore, such problems can weigh on city-level productivity (Ahrend et al., 2015). The federal government has some ability to facilitate co-ordination. The Fixing America’s Surface Transportation (FAST) Act signed in December 2015 established the Nationally Significant Freight and Highway Projects competitive grant program aimed to support economically beneficial projects that will facilitate improved freight movement and set up an Innovative Finance Bureau designed to promote public-private partnership procurements of large-scale infrastructure projects through expanded technical assistance. Boosting the complementary approach developed in Partnership for Sustainable Communities would ensure multidimensional needs of residents and businesses are taken into account for infrastructure development. Investing in infrastructure would not only boost productivity growth, but it would also enhance socioeconomic opportunities. For example, access to fixed broadband telecommunications, as measured by subscriptions, is about average for the OECD, but generally at slower speeds and higher cost (Figure 16). Access to high-speed broadband varies markedly across the United States, undermining individual and firm opportunities in poorly served areas. Recent initiatives by the Administration including ConnectALL, ConnectHome, and ConnectED will help address the digital divide. The FCC has been promoting competition in the wireless market, and prices are now falling and quality is improving in markets where there is competition. In the fixed market, the FCC has addressed some barriers to competition and in 2015 pre-empted state-level prohibitions on municipalities creating their own networks to help boost competition. As potential for greater competition is emerging in the fixed-line broadband sector with new entrants beginning to create or augment existing networks, competition authorities should act to strengthen competition as they have for wireless broadband. ### Unleashing productivity Measured productivity growth has been unusually sluggish post crisis. Although the sluggishness is partly linked to the business cycle, the broader pattern reflects a slower pace of capital deepening and TFP growth, as well as, to a lesser extent, weaker labour quality growth (Figure 17). This happened despite the abundant flow of new information technology and rising automation, which hints that measurement difficulties may be playing some, albeit small, role. Business capital expenditure, which is needed to increase productivity, has been low even as corporate profitability is at multi-decade highs (Figure 18). Instead of investing, companies have opted to return earnings to shareholders through dividends and share buybacks, which account for a larger share of profits in comparison to the past (Gruber and Kamin, 2015). Average nonfarm business productivity growth decelerated about ¾ percentage point from 2009 to 2014 relative to the preceding five-year period, and weaker average contributions from capital deepening – down about 1½ percentage points from the earlier period – are more than sufficient to explain this overall slowdown. The aggregate health of the corporate sector obscures divergences between firms at the frontier of each industry, which are generally doing well, and non-frontier firms that are lagging behind. OECD firm-level analysis (which uses data that underrepresent US businesses) suggests a growing productivity divide between firms at the global level, which may in part be due to slower rates of knowledge diffusion across firms (Andrews et al., 2015). Studies using US specific data shed light on the productivity slowdown, revealing evidence of a substantial and persistent productivity divide across firms within detailed industry groupings, and of young firms not scaling up operations in response to profitability gains as vigorously as in the past (Decker et al., 2015). Furthermore, the rate of firm entry and exit, which has been a source of productivity gains on aggregate, has declined. The changing composition of the economy may also be contributing to slower productivity growth in a number of ways: • The composition of activity is shifting toward industries where increasing returns to scale are more important, thereby contributing to the marked differences in firm-level productivity. For example, the ability of larger (global) firms to better tailor (digital) technology to their needs – as opposed to relying on more standardised solutions - can provide firm-specific cost advantages, contributing to winner-takes-all outcomes and potentially blunting competitive pressure, especially if there are large barriers to entry. • Population ageing tends to shift activity toward lower-productivity industries that offer services required by seniors, such as long-term care, which can depress aggregate productivity growth. • Demographics can also hold back aggregate productivity growth by shifting the age composition of the workforce away from younger workers, who historically account for a greater share of new entrepreneurship. • Shifts in productivity and relative prices redirect resources away from industries that experience higher productivity growth (Baumol’s disease). The fact that relative prices tend to fall in industries with faster productivity gains (Figure 19) is consistent with such compositional shifts. #### Removing obstacles for small and new firms Reinvigorating firm creation could play an important role in countering productivity trends. New firm creation has been an important driver for productivity growth and also employment growth. A more dynamic business sector will also reduce mis-match in the labour market, and could offer opportunities for workers to improve their remuneration through job moves. Finally, by boosting competitive pressures, new firms can spur innovation and put downward pressure on prices, ultimately lifting well-being. Bankruptcy procedures can support new firm creation by capping potential losses for the entrepreneur, although at a potential cost of a higher risk premium levied by creditors. Reform of the personal bankruptcy code in 2005 strengthened creditors’ positions by introducing means testing during bankruptcy proceedings. Entrepreneurs with “high incomes” were no longer able to use Chapter 7 to surrender assets and gain a “fresh start” but were obliged to use Chapter 11 and propose a repayment plan, making debt discharge more difficult. A further restriction put limits on how quickly an entrepreneur could re-enter bankruptcy proceedings. The immediate effect of the reforms was to cut dramatically the number of bankruptcies filed, though struggling firms may have anticipated the change, boosting pre-reform numbers (Figure 20). Disincentives to file for bankruptcy other things being equal will slow how quickly resources are reallocated. A second effect of the reform has been to encourage incorporation (Paik, 2013). Since the reform, un-incorporated self-employment has declined by over 900 000 whereas incorporated self-employment has risen by 300 000. For unincorporated firms, States offering larger exemptions under Chapter 7 appear to have sustained more firm creation (Rohlin and Ross, 2016). These results suggest enabling “fresh starts” and making debt discharge less onerous might in some cases support firm creation. In contrast, other research suggests that stronger creditor protection may increase firm creation by making credit more readily available (Cerqueiro et al, 2016; Gropp et al., 1997). Patenting permits small firms to invest and benefit from subsequent commercialisation by larger firms, particularly in competitive markets with dominant incumbents. Patents also potentially provide collateral for financially-constrained firms. Empirical evidence suggests that new firms obtaining a patent subsequently experience stronger earnings and employment growth than those that did not. However, firms are sensitive to delays in the patenting process, which can hinder subsequently growth (Farre-Mensa et al., 2015). Delays in dealing with patent applications rose substantially during most of the 2000s, with the time taken from submission to action increasing by around 12 months to three years over the decade. After the introduction of the America Invents Act in 2012, the US Patent and Trademark Office made progress in addressing the application backlog and reducing the time for examiners to review applications and then subsequently either grant or deny a patent (targets for further reductions are already established). Furthermore, the patent fee was reduced for small firms. Legal uncertainties about patenting can create a second barrier to small firms. Aggressive patent infringement lawsuits launched by “patent trolls” or patent assertion entities tend to target small firms disproportionately (Chien, 2015). Delays in patenting in some cases can aggravate the patent troll problem. While patent assertion entities can play an important role in monetising innovation, the authorities should target abuses to ensure that innovation by new firms is not unfairly undermined. The Supreme Court acted in 2014 to give the court discretion to shift the attorney fees to the loser of patent litigation as one deterrent. The Federal Trade Commission is currently investigating the activities of patent assertion entities. #### Enhancing government support of innovation Government support for innovation tends to favour incumbent firms. Support for business R&D is amongst the more generous in the OECD, amounting to 0.25% of GDP in 2013. Most of the support comprises direct support, such as grants and procurement contracts, and can favour incumbents with established reputations. Tax incentives have remained relatively constant as a share of GDP over the past decade and in late 2015 were made permanent (Table 4). The R&D tax subsidy is relatively small in comparison to other OECD countries, where there has been a trend to making incentives more generous and simpler to use (OECD, 2015c). The US tax subsidy provides more support for incumbents relative to new entrants who may not benefit from non-refundable tax credits. Redesigning the R&D tax credit to make them refundable to new firms could support new enterprises more effectively, but would need to be balanced with increased costs of administration. Table 4. Past OECD recommendations on innovation Recommendation Actions taken since the 2014 Survey The federal R&D budget should be protected from the expenditure cuts. Make the R&D tax credit permanent The R&D tax credit was made permanent in 2015. Patent reform (America Invents Act) needs to be taken further by ensuring that courts grant injunction relief and damages awards for patent infringement that reflect realistic business practices and the relative contribution of patented components of complex products. The Supreme court has allowed costs to be shifted in cases of a lost appeal. Tertiary education attainment in STEM fields needs to be increased. An important step in doing so is improving access to quality secondary education so that students are better prepared for STEM tertiary studies. Every Student Succeeds Act was introduced in 2015. The 2017 budget proposal includes$4 billion in mandatory funding over three years for States to increase access to K-12 STEM coursework, and $80 million for a new, competitive programme to promote the redesign of secondary schools with a focus on STEM-themed schools that expand opportunities for all students, particularly girls and other under-represented groups in STEM fields. Establish a national innovation office to increase coherence and continuity in implementation of the national innovation strategy. No action taken. Other OECD countries have established productivity commissions. A number of proposals to support R&D further has included calls to establish a so-called patent box (often called an innovation box in the US context), which lowers the tax rate on income from patents and intellectual property. Proponents of such regimes may justify them on international competitiveness concerns or because firms may not be able to appropriate all the benefits from their inventions due to various spillovers. However, patent boxes typically provide the greatest tax benefit to the most profitable activities, and there is little evidence to suggest that this approach better addresses the externalities associated with R&D than other government support. In addition, patent boxes add substantial complexity to the tax system, often providing windfall gains to holders of existing intellectual property, and have less effect on cash flow for small firms. A further concern is that countries offering patent boxes, without significant R&D activity in the country, have attracted intellectual property activity through base erosion and profit shifting. With respect to the base erosion and profit shifting concern, the recent agreement on harmful tax practices, including certain intellectual property regimes, as part of the OECD/G20 Base Erosion and Profit Shifting Project will reduce opportunities to shift profits without having significant R&D expenditures in the country. The tax system can tilt the playing field against new and small firms (OECD, 2015b). For example, compliance costs for small and medium sized enterprises can be significant. Taking opportunities to simplify the tax code would mitigate these effects (Box 6). Box 6. Corporate tax reforms Previous Surveys have advocated reforms to the US corporate tax system, which combines high statutory marginal tax rates, a narrow base and numerous provisions that invite deadweight losses from tax-avoidance activities. In December 2015, Congress made some small changes as part of the 2015 omnibus budget legislation. The associated appropriations act permanently extended tax credits for R&D, expensing for small businesses, and a number of tax credits targeted at low-income households. A tax on the most expensive medical insurance plans from the 2010 Affordable Care Act (“Cadillac tax”) was deferred, delaying the incentive this tax is meant to provide for businesses to look for better value-for-money insurance coverage for their workers by two years. The recent wave of multinational corporations using inversions and interest deductions on intra-group borrowing to reduce their US tax liabilities is driven by the high statutory corporate tax rate, the world-wide taxation with deferral and foreign tax credits, and relatively weak international tax anti-avoidance rules. A number of international tax reform proposals call for lower corporate tax rates and tougher anti-avoidance rules. The President’s 2017 budget proposal would tighten the limitation on corporate interest deductions, impose a minimum tax on foreign source income, restrict hybrid tax structure arrangements designed to create stateless income, and tighten controlled foreign corporation rules. The Treasury in April 2016 introduced new regulations that limit earning stripping and tighten certain restrictions on inversions. Moving ahead on tax reform, including international tax reform, will require legislation, but the US administration has been actively engaged in other important changes are occurring in international taxation. The United States has committed to the outcomes of the OECD/G20 Base Erosion and Profit Shifting (BEPS) Project, endorsed by the G20 Leaders in November 2015, which include significant measures to improve the international framework for taxation of cross-border activities and reducing BEPS. The United States is already moving forward with the implementation of the BEPS recommendations on country-by-country reporting for the largest multinationals, which will provide important information to tax administrations for risk assessment purposes, and anticipation of this reporting by multinationals has already begun to discourage aggressive tax planning. It has also incorporated the minimum standard on treaty abuse and a mandatory binding arbitration provision into its new Model Tax Convention. Beyond BEPS implementation, the United States has recently taken steps through regulatory action to improve the transparency of single member limited liability companies to address weaknesses in the availability of ownership information identified by the Global Forum on Transparency and Exchange of Information for Tax Purposes. The United States will be subject to a new round of peer reviews which will also assess the new standard of beneficial ownership adopted by the Global Forum. This may require further action. The enactment of FATCA in 2010 and its full implementation in 2014 provided the basis for the Common Reporting Standard, which is modelled on FATCA. The US now has automatic exchange intergovernmental agreements in place with over 100 jurisdictions, has almost 200,000 foreign financial institutions registered to supply information under FATCA and has already exchanged information in this context, including providing information to those jurisdictions about their residents’ US accounts. The information supplied by the US through these agreements is not identical to the information required to be supplied under the Common Reporting Standard and Congress has yet to enact the required proposed legislation that would put the US on parity with the Common Reporting Standard with respect to the specific types of information exchanged. The United States should also commit to implement the OECD Common Reporting Standard on automatic exchange of financial account information by 2017 or 2018 as have 101 other members of the Global Forum. It is recognised that legislative action may be required to implement the latter recommendation #### Curbing market power and boosting competition Greater market power could account for a number of the features of the current expansion, including slow growth of capital expenditure, less business dynamism and slower productivity growth. While some indicators suggest greater concentration (Figure 21), this evidence is crude and may reflect factors besides market power. For functional markets, as assessed by the competition authorities, there has been relatively little change in anti-competitive behaviour and the competition agencies are active in pursuing competitive outcomes in specific markets. However, in sectors such as fixed-line telecommunications, internet access and pharmaceuticals the permitted market structure and patent protection blunt competitive pressures. For example, the Federal Trade Commission has estimated that pay-for-delay deals (whereby a patent holder makes payments to a potential competitor for not entering the market) which are still permissible in patenting disputes between pharmaceutical companies raise drug costs by$3.5 billion annually (FTC, 2010). The FCC in 2015 pre-empted state-level prohibitions on municipalities creating their own networks to help boost competition.
The United States is generally an open economy with comparatively few barriers to foreign merchandise trade, but some service sectors are less open to competitive pressures from foreign firms (Figure 22, Panel B). These include domestic air and maritime transport and courier services. In addition, annual quotas on the number of contractual and independent services suppliers blunt competitive pressures. Further progress in reducing barriers to trade in services could open the economy to greater competitive pressures. The recently concluded Trans Pacific Partnership goes some way in this direction. The ongoing Trade in Services Agreement negotiations also promote fair and open access across many service sectors. The Transatlantic Trade and Investment Partnership currently being negotiated with the European Union could have similar benefits, including concessions to roll back “Buy American” provisions for public procurement.
The strictness of regulation of professional services was close to the OECD average in 2008 (Figure 22, Panel A) and occupational licensing has grown considerably over the past decades (Kleiner and Krueger, 2013). In 2015, around one quarter of the population had a certificate or licence, with the prevalence rising for full-time workers. Some of the growth of licensing is related to shifts in the composition of economic activity towards sectors such as health. Indeed, the incidence of licensing is particularly pronounced for health care practitioners with over 70% coverage as well as public administration (mainly local) where 40% of workers hold licences. However, the rising prevalence of licensing requirements at the State level also suggests efforts to restrict entry. Indeed, wage premia tend to increase over time following the introduction of an occupational license (Han and Kleiner, 2016). On average, median weekly earnings are one third higher for workers holding a licence. In a number of cases, licensing is a standard requirement across the States, including for occupations such as pest control, bus and truck drivers, and barbers. In other cases, licensing is fairly widespread but not universal, such as for construction occupations in around 30 States. Finally, State-level licensing can be fairly idiosyncratic, including occupations such as interior design and floristry. The growth of occupational licensing also appears to have an effect on migration patterns, with people in occupations that are typically licensed less likely to move across State lines (CEA, 2015). This dynamic has likely contributed to the decline in inter-state migration (Molloy et al. 2014).
While difficult to measure, indirect indicators and anecdotal evidence suggest that the importance of zoning is rising over time (Furman, 2015). Zoning can exacerbate house price appreciation that often accompanies local productivity growth by artificially restricting housing supply. As house prices rise, a sorting on the basis of income tends to occur as fewer lower-income people can afford housing in these high-productivity areas, ultimately leading to residential segregation (Ganong and Shoag, 2015). Such effects contribute to mismatch, act as a drag on productivity, and hamper the ability of labour mobility to moderate income differences across the country.
## Making growth more inclusive and sustainable
Raising the growth rate overall will only benefit well-being in the long run if it is also inclusive and sustainable. Well-being is high on average in the United States (Box 7), but there is considerable heterogeneity with some groups of the population faring considerably better than others.
Inequality in income, wages and opportunity appear to have been growing over time. Growth rates in nominal labour compensation per worker have not kept pace with domestic prices and productivity, implying ongoing erosion of labour’s share of income and a growing share of non-labour income (which tends to flow to high earners). Moreover, gains in labour compensation are mainly flowing to those in the upper end of the income distribution, further widening income inequality (Figure 23). Existing assessments of income inequality are imperfect, including because population surveys involve mis-measurement. For instance, Meyer and Mittag (2015) document that underreporting of assistance programmes (such as SNAP and TANF) understates the effectiveness of anti-poverty policy. Nonetheless, median real disposable household income has not improved materially over the past two decades, though the impact of non-cash benefits, such as Medicaid, paints a more positive picture. Creating opportunities to participate in the labour market more fully would be an important step towards reducing income inequalities. Achieving this requires action to ensure individuals are able to acquire skills they need and do not face discrimination or other obstacles in the labour market.
Box 7. Well-being is high, but not for all
Well-being of the average household is high in the United States in comparison to the rest of the OECD (Figure 24, Panel A). This is particularly so in terms of income and wealth, but outcomes are comparatively strong in almost all dimensions. Only indicators of work-life balance are below the OECD average. However, behind the average significant differences emerge, particularly with respect to income and wealth (Figure 24, Panel B). New indicators of child well-being are comparatively weak in the United States (OECD, 2015d). In particular, children in the United States rank relatively poorly on health indicators (infant mortality, birth weight and obesity).
### Improving opportunities for all
Schools play an important role in developing the skills that employers are demanding and that offer pathways for children from disadvantaged backgrounds to better life outcomes (Table 5). The quality of schooling, as measured by the National Educational Assessment Program scores for mathematics and literacy as well as enrolment, has been improving (Figure 25, Panel A). In particular, the poorest performing States have been successful in narrowing the gap with other States. In large part this reflects substantial improvements in reducing the number of students performing poorly, which has followed a wave of school finance reforms that reduced resource inequalities between schools and boosted performance (Lafortune et al., 2016). In the early 1990s, large shares of students from ethic minority backgrounds failed to attain basic numeracy and literacy skills (up to 80% of students). Substantial improvements have led to this falling more recently, such that less than a half do not show at least basic numeracy and literacy, although there is still room for improvement given that the national average is less than one third of students (33% for mathematics and 29% of reading for 8th grade). Notwithstanding progress, student attainment is around the OECD average when measured by PISA (Figure 25, Panel B). The difficulties of education have left large segments of the US adult population with relatively weak skills by international comparison (Figure 25, Panel C). Workers with only basic skills are particularly susceptible to being unemployed, especially during cyclical downturns.
Table 5. Past OECD recommendations on education policy
Recommendation
Actions taken since the 2014 Survey
Improve quality secondary education to better prepare students for STEM tertiary studies.
The United States is taking policy actions to improve quality secondary education, based on a 5-year strategic plan for enhancing STEM education, including supporting state-led standards for secondary education; making investments toward the goal of preparing 100 000 more STEM-qualified teachers over the next decade; and initiating a STEM Master Teacher Corps. The 2017 budget proposal includes $4 billion in mandatory funding over three years for States to increase access to K-12 STEM coursework, and$80 million for a new, competitive program to promote the redesign of secondary schools with a focus on STEM-themed schools that expand opportunities for all students, particularly girls and other under-represented groups in STEM fields.
Greatly raise limits on Stafford loans, especially for unsubsidised direct loans, so that they cover the full cost of study. The interest rate on these loans should vary with the long-term bond rate. The default repayment plan should be income-contingent.
In August 2013 the Bipartisan Student Loan Certainty Act re-established interest rates for new Federal Direct Student Loans. Interest rates at origination are tied to the 10-year Treasury note, plus a margin, but are fixed for the life of the loan. For loans made between July 1, 2013 and June 30, 2014, the interest rate was 3.86% for undergraduates, 5.41% for graduate students, and 6.41% for PLUS loans. The bill also imposes a cap to ensure interest rates never exceed 8.25% for undergraduate students, 9.5% for graduate students, and 10.5% for PLUS borrowers. The Administration has expanded income driven repayment plans, allowing all who borrowed federal direct loans as students to cap their payments at 10 per cent of their monthly incomes.
Good schools can have a marked impact on student outcomes and support inter-generational income mobility (Chetty and Hendren, 2015). In this context, the Every Students Succeed Act of 2015 replaces the nationwide K-12 standards in No Child Left Behind, and gives States control in setting their own educational objectives. By setting ambitious targets, States can help ensure that their students are well prepared for the job market and can help narrow geographic differences in attainment. An important aspect of the new law is increased State accountability for educational outcomes, including an intervention requirement for underperforming schools. However, there is as yet no evidence of the impact of the Act and States will need to resist the temptation to revert to less demanding standards. Resources differences, such as the incidence of teacher shortages, vary across schools in line with the socio-economic background of students (OECD, 2015c). To offset these differences, tackling underperformance may require the States and the federal government to level the playing field for poorly-performing schools in poorer areas. Funding across States is currently largely regressive partly due to underfunding of federal programmes (Schanzenbach et al., 2016).
Investing in higher education significantly boosts the chances for an individual to be in employment, earn a higher wage and to move up the income scale over their lifetime. The expansion of higher education in the United States since the early 1980s has seen the prospects improve for many more people, including for people from disadvantaged backgrounds. Enrolment rates have been rising steadily, reaching about two-fifths of 18-24 year olds in 2015. Attainment rates have also been trending up, though with around one-third of 25-29 year olds now having completed bachelor degrees. The measure of attainment remains somewhat lower than enrolment. In part this is due to students dropping out, particularly in private for profit colleges where only around one-third of students successfully complete their studies.
Since the early 1980s, college fees have risen steadily, making college education increasingly expensive. Whereas the annual fees for private non-profit was equivalent to a quarter of median household disposable income in the early-1980s, these costs are now approaching 60%. Fees are smaller for public schools and 2-year colleges, but have been increasingly equally rapidly. Partly as a consequence, student debt has been rising as a proportion of household debt (Figure 26). While student debt is not problematic if it enhances earning power, it can load students with debt, which is difficult to discharge, in low-quality degree programmes. In response to emerging problems, the federal government has limited the ability of students to obtain loans for institutions that are performing poorly and have put in place several measures that enable borrowers to shift to income-contingent repayment.
### Boosting jobs
Raising employment will also raise output and well-being and - by bringing in groups of the population that have faced difficulties in finding jobs - make growth more inclusive. Employment rates dropped during the crisis and now stand below other major economies (Figure 27). In part, this reflected the severity of the recession, but also demographic pressures, generally rising disability rolls, and educational enrolment that have been pushing participation rates down. The decline in participation for prime-age individuals is in marked contrast to elsewhere in the OECD, particularly for women. Certain population groups face greater difficulties in finding rewarding work and removing the barriers they currently face would help boost employment. Immigration reform presents another means to boost labour supply, though moving forward on different proposals has proven difficult politically.
Women have greatly improved their economic opportunities, working longer and earning higher incomes, with benefit to overall society, yet there is ample scope for further progress. Women’s participation in the labour force and employment rates remain well below men’s and have been falling back recently such that they are now below those of Germany and Japan. On the other hand, American women are far more likely to work full time (around three quarters against one half in Germany). The United States remains the only OECD country that does not offer paid parental leave on a national basis. In the States that require paid leave, the likelihood that women work increases (Adema et al., 2015). Furthermore, employers can benefit from reduced replacement and training costs as women return to work for the same companies. Differences in State policies concerning paid leave, child and elderly care suggest that States with policies that support greater flexibility (paid parental leave, better quality child care and old age care) also have with higher female employment, including in managerial and professional occupations. The 2014 Economic Survey recommended that access to paid family leave be expanded nationally (Table 6). It also recommended improving the flexibility of working arrangements, increasing access to quality pre-school and childcare to help struggling families better balance work and family commitments. These remain policy priorities for the administration. Second earners, most of whom are women, generally face higher marginal tax rates on labour force participation decisions due to the US family-based tax system in combination with progressive tax rates.
Table 6. Past OECD recommendations to improve work-life balance
Recommendation
Actions taken since the 2014 Survey
Provide support to parents with young children by expanding access to paid family leave nationally.
The proposed 2017 budget includes $2.2 billion to support the creation of State paid leave programmes as well as offer federal employees six weeks of paid leave. Since 2014 some States, such as California, have introduced State-wide programmes and more than 20 cities or counties, such as San Francisco, require paid maternity leave. Help States develop right-to-ask policies to support flexible working arrangements. Since June 2014, all federal employees have such rights. Increase access of low and moderate-income families to quality preschool and childcare. The Preschool for All initiative would invest$75 billion over 10 years with the aim of providing access to high quality preschool for all 4-year olds from low and moderate income families. Support is included in the 2017 budget proposal.
Improving opportunities not only requires breaking down barriers to finding work but also being appropriately remunerated. The gender pay differential, as measured by the differences in the median wages for men and women, has fallen in the United States (Figure 28), although substantial differences remain across States. Part of this wage inequality arises from sorting by occupation and firm. Men typically work at higher paying firms and receive larger wage increases when they switch jobs. Further progress in closing gender wage gaps requires changes in job structure and remuneration, particularly if job flexibility comes at the cost of reduced hourly wages (Goldin, 2015). States with more flexible work arrangements tend to have greater employment rates and smaller gender wage gaps. With women increasingly outperforming men at all levels of education, failure to make occupations attractive to women will hold back the economy and individual well-being (Table 7).
Table 7. Gender inequalities are large
United States
OECD
Women
Men
Women
Men
Health status
Life expectancy at birth (years)
81
76
83
78
Share of people in good/very good health conditions
89%
90%
67%
72%
Education and skills
Tertiary degrees awarded (all fields)
58%
42%
58%
42%
Jobs and Earnings
Employment rates (tertiary educated individuals)
76%
84%
79%
88%
Wage gap between men and women
+18%
+16%
33%
49%
37%
30%
Work-Life balance
Number of hours dedicated to household tasks (per week)
27
18
32
21
Civic engagement and governance
Share of seats in national parliament
19%
81%
29%
71%
Personal security
Share of people feeling safe when walking alone at night
67%
82%
61%
79%
Subjective well-being
Level of life satisfaction on a 0 to 10 scale
7.2
7.1
6.6
6.5
Source: OECD Better Life Index.
Substantial gaps in the median earnings of full-time workers also exist across races (Figure 29). Black and African American and Hispanic and Latino male workers earn a bit less than three quarters of that earned by white males. The gaps between women workers across races are less pronounced. Asian workers generally earn significantly more than other groups. In part, differences in educational attainment may account for these differences, though improvement in test scores by minority students over time has not translated into the wage gap narrowing over the last few decades. Black and Hispanic workers tend to work in lower-paying jobs and their returns to experience have tended to be lower.
Current disability insurance provides little incentive to re-enter the labour market for those whose health condition has improved and would like to work, as earnings above a limit will lead to the disability benefit being withdrawn. People qualifying for disability benefits also qualify for Medicare. Currently, there are some programmes aimed at helping transition individuals back into the workforce, such as retraining, continuing cash benefits for a period of time, and extending Medicare benefits for 102 months after resuming work. These efforts should be carefully evaluated and, if needed, the incentives should be strengthened to get off disability rolls for people who want to work and are again capable of doing so. The number of disability benefit recipients, which exceeded 10 million in 2014, now exceeds the number of unemployment benefit recipients, which dropped below 8 million in 2015. The previous OECD Economic Survey recommended encouraging greater labour market attachment, both by helping maintain labour force attachment during the claims process and by reducing the disincentives to work once receiving disability insurance (Table 8).
Table 8. Past OECD recommendations on disability and health care reform
Recommendation
Actions taken since the 2013 Survey
Provide comprehensive work support to get disability recipients back to work.
The 2014 Workforce Innovation and Opportunity Act put some emphasis on states putting in place policies to improve employment outcomes for people with disabilities
Reform the individual and small-group market to facilitate greater risk pooling. To this end, require community-rated and guaranteed issue policies and make health insurance compulsory. Introduce means-tested subsidies to help low-income persons afford health insurance.
These were features of the Affordable Care Act of 2010.
Another group facing difficulties in finding jobs are people with criminal records. By some estimates, almost 30% of the adult population have been arrested, and even those who are released without charge may still have a record that shows up during a background check (Solomon, 2012). The United States has the highest incarceration rate in the OECD by a considerable margin. In 2009, over 754 persons for every 100 000 population were incarcerated, compared to 140 on average in the OECD. The administration is working to reduce incarceration and efforts in Congress are underway to reform the criminal justice system to reduce incarceration through the reduction of overly long sentences. The administration is also working to help reintegrate individuals who have been incarcerated into the labour market though improving access to employment, job training, housing and healthcare. These actions are needed, as possessing a criminal record reduces employment prospects. In some cases, criminal records may be misused to discriminate on the basis of race, which can compound the disadvantages certain groups already face in the labour market, stemming from factors including poorer quality schools, residential segregation and discrimination (Bertrand and Mullainathan, 2004). Given the over-representation of blacks in prison populations, black males, particularly young males, have much higher unemployment rates and lower employment rates (Figure 30). To counter discrimination, 23 States and the District of Columbia (and over 100 cities or counties) have introduced “ban the box”, which removes pre-screening questions from application forms, but does not prevent firms subsequently checking a candidate’s past. A number of major firms and the federal government have removed these questions during their recruitment processes. Rolling out this initiative nationally would give this marginalised group a fairer chance in getting a job. On the downside, new empirical evidence suggests job applicants without work experience suffer because potential employers have introduced new questions about work experience as a way to mimic the criminal record question.
#### Reducing mismatch
Reducing mismatch between the supply and demand of skills is a means to boost growth while raising well-being. Past technological disruptions have eliminated some jobs, but also created others, making it important to facilitate the skilling and reskilling of workers throughout their working lives. The pace at which changes are occurring is now arguably faster and policy settings need to be adjusted to keep pace with it. Reducing mismatch has become more complicated with the decline in business dynamism. Fewer people have been leaving their job voluntarily and moving to new jobs in the aftermath of the last two recessions (Hyatt and Spletzer, 2013). Historically, this type of job switching has been closely associated with individual earning growth, reflecting gains from better resource allocation and matching the demand and supply of skills. The evidence from Mukoyama (2015) suggests that a share of the recent slowdown in total factor productivity (up to 0.5% annual decline) can be traced to workers finding it harder to move to a job that better match their skills. OECD empirical work on policies that could reduce mismatch, such as bankruptcy procedures and housing policies, suggests opportunities to raise the level of US output by over 3% (Adalet McGowan and Andrews, 2015).
A growing number of firms are aware of the societal challenges posed by inequality of opportunity and are beginning to address them by raising their own minimum wage, improving working conditions (such as offering parental leave), allowing greater flexibility and removing screening on the basis of criminal records. It is in the interest of businesses to engage with educational institutions, both to ensure the right skills are being taught and to counter problems with professional certificates not being portable across the country and which contributes to mismatch (Table 9). In some sectors, such as construction, firms have worked with educational institutions to ensure that credentials are widely recognised. Helping other sectors reach similar arrangements will ensure the right skills are being taught while increasing the scope of opportunity for students. The proposed FY2017 budget calls for a tax credit to strengthen partnerships between businesses and community colleges.
Table 9. Past OECD recommendations on business sector contribution to well-being
Recommendation
Actions taken since the 2014 Survey
Strengthen the portability and recognition of training by involving employers in programme design.
The Department of Health and Human Services introduced the Licensure Portability Grant Program to support State professional licensing boards cooperating in reducing statutory and regulatory barriers to telemedicine. The FY2016 Budget included $7.5 million to support efforts by a consortium of States to expand reciprocity for a range of occupational licenses. The proposed FY2017 budget also calls for a community college partnership tax credit. This proposal would provide businesses with a new tax credit for hiring graduates from community and technical colleges as an incentive to encourage employer engagement and investment in these education and training pathways. The proposal would provide$500 million in tax credit authority for each of the five years, 2017 through 2021. The tax credit authority would be allocated annually to states on a per capita basis and would be available to qualifying employers that hire qualifying community college graduates
Work with employers in preventing the negative effects of job strain on mental health, prolonged sick leaves, job loss and disability-benefit claims
No action taken.
Raise labour earnings at the low end by expanding the EITC, which would be more effective if supported by a higher minimum wage.
EITC expansions were made permanent in 2015 as were similar expansions made to the Child Tax Credit. The 2017 budget proposes to expand the earned income tax credit to workers without qualifying children. 14 States and over 30 cities and counties have introduced minimum wages that are higher than the federal minimum wage in 2014 and 2015.
Investment in health can also make growth more inclusive and expand opportunities. The introduction of the Affordable Care Act has led to considerable progress in addressing the lack of health insurance coverage (Figure 31). As a result, fewer people lack access to adequate healthcare coverage, which could enable them to participate in the job market or take more productive jobs. Among other things, the law helps address shortcomings with employer-provided healthcare coverage, which can create lock-in effects that discourage inter-firm mobility. The expansion of coverage has also helped ease access to healthcare for would-be entrepreneurs, who previously would have relied on a spouse’s employer-provided coverage, purchased more expensive coverage in the individual market, or opted for no coverage.
## Meeting environmental challenges
Ensuring environmental sustainability is an overarching challenge. Greenhouse gas emissions remain relatively high per capita, but are around one-tenth lower than their peak in 2007. The United States has recently made a number of agreements to support emission reduction, notably with China. Reducing carbon dioxide emissions remains an area in which the US performs relatively poorly in comparison with the rest of the OECD, despite the strengthening of fuel economy standards, and significant use of policies and incentives for renewable energy and energy efficiency at the State level (Box 8). The Administration has proposed a $10 per barrel oil fee to fund infrastructure. Implementation of further measures to abate climate change should include the roll-out of Clean Power Plan for new, modified, and reconstructed coal, oil and natural gas-fired power plants Box 8. Economy-environment linkages in the United States The US economy benefits from abundant natural resources. Their contribution to output growth has been positive until 2013 (Figure 32). Energy intensity and carbon intensity of the economy have declined. The United States is a net importer of CO2 emissions, which started to grow in the previous decade and stabilised after the 2008 crisis. The rising share of gas in energy production lowers CO2 intensity although fugitive emissions of methane from fracking are an offsetting factor. The exposure of the population to air pollution by fine particulates has declined steadily for several decades even in periods of stronger economic growth, although still above the WHO target levels for long-term exposure (10 µg/m³). Such pollution is not only an industrial or urban phenomenon. High levels of particle pollution in Los Angeles are likely associated with manufacturing and the urban environment. Neighbouring Central Valley, also with high levels of pollution, is predominantly agricultural (EPA Green Book). Municipal waste generation, though it has declined on a per capita basis, remains higher than the OECD, though less so when compared with GDP. Recycling rates are below the best performers, though similar to the OECD average. For non-recycled waste, the share dealt with through incineration is well below average as low overall population density often makes landfill the preferred option. Revenues from environmentally related taxes are very much lower than in other countries, largely because of low energy taxation. The average tax rate on motor fuel is between 10 and 20% of the level in Europe, for example. Diesel is taxed higher than gasoline in the United States, more in line with their relative externalities than in most other OECD countries. The backlog of investment in public infrastructure covers many sectors including public water supply and sewage systems. Many legacy sites with hazardous waste have been cleaned up over the 30 year history of the Superfund and other programmes but resources devoted to this have diminished. Water quality can be seriously affected by polluted sites. In other areas, for example in California, water availability is the key issue, as water pumping, mainly for agriculture, from some of the largest aquifers continues at unsustainable rates. A high proportion of innovation activity is associated with green inventions, higher than the average for the OECD. The United States is a leader in innovation on energy-efficiency technologies, although only a small fraction of government R&D support (less than 2.5%) is allocated to the environment and energy. R&D expenditure on energy technologies has been relatively stable since 1990, but picked up more recently and as a share of GDP is around average for the OECD. The Administration recently proposed the 21st Century Clean Transportation System, an initiative designed to address the challenges of climate change mitigation. The plan would levy modest fees on oil producers to finance investment in clean transportation infrastructure and to promote innovation in clean technologies. Putting a price on greenhouse gas emissions and supporting innovation in clean technologies were previous recommendations in past Economic Surveys ( Table 10). While implementing climate change policy may have an impact on the budget, the net effect on public finances over the longer term is ambiguous. The impact will partly depend on innovation, the speed at which businesses and individuals adopt greener environmental solutions and the extent to which adaptation costs are reduced as a result. In addition, to the extent that addressing climate change also reduces local air pollution, the impact on health of reduced pollution would boost productivity and reduce health care costs (OECD, 2014). Table 10. Past OECD recommendations on environmental sustainability and energy Recommendation Actions taken since the 2014 Survey Further lower emissions with efficient policy tools, as part of the climate-change strategy, notably by putting a price on greenhouse gas emissions, though well-designed regulation and investment in renewables also have a role to play. The Clean Power Plan of 2015 aims to reduce greenhouse gas emissions from electricity generation, from reducing emissions from coal-fired power stations while promoting renewables. Promote innovation in energy saving and low carbon technology. The 21st Century Clean Transportation System would fund low carbon technology and infrastructure. Ensure that trade restrictions do not hamper energy exports. The ban on crude oil exports was lifted at the end of 2015 Managing water presents challenges in ensuring safe supply and long-run sustainability. Water service providers are often small municipal corporations that lack the institutional capacity needed to raise funding for major capital investments. However, many drinking water systems are ageing and require upgrading to meet environmental requirements. The Environmental Protection Agency has estimated that$384 billion is needed over 20 years to maintain and improve water infrastructure, with the majority accounted for by investment in transmission and distribution networks (EPA, 2013). In addition to water supply systems, large swathes of the country confront challenges in addressing groundwater depletion and water stress, which may ultimately reduce drinking water sources and lead to desertification and saltwater intrusion in coastal areas. Co-ordinating groundwater withdrawals across catchment areas and between the multiple uses of water has proven challenging in parts of the country.
## Bibliography
Abiad, A., D. Furceri and P. Tepalova (2015), “The Macroeconomic Effects of Public Investment: Evidence from Advanced Economies”, IMF Working Paper, No. WP/15/95.
Adalet McGowan, M. and D. Andrews (2015), “Labour Market Mismatch and Labour Productivity: Evidence from PIAAC Data”, OECD Economics Department Working Paper, No. 1209
Adema, W,, C. Clarke and V. Frey (2015), “Parental Leave for Inclusive Growth in the United States” OECD Social, Employment and Migration Working Papers, No. 172.
Ahrend, R., E. Farchy, I. Kaplanis and A. Lembcke (2015), “What Makes Cities More Productive? Agglomeration Economies and the Role of Urban Governance: Evidence from 5 OECD Countries”, SERC Discussion Papers, No 0178, Spatial Economics Research Centre, LSE.
Akerlof, G., W. Dickens and G. Perry (1996), “The Macroeconomics of Low Inflation”, Brookings Papers of Economic Activity, 1, pp. 1-76.
Andrews, D., C. Criscuolo and P. Gal (2015), “Frontier Firms, Technology Diffusion and Public Policy: Micro Evidence from OECD Countries”, OECD Productivity Working Papers, No. 02.
Auerbach, A., And Y. Gorodnichenko (2013), “Fiscal Multipliers in Recession and Expansion”, in A. Alesina and F. giavazzi, eds, Fiscal Policy after the Financial Crisis, NBER Books, National Bureau of Economic Research.
Austin, D. (2015), “Pricing Freight Transport to Account for External Costs”, CBO Working Paper, No. 2015-03.
Bertrand, M. and S. Mullainathan (2004), “Are Emily and Greg More Employable than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination”, American Economic Review, Vol. 90, No. 4, pp. 991-1013.
Blanchard, O., G. Dell’Ariccia and P. Mauro (2010), “Rethinking Macroeconomic Policy”, IMF Staff Position Note.
Boik, A. “The Economics of Universal Service: An Analysis of Entry Subsidies for High Speed: Broadbank” Net Institute Working Paper, No. 15-11.
Broecke, S., G. Quintini and M. Vandeweyer (2016), “Wage Inequality and Cognitive Skills: Re-opening the Debate”, NBER Working Paper, No. 21965.
Byrne, D. and E. Pinto (2015), “The Recent Slowdown in High-tech Equipment Price Declines and Some Implications for Business Investment and Labour Productivity”, FEDS Notes, March 26, 2015.
CBO (2015) “The Status of the Highway Trust Fund and Options for Paying for Highway Spending”, Testimony before the Committee on Ways and Means, U.S. House of Representatives, July 17, 2015.
CBO (2016), An Analysis of the President’s 2017 Budget, Congressional Budget Office, Washington, D.C.
CEA (2014), “An Economic Analysis of Transportation Infrastructure Investment”, A report prepared by the National Economic Council and the President’s Council of Economic Advisors.
CEA (2015), Economic Report of the President, Council of Economic Advisers, Washington, D.C..
Chetty, R., and N. Hendren (2015), “The Impacts of Neighborhoods on Intergenerational Mobility: Childhood Exposure Effects and Country-level Estimates”, Mimeo.
Chien, C. (2012), “Startups and Patent Trolls”, Santa Clara University Legal Studies Research Paper, No. 09-12.
Cerqueiro, G., D. Hegde, M. Penas and R. Seamans (2016), “Debtor Rights, Credit Supply and Innovation”, Mimeo.
Decker, R., J. Haltiwanger, R. Jarmin and J. Miranda (2015), “Changes in Business Dynamism: Volatility of vs. Responsiveness to Shocks?”, Mimeo.
Delong, J., and L. Summers (2012), “Fiscal Policy in a Depressed Economy”, Brookings Papers on Economic Activity, Spring, pp. 233-297.
Elmendorf, D., and L. Sheiner (2016), “Federal Budget Policy with an Aging Population and Persistently Low Interest Rates”, Hutchings Center on Fiscal and Monetary Policy Working Paper, No. 18.
Farre_Mensa, J., D. Hegde and A. Ljungqvist (2015), “The Bright Side of Patents”, USPTO Working Paper, No. 2015-5.
Federal Trade Commission (2010), “Pay-for-Delay: How Drug Company Pay-Offs Cost Consumers Billions”, An FTC Staff Study, January 2010.
Furman, J. (2015), “Barriers to Shared Growth: The Case of Land Use Regulation and Economic Rents”, Remarks at The Urban Institute, November 20, 2015.
Glocker, D., and R. Ahrend (2016, forthcoming), “Impact of Governance Structure and Resulting Investment Choices in the United States”, OECD Economics Department Working Paper.
Goldon, C. (2015), “How to Achieve Gender Equality”, The Milken Institute Review, Third Quarter, pp. 24-33.
Gonang, P. and D. Shoag (2015), “Why Has Regional Income Convergence in the U.S. Declined?”, Mimeo Harvard Kennedy School.
Gropp, R., J. Scholz, and M. White. (1997), “Personal Bankruptcy and Credit Supply and Demand”, The Quarterly Journal of Economics, Vol. 112, No. 1, pp. 217-251.
Gruber, J. and S. Kamin (2015), “The Corporate Saving Glut in the Aftermath of the Global Financial Crisis”, International Finance Discussion Papers, No. 1150.
Han, S. and M. Kleiner (2015), “Analyzing the Duration of Occupational Licensing on the Labor Market”, Mimeo.
Hermansen, M. and O. Röhn (2016), “Economic Resilience: he Usefulness of Early Warning Indicators in OECD Countries”, OECD Economics Department Working Paper, No. 1250.
Hulten, C. and V. Ramey (2015), “Skills, Education, and U.S. Economic Growth: Are U.S. Workers Being Adequately Prepared for the 21st Century World of Work?”, Mimeo.
Hyatt, H. and J. Spletzer (2013), “The Recent Decline in Employment Dynamics”, IZA Journal of Labor Economics, Vol. 2, No. 5.
Kleiner, M. and A. Krueger (2013), “Analyzing the Extent and Influence of Occupational Licensing on the Labor Market”, Journal of Labour Economics, Vol. 31, No. 2, pp. S173-
Kohn, D. (2014), “Institutions for Macroprudential Regulation: The UK and the U.S.”, Speech, Kennedy School of Government, Harvard University.
Krugman, P. (1998), “It’s Baaack: Japan’s Slump and the Return of the Liquidity Trap”, Brookings Papers on Economic Activity 2, pp. 137-205.
Lorenzoni, L., A. Belloni and R. Sassi (2014), “Health-care Expenditures and Health Policy in the USA Versus Other High-Spending OECD Countries”, The Lancet, Vol. 384, No. 9937, pp. 83-92.
Lorenzoni, L, J. Millar, F. Sassi and D. Sutherland (2016, forthcoming), “Drivers of Health-Care Expenditures Trends in OECD Countries”, OECD Economics Department Working Paper.
McKinsey Global Institute (2015), Digital America: A Tale of the Haves and Have-Mores, McKinsey and Company.
Meyer, B., and N. Mittag (2015), ”Using Linked Survey and Administrative Data to Better Measure Income: Implications for Poverty, Program Effectiveness and Holes in the Safety Net”, NBER Working Paper, No. 21676.
Molloy, R., C. Smith and A. Wozniak (2014), “Declining Migration within the US: The Role of the Labor Market”, Finance and Economics Discussion Series, No. 2013-27.
Mukoyama, T. (2013), “The Cyclicality of Job-to-Job Transitions and its Implications for Aggregate Productivity”, International Finance Discussion Papers, No. 1074.
OECD (2010), Sickness, Disability and Work: Breaking the Barriers, OECD Publishing Paris.
OECD (2012), “Reducing Income Inequality while Boosting Economic Growth: Can it Be Done?”, Economic Policy Reforms 2012: Going for Growth, OECD Publishing Paris.
OECD (2014), The Cost of Air Pollution: Health Impacts of Road Transport, OECD Publishing, Paris.
OECD (2015b), Taxation of SMEs in OECD and G20 Countries, OECD Tax Policy Studies, No. 23, OECD Publishing Paris.
OECD (2015c), PISA 2012 Results: What Makes Schools Successful? Resources, Policies and Practices: Volume IV, OECD Publishing, Paris.
OECD (2015c), OECD Science, Technology and Industry Scoreboard 2015: Innovation for Growth and Society, OECD Publishing, Paris.
OECD (2015d), How’s Life 2015: Measuring Well-Being, OECD Publishing, Paris.
Paik, Y. (2013), “The Bankruptcy Reform Act of 2005 and Entrepreneurial Activity”, Journal of Economics & Management Strategy, Vol. 22, No. 2, pp. 259-280.
Petri, P., and M. Plummer (2016), “The Economic Effects of the Trans-Pacific Partnership: New Estimates”, Peterson Institute of International Economics Working Paper, No. 16-2.
Rohlin, S. and A. Ross (2016), “Does Bankruptcy Law Affect Business Turnover? Evidence from New and Existing Businesses”, Economic Inquiry, Vol. 54, No. 1, pp. 361-374.
Röhn, O., A. Caldera Sánchez, M. Hermansen and M. Rasmussen (2016), “Economic Resilience: A New Set of Vulnerability Indicators for OECD Countries”, OECD Economics Department Working Paper, No. 1249.
Schanzenbach, A., D. Boddy, M. Mumford and G. Nantz (2016), Fourteen Economic Facts on Education and Economic Opportunity, The Hamilton Project
Solomon, A. (2012), “In Search of a Job: Criminal Records as Barriers to Employment”, National Institute of Justice Journal, Issue No. 207, pp. 42-51.
Summers, L. (1991), “How Should Long-Term Monetary Policy Be Determined”, Journal of Money, Credit and Banking, Vol. 23, No. 3. | 2020-02-18 15:26:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2470310926437378, "perplexity": 6900.276116711186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143784.14/warc/CC-MAIN-20200218150621-20200218180621-00506.warc.gz"} |
https://cs.stackexchange.com/questions/97036/is-every-regular-context-free-langauge-decidable-in-logspace | Is every regular/context free langauge decidable in LogSpace?
I know all the regular languages are decidable but not sure whether it can be done in LogSpace.
It is unknown whether context-free languages can be decided in logarithmic space. The best known result shows that they can be decided in nondeterministic space $O(\log^2 n)$. It is also known that if all context-free languages can be decided in deterministic logarithmic space then $\mathsf{L} = \mathsf{NL}$, which is considered unlikely.
For regular languages, we have $$\text{REG = DSPACE}(O(1)) = \text{NSPACE}(O(1))$$ where REG is the class of regular languages. This can be easily seen from the equivalent formalisms of regular languages. A language is a regular language iff it is the language accepted by a deterministic finite automaton. A language is a regular language iff it is the language accepted by a nondeterministic finite automaton.
In fact, $\text{REG = DSPACE}(o(\log \log n))$. That is, $Ω(\log\log n)$ space is required to recognize any non-regular language [1]. | 2019-08-26 09:20:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6142867803573608, "perplexity": 232.01765323657472}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331485.43/warc/CC-MAIN-20190826085356-20190826111356-00415.warc.gz"} |
https://math.libretexts.org/Bookshelves/Applied_Mathematics/Applied_Finite_Mathematics_(Sekhon_and_Bloom)/02:_Matrices/2.06:_Applications__Leontief_Models | # 2.6: Applications – Leontief Models
##### Learning Objectives
In this section we will examine an application of matrices to model economic systems.
In the 1930's, Wassily Leontief used matrices to model economic systems. His models, often referred to as the input-output models, divide the economy into sectors where each sector produces goods and services not only for itself but also for other sectors. These sectors are dependent on each other and the total input always equals the total output. In 1973, he won the Nobel Prize in Economics for his work in this field. In this section we look at both the closed and the open models that he developed.
## The Closed Model
As an example of the closed model, we look at a very simple economy, where there are only three sectors: food, shelter, and clothing.
##### Example $$\PageIndex{1}$$
We assume that in a village there is a farmer, carpenter, and a tailor, who provide the three essential goods: food, shelter, and clothing. Suppose the farmer himself consumes 40% of the food he produces, and gives 40% to the carpenter, and 20% to the tailor. Thirty percent of the carpenter's production is consumed by himself, 40% by the farmer, and 30% by the carpenter. Fifty percent of the tailor's production is used by himself, 30% by the farmer, and 20% by the tailor. Write the matrix that describes this closed model.
Solution
The table below describes the above information.
Proportion produced by the farmer Proportion produced by the carpenter Proportion produced by the tailor The proportion used by the farmer .40 .40 .30 The proportion used by the carpenter .40 .30 .20 The proportion used by the tailor .20 .30 .50
In a matrix form it can be written as follows.
$A=\left[\begin{array}{lll} .40 & .40 & .30 \\ .40 & .30 & .20 \\ .20 & .30 & .50 \end{array}\right] \nonumber$
This matrix is called the input-output matrix. It is important that we read the matrix correctly. For example the entry $$A_{23}$$, the entry in row 2 and column 3, represents the following.
$$A_{23}$$ = 20% of the tailor's production is used by the carpenter.
$$A_{33}$$ = 50% of the tailor's production is used by the tailor.
##### Example $$\PageIndex{2}$$
In Example $$\PageIndex{1}$$ above, how much should each person get for his efforts?
Solution
We choose the following variables.
$$x$$ = Farmer's pay $$y$$ = Carpenter's pay $$z$$ = Tailor's pay
As we said earlier, in this model input must equal output. That is, the amount paid by each equals the amount received by each.
Let us say the farmer gets paid $$x$$ dollars. Let us now look at the farmer's expenses. The farmer uses up 40% of his own production, that is, of the x dollars he gets paid, he pays himself .40x dollars, he pays .40y dollars to the carpenter, and .30z to the tailor. Since the expenses equal the wages, we get the following equation.
$x=.40 x+.40 y+.30 z \nonumber$
In the same manner, we get
\begin{aligned}
y=&.40 x+.30 y+.20 z \\
z=&.20 x+.30 y+.50 z
\end{aligned}
The above system can be written as
$\left[\begin{array}{l} x \\ y \\ z \end{array}\right]=\left[\begin{array}{lll} .40 & .40 & 30 \\ .40 & .30 & .20 \\ .20 & .30 & .50 \end{array}\right]\left[\begin{array}{l} x \\ y \\ z \end{array}\right] \nonumber$
This system is often referred to as $$X = AX$$
Simplification results in the system of equations $$(I - A) X = 0$$
\begin{aligned}
.60 x-.40 y-.30 z &=0 \\
-.40 x+.70 y-.20 z &=0 \\
-.20 x-.30 y+.50 z &=0
\end{aligned}
Solving for $$x$$, $$y$$, and $$z$$ using the Gauss-Jordan method, we get
$x =\frac{29}{26}t \quad y = \frac{12}{13}t \quad \text{ and } z = t \nonumber$
Since we are only trying to determine the proportions of the pay, we can choose t to be any value. Suppose we let $$t\(= 2600, then we get $x =\2900 \quad y = \2400 \quad \text{ and } z = \2600 \nonumber$ Note: The use of a graphing calculator or computer application in solving the systems of linear matrix equations in these problems is strongly recommended. ## The Open Model The open model is more realistic, as it deals with the economy where sectors of the economy not only satisfy each other’s’ needs, but they also satisfy some outside demands. In this case, the outside demands are put on by the consumer. But the basic assumption is still the same; that is, whatever is produced is consumed. Let us again look at a very simple scenario. Suppose the economy consists of three people, the farmer F, the carpenter C, and the tailor T. A part of the farmer's production is used by all three, and the rest is used by the consumer. In the same manner, a part of the carpenter's and the tailor's production is used by all three, and rest is used by the consumer. Let us assume that whatever the farmer produces, 20% is used by him, 15% by the carpenter, 10% by the tailor, and the consumer uses the other 40 billion dollars worth of the food. Ten percent of the carpenter's production is used by him, 25% by the farmer, 5% by the tailor, and 50 billion dollars worth by the consumer. Fifteen percent of the clothing is used by the tailor, 10% by the farmer, 5% by the carpenter, and the remaining 60 billion dollars worth by the consumer. We write the internal consumption in the following table, and express the demand as the matrix D. F produces C produces T produces F uses .20 .25 .10 C uses .15 .10 .05 T uses .10 .05 .15 The consumer demand for each industry in billions of dollars is given below. $\mathrm{D}=\left[\begin{array}{c} 40 \\ 50 \\ 60 \end{array}\right] \nonumber$ ##### Example \(\PageIndex{3}$$
In the example above, what should be, in billions of dollars, the required output by each industry to meet the demand given by the matrix $$D$$?
Solution
We choose the following variables.
x = Farmer's output
y = Carpenter's output
z = Tailor's output
In the closed model, our equation was $$X = AX$$, that is, the total input equals the total output. This time our equation is similar with the exception of the demand by the consumer.
So our equation for the open model should be $$X = AX + D$$, where $$D$$ represents the demand matrix.
We express it as follows:
$X = AX + D \nonumber$
$\left[\begin{array}{l} x \\ y \\ z \end{array}\right]=\left[\begin{array}{lll} .20 & .25 & .10 \\ .15 & .10 & .05 \\ .10 & .05 & .15 \end{array}\right]\left[\begin{array}{l} x \\ y \\ z \end{array}\right]+\left[\begin{array}{l} 40 \\ 50 \\ 60 \end{array}\right] \nonumber$
To solve this system, we write it as
$\begin{array}{l} X=A X+D \\ (I-A) X=D \quad \text{ where I is a 3 by 3 identity matrix }\\ X=(I-A)^{-1} D \end{array} \nonumber$
$\mathrm{I}-\mathrm{A}=\left[\begin{array}{ccc} .80 & -.25 & -.10 \\ -.15 & .90 & -.05 \\ -.10 & -.05 & .85 \end{array}\right] \nonumber$
$(\mathrm{I}-\mathrm{A})^{-1}=\left[\begin{array}{ccc} 1.3445 & .3835 & .1807 \\ .2336 & 1.1814 & .097 \\ .1719 & .1146 & 1.2034 \end{array}\right] \nonumber$
${X}=\left[\begin{array}{ccc} 1.3445 & .3835 & .1807 \\ .2336 & 1.1814 & .097 \\ .1719 & .1146 & 1.2034 \end{array}\right]\left[\begin{array}{c} 40 \\ 50 \\ 60 \end{array}\right] \nonumber$
$X=\left[\begin{array}{l} 83.7999 \\ 74.2341 \\ 84.8138 \end{array}\right] \nonumber$
The three industries must produce the following amount of goods in billions of dollars.
Farmer = $83.7999 Carpenter =$74.2341 Tailor = $84.813 We will do one more problem like the one above, except this time we give the amount of internal and external consumption in dollars and ask for the proportion of the amounts consumed by each of the industries. In other words, we ask for the matrix $$A$$. ##### Example $$\PageIndex{4}$$ Suppose an economy consists of three industries F, C, and T. Each of the industries produces for internal consumption among themselves, as well as for external demand by the consumer. The table shows the use of each industry's production ,in dollars. F C T Demand Total F 40 50 60 100 250 C 30 40 40 110 220 T 20 30 30 120 200 The first row says that of the$250 dollars worth of production by the industry F, $40 is used by F,$50 is used by C, $60 is used by T, and the remainder of$100 is used by the consumer. The other rows are described in a similar manner.
Once again, the total input equals the total output. Find the proportion of the amounts consumed by each of the industries. In other words, find the matrix $$A$$.
Solution
We are being asked to determine the following:
How much of the production of each of the three industries, F, C, and T is required to produce one unit of F? In the same way, how much of the production of each of the three industries, F, C, and T is required to produce one unit of C? And finally, how much of the production of each of the three industries, F, C, and T is required to produce one unit of T?
Since we are looking for proportions, we need to divide the production of each industry by the total production for each industry.
We analyze as follows:
To produce 250 units of F, we need to use 40 units of F, 30 units of C, and 20 units of T.
Therefore, to produce 1 unit of F, we need to use 40/250 units of F, 30/250 units of C, and 20/250 units of T.
To produce 220 units of C, we need to use 50 units of F, 40 units of C, and 30 units of T.
Therefore, to produce 1 unit of C, we need to use 50/220 units of F, 40/220 units of C, and 30/220 units of T.
To produce 200 units of T, we need to use 60 units of F, 40 units of C, and 30 units of T.
Therefore, to produce 1 unit of T, we need to use 60/200 units of F, 40/200 units of C, and 30/200 units of T.
We obtain the following matrix.
$\mathrm{A}=\left[\begin{array}{lll} 40 / 250 & 50 / 220 & 60 / 200 \\ 30 / 250 & 40 / 220 & 40 / 200 \\ 20 / 250 & 30 / 220 & 30 / 200 \end{array}\right]=\left[\begin{array}{ccc} .1600 & .2273 & .3000 \\ .1200 & .1818 & .2000 \\ .0800 & .1364 & .1500 \end{array}\right] \nonumber$
Clearly $$AX + D = X$$
$\left[\begin{array}{lll} 40 / 250 & 50 / 220 & 60 / 200 \\ 30 / 250 & 40 / 220 & 40 / 200 \\ 20 / 250 & 30 / 220 & 30 / 200 \end{array}\right]\left[\begin{array}{l} 250 \\ 220 \\ 200 \end{array}\right]+\left[\begin{array}{l} 100 \\ 110 \\ 120 \end{array}\right]=\left[\begin{array}{l} 250 \\ 220 \\ 200 \end{array}\right] \nonumber$
We summarize as follows:
LEONTIEF'S CLOSED MODEL
1. All consumption is within the industries. There is no external demand.
2. Input = Output
3. $$X = AX$$ or $$(I - A)X = 0$$
LEONTIEF'S OPEN MODEL
1. In addition to internal consumption, there is an outside demand by the consumer.
2. Input = Output
3. $$X = AX + D$$ or $$X = (I - A)^{-1} D$$
This page titled 2.6: Applications – Leontief Models is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Rupinder Sekhon and Roberta Bloom via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 2022-09-29 13:42:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072178602218628, "perplexity": 1239.6753413836564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00644.warc.gz"} |
https://quantpie.co.uk/srm/ho_lee_sr.php | ## Ho Lee model
We derive the Ho Lee term structure model formulae.
### Identifying the Model
The short rate under the continuous time version of Ho Lee model has the following dynamics:
$$d r_{t}= \theta_{t} dt + \sigma d w_{t}$$
Where $$\sigma$$ is the volatility of the short rate, and the time dependent drift, $$\theta_{t}$$, is to be determined from the current term structure or the zero coupon bond prices. In the remainder of this sub-section, we use the current zero coupon bond prices to derive formula for $$\theta_{t}$$. The current zero coupon bond price can be represented as:
$$P \left( 0, T\right)=E \left[ e^{-\int_{0}^{T}{r_{t} dt}}\right]$$
Thus we need to solve the equation representing the dynamics of the short rate for $$r_{t}$$, then integrate $$r_{t}$$ over the maturity of the zero coupon bond, and then plug the resulting equation in the above formula. Integrating the short rate equation from 0 to t, we get
$$\int_{0}^{t}{d r_{u}}= \int_{0}^{t}{\theta_{u} du} + \int_{0}^{t}{\sigma d w_{u}}$$ $$r_{t}=r_{0} + \int_{0}^{t}{\theta_{u} du} + \sigma\int_{0}^{t}{d w_{u}}$$
Now integrating the short rate from 0 to T (maturity of the bond), we get
$$\int_{0}^{T}{r_{t} dt}=\int_{0}^{T}{r_{0} dt} + \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+ \sigma\int_{0}^{T}{\int_{0}^{t}{d w_{u}} dt}$$ $$=r_{0} T + \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+ \sigma\int_{0}^{T}{\int_{u}^{T}{dt d w_{u}} }$$ $$=r_{0} T + \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}}$$
Now, to facilitate the derivation of the Bond price formula, we derive the expected value and variance of $$\int_{s}^{T}{r_{t} dt}$$:
$$E \left[ \int_{0}^{T}{r_{t} dt} \right]=E \left[ r_{0} T + \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}} \right]$$ $$=r_{0} T + \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}$$ $$V \left[ \int_{0}^{T}{r_{t} dt} \right]=V \left[ r_{0} T + \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}} \right]$$ $$=V \left[ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}} \right]$$ $$={\sigma}^{2} \int_{0}^{T}{ {\left( T-u \right)}^{2} du}$$ $$= \frac{{\sigma}^{2}{T}^{3}}{3}$$
Now, we use the current bond prices/term structure to derive formula for $$\theta_{t}$$ in terms of current observables:
$$P \left( 0, T\right)=E \left[ e^{-\int_{0}^{T}{r_{t} dt}}\right]= e^{-E \left[ \int_{0}^{T}{r_{t} dt} \right]+\frac{1}{2}V \left[ \int_{0}^{T}{r_{t} dt} \right] }$$ $$=e^{-r_{0} T - \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+{\sigma}^{2} \frac{{T}^{3}}{6}}$$
Taking log of both sides enable us to derive formula for $$\int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}$$ :
$$ln P \left( 0, T\right)=-r_{0} T - \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+{\sigma}^{2} \frac{{T}^{3}}{6}$$ $$\int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}=-ln P \left( 0, T\right)-r_{0} T +\frac{{\sigma}^{2}{T}^{3}}{6}$$
We can now derive explicit expressions for the other terms involving $$\theta$$ :
$$\int_{0}^{T}{\theta_{u} du}=\frac {\partial} {\partial T} \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}$$ $$= \frac {\partial} {\partial T} \left(-ln P \left( 0, T\right)-r_{0} T +\frac{{\sigma}^{2}{T}^{3}}{6} \right)$$ $$= \frac {\partial} {\partial T} \left(-ln P \left( 0, T \right) \right) -r_{0} + \frac{{\sigma}^{2}{T}^{2}}{2}$$ $$= \frac {\partial} {\partial T} \left( \int_{0}^{T}{f \left( 0,u \right) du} \right)-r_{0} + \frac{{\sigma}^{2}{T}^{2}}{2}$$ $$= f \left( 0,T \right)-r_{0} + \frac{{\sigma}^{2}{T}^{2}}{2}$$ $$\theta_{T} =\frac {\partial} {\partial T} \int_{0}^{T}{\theta_{u} du}$$ $$=\frac {\partial} {\partial T} \left( f \left( 0,T \right)-r_{0} + \frac{{\sigma}^{2}{T}^{2}}{2} \right)$$ $$=\frac {\partial} {\partial T} f \left( 0,T \right) + {\sigma}^{2} T$$
Hence, the Ho Lee model can be represented as:
$$d r_{t}= \theta_{t} dt + \sigma d w_{t}=\left( \frac {\partial} {\partial t} f \left( 0,t \right) + {\sigma}^{2} t\right)dt + \sigma d w_{t}$$
The solution of which, given the current value $$r_{0}$$, is,
$$r_{t}=r_{0} + \int_{0}^{t}{\theta_{u} du} + \sigma\int_{0}^{t}{d w_{u}}$$ $$r_{t}=r_{0} + \left( f \left( 0,t \right)-r_{0} + \frac{{\sigma}^{2}{t}^{2}}{2} \right) + \sigma\int_{0}^{t}{d w_{u}}$$ $$r_{t}= f \left( 0,t \right)+ \frac{{\sigma}^{2}{t}^{2}}{2} + \sigma\int_{0}^{t}{d w_{u}}$$
Which is Gaussian with mean and variance given by,
$$E \left[ r_{t} \right]=E \left[ f \left( 0,t \right)+ \frac{{\sigma}^{2}{t}^{2}}{2} + \sigma\int_{0}^{t}{d w_{u}} \right]= f \left( 0,t \right)+ \frac{{\sigma}^{2}{t}^{2}}{2}$$ $$V \left[ r_{t} \right]= V \left[ f \left( 0,t \right)+ \frac{{\sigma}^{2}{t}^{2}}{2} + \sigma\int_{0}^{t}{d w_{u}} \right] = {\sigma}^{2}{t}$$
For completeness, we also show below the solution of the term structure
$$\int_{0}^{T}{r_{t} dt}=r_{0} T + \int_{0}^{T}{\int_{0}^{t}{\theta_{u} du} dt}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}}$$ $$=r_{0} T -ln P \left( 0, T\right)-r_{0} T +\frac{{\sigma}^{2}{T}^{3}}{6}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}}$$ $$=-ln P \left( 0, T\right)+\frac{{\sigma}^{2}{T}^{3}}{6}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}}$$
Which again is Gaussian with mean and variance given by,
$$E \left[ \int_{0}^{T}{r_{t} dt} \right]=E \left[ -ln P \left( 0, T\right)+\frac{{\sigma}^{2}{T}^{3}}{6}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}}\right]= -ln P \left( 0, T\right)+\frac{{\sigma}^{2}{T}^{3}}{6}$$ $$V \left[ \int_{0}^{T}{r_{t} dt} \right]= V \left[ -ln P \left( 0, T\right)+\frac{{\sigma}^{2}{T}^{3}}{6}+ \sigma\int_{0}^{T}{\left( T-u \right)d w_{u}} \right] = {\sigma}^{2}\int_{0}^{T}{{\left( T-u \right)}^{2}du} =\frac{{\sigma}^{2}{T}^{3}}{3}$$
It is easily verified that the current bond prices are recovered:
$$P \left( 0, T\right)=E \left[ e^{-\int_{0}^{T}{r_{t} dt}}\right]= e^{-E \left[ \int_{0}^{T}{r_{t} dt} \right]+\frac{1}{2}V \left[ \int_{0}^{T}{r_{t} dt} \right] }$$ $$=e^{ln P \left( 0, T\right)-\frac{{\sigma}^{2}{T}^{3}}{6} +\frac{{\sigma}^{2}{T}^{3}}{6}}=P \left( 0, T\right)$$ | 2021-07-31 09:48:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9483599066734314, "perplexity": 245.15530795569134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00631.warc.gz"} |
https://thoughtstreams.io/jonaspf/linux-stuff/9311/ | Linux stuff
12 thoughts
last posted Feb. 5, 2016, 10:37 a.m.
5 earlier thoughts
0
Just a quick one because I need this all the time but I keep forgetting it: How to append the current date to a filename.
touch abcd_date +%F
ls abcd*
> abcd_2015-09-17
or if you are using fish:
touch abcd_(date +%F)
6 later thoughts | 2021-03-08 21:26:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48995280265808105, "perplexity": 12550.401166932039}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385529.97/warc/CC-MAIN-20210308205020-20210308235020-00158.warc.gz"} |
https://hal.inria.fr/hal-01088664 | # The complexity of finding arc-disjoint branching flows
2 COATI - Combinatorics, Optimization and Algorithms for Telecommunications
CRISAM - Inria Sophia Antipolis - Méditerranée , COMRED - COMmunications, Réseaux, systèmes Embarqués et Distribués
Abstract : The concept of arc-disjoint flows in networks was recently introduced in \cite{bangTCSflow}. This is a very general framework within which many well-known and important problems can be formulated. In particular, the existence of arc-disjoint branching flows, that is, flows which send one unit of flow from a given source $s$ to all other vertices, generalizes the concept of arc-disjoint out-branchings (spanning out-trees) in a digraph. A pair of out-branchings $B_{s,1}^+,B_{s,2}^+$ from a root $s$ in a digraph $D=(V,A)$ on $n$ vertices corresponds to arc-disjoint branching flows $x_1,x_2$ (the arcs carrying flow in $x_i$ are those used in $B_{s,i}^+$, $i=1,2$) in the network that we obtain from $D$ by giving all arcs capacity $n-1$.It is then a natural question to ask how much we can lower the capacities on the arcs and still have, say, two arc-disjoint branching flows from the given root $s$.We prove that for every fixed integer $k \geq 2$ it is\begin{itemize}\item an NP-complete problem to decide whether a network ${\cal N}=(V,A,u)$ where $u_{ij}=k$ for every arc $ij$ has two arc-disjoint branching flows rooted at $s$.\item a polynomial problem to decide whether a network ${\cal N}=(V,A,u)$ on $n$ vertices and $u_{ij}=n-k$ for every arc $ij$ has two arc-disjoint branching flows rooted at $s$.\end{itemize}The algorithm for the later result generalizes the polynomial algorithm, due to Lov\'asz, for deciding whether a given input digraph has two arc-disjoint out-branchings rooted at a given vertex.Finally we prove that under the so-called Exponential Time Hypothesis (ETH), for every $\epsilon{}>0$ and for every $k(n)$ with $(\log{}(n))^{1+\epsilon}\leq k(n)\leq \frac{n}{2}$ (and for every large $i$ we have $k(n)=i$ for some $n$) there is no polynomial algorithm for deciding whether a given digraph contains twoarc-disjoint branching flows from the same root so that no arc carries flow larger than $n-k(n)$.
Keywords :
Type de document :
Rapport
[Research Report] RR-8640, INRIA Sophia Antipolis; INRIA. 2014
Domaine :
Littérature citée [12 références]
https://hal.inria.fr/hal-01088664
Contributeur : Frederic Havet <>
Soumis le : vendredi 28 novembre 2014 - 13:32:26
Dernière modification le : mercredi 31 janvier 2018 - 10:24:05
Document(s) archivé(s) le : vendredi 14 avril 2017 - 23:01:46
### Fichier
RR-8640.pdf
Fichiers produits par l'(les) auteur(s)
### Identifiants
• HAL Id : hal-01088664, version 1
### Citation
Joergen Bang-Jensen, Frédéric Havet, Anders Yeo. The complexity of finding arc-disjoint branching flows. [Research Report] RR-8640, INRIA Sophia Antipolis; INRIA. 2014. 〈hal-01088664〉
### Métriques
Consultations de la notice
## 298
Téléchargements de fichiers | 2018-07-21 06:35:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7376180291175842, "perplexity": 1887.0432445477031}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592387.80/warc/CC-MAIN-20180721051500-20180721071500-00462.warc.gz"} |
https://byjus.com/physics/dependence-of-potential-difference-across-a-resistor-on-current-with-graph/ | # Studying the dependence of potential difference (V) across a resistor on the current (I) passing through it and determine its resistance. Also plotting a graph between V and I
Potential difference is defined as the work done to move a unit charge from one point to the other. The SI unit of potential difference is volt. Electromotive force is defined as the force that distributes the equilibrium of free-flowing electrons. Below is an experiment to study the dependence of the potential difference across a resistor with current carrying I.
## Aim
To study the dependence of potential difference (V) across a resistor on the current (I) passing through it and determine its resistance. Also, plot a graph between V and I.
## Theory
### What is Ohm’s law?
In an electric circuit, the potential difference V across the metallic wire is directly proportional to the current flowing through the circuit with a constant temperature. This is known as Ohm’s law.
V∝I. ∴ V=IR.
### What are the factors affecting resistance?
Following are the factors affecting resistance:
• The nature of the resistor.
• With an increase in length, the resistance also increases. So length also affects the resistance.
• With an increase in cross-sectional area, the resistance decreases. So cross-sectional area of the wire affects the resistance.
## Materials Required
Following is the list of materials required for this experiment:
1. A battery
2. An insulated copper wire
3. A key
4. An ammeter
5. A voltmeter
6. A rheostat
7. A resistor
8. A piece of sandpaper
## Procedure
1. Arrange the devices as shown in the circuit diagram.
2. Connect the devices with the connecting wires keeping the key open.
3. The positive terminal of the battery should be connected to the positive terminal of the ammeter.
4. Before connecting the voltmeter in the circuit, check for +ve and -ve terminals.
5. Check for ammeter and voltmeter reading once the circuit is connected and also adjust the slider of rheostat after inserting the key.
6. For current I and voltmeter V, record three different readings using a slider.
7. Record the observations in the observation table.
8. Using the formula R=V/I, calculate the resistance.
9. To plot the graph between V and I, take V on the x-axis and I on the y-axis.
10. For pure metals, resistance increases with increase in temperature.
## Observation Table
### i) Least count of ammeter and voltmeter
Sl.no Ammeter (A) Voltmeter (V) 1 Range 0-0.5 A 0-0.1 V 2 Least count 0.01 A 0.01 V 3 Zero error (e) 0 0 4 Zero correction 0 0
### ii) For the reading of ammeter and voltmeter
Sl.no Current in Ampere (I) (ammeter reading) Potential difference in volts (V) (voltmeter reading) Resistance in ohms R = V/I (Ω) Observed Corrected Observed Corrected 1 0 0.02 0 0.04 R1= 2Ω 2 0 0.03 0 0.06 R2= 2Ω 3 0 0.04 0 0.08 R3= 2Ω
## Conclusions
1. For all the three readings, the R value is the same and constant.
2. The ratio of potential difference V and current I is the resistance of a resistor.
3. With the help of the graph between V and I, Ohm’s law is verified as the plot is a straight line.
## Precautions
1. Thick copper wires are used as connecting wires and using sandpaper, their insulation is removed.
2. To avoid external resistance, the connections should be tight.
3. The connections should be as per the circuit diagram and should be approved by the teacher before conducting the experiment.
4. The current should enter from the positive terminal and exit from the negative terminal of the ammeter and should be connected in series with the resistor.
5. Resistor and voltmeter should be connected in parallel.
6. The least count of ammeter and voltmeter should be recorded properly.
7. When there is no current flow, the pointers of ammeter and voltmeter should be at zero.
8. To avoid unnecessary heating in the circuit, the current should be passed for a short time.
## Viva Questions
Q1. Define electric current.
Ans: Electric current is defined as the rate of flow of electric charge in a conductor.
$I=\frac{Q}{t}$
Where,
• I is the current in amperes
• Q is the electric charge in coulombs
• t is the time in seconds
Q2. What is the value of charge in 1 electron?
Ans: The value of charge in 1 electron is 1.6×10-19C.
Q3. What is coulomb?
Ans: Coulomb is an SI unit of electric charge and is defined as the amount of charge present in 6.25×1018 electrons.
Q4. What is 1 volt?
Ans: 1 volt is defined as the work done in 1 Joule to move a charge of 1 Coulomb from one point to the other then the potential difference is 1 volt.
Q5. What is 1 ohm?
Ans: 1 ohm is defined as the resistance offered by an object when one ampere current flows through an object with a potential difference of one volt. | 2019-09-20 22:43:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5764375925064087, "perplexity": 969.6624733360208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574084.88/warc/CC-MAIN-20190920221241-20190921003241-00074.warc.gz"} |
http://www.elekcompute.com/threephasepower | Electrical Systems
Three Phase Power
Phase Voltages
Phase Currents
$\mathrm{dummy text dummy text}$
Phase Voltages
Phase Currents
Line Currents
$\mathrm{dummy text dummy text}$
$\text{}$
In three phase circuit the source as well as load can both be connected either in star or delta. Here the source is star connected. The delta connected source is rarely used in practice.
In star connected load the voltage across the impedance Z is phase voltage and the current flowing through Z is the line current.
In delta connected load the voltage across the impedance Z is line voltage and the line current is √3 times the load current through Z. The load current in delta load leads the line current by 30°.
Whether the load or source are Star or Delta connected
Vl and Il are values of line voltage and current | 2019-04-19 12:54:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5758684277534485, "perplexity": 2425.032263692483}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578527720.37/warc/CC-MAIN-20190419121234-20190419143234-00510.warc.gz"} |
https://web2.0calc.com/questions/help_18749 | +0
# help
0
34
1
If a is not equal to 0, and if x + y = a and x^3 + y^3 = b, write an equation expressing x^2 + y^2 explicitly in terms of a and b.
Dec 8, 2019
#1
0
x+y = a (1)
$$x^3+y^3=b$$ (2)
$$x^3+y^3=(x+y)(x^2-xy+y^2)$$ (4)
$$(x+y)(x^2-xy+y^2)=b$$ (3)
$$\frac{b}{(x+y)}=(x^2-xy+y^2)$$
From (1) we know that x+y=a
so
$$\frac{b}{a}=x^2-xy+y^2$$ Expressed in terms of a and b as the question asks. and a can't be 0 as given.
$$x^2+y^2=\frac{b}{a}+xy$$ Expressed in terms of $$x^2+y^2$$
Notice that
$$(x+y)(x^2-xy+y^2)$$
we know $$x^2+y^2=\frac{b}{a}+xy$$
we can subsituite
we also know (x+y)=a
so
$$a*(\frac{b}{a}+xy-xy)$$
Simplify the brackets
$$a*(\frac{b}{a})$$
which is equal to b ,=x^3+y^3 as given.
So this identity is correct that x^2+y^2=b/a+xy
Dec 8, 2019 | 2020-01-20 04:21:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7188703417778015, "perplexity": 852.7419024145862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597230.18/warc/CC-MAIN-20200120023523-20200120051523-00080.warc.gz"} |
https://hal.in2p3.fr/in2p3-00202683 | # $J/\psi$ Production in $\sqrt{s_{NN}}$= 200 GeV Cu+Cu Collisions
Abstract : Yields for $/\psi$ production in Cu+Cu collisions at sqrt (s_NN)= 200 GeV have been measured by the PHENIX experiment over the rapidity range |y| < 2.2 at transverse momenta from 0 to beyond 5 GeV/c. The invariant yield is obtained as a function of rapidity, transverse momentum and collision centrality, and compared with results in p+p and Au+Au collisions at the same energy. The Cu+Cu data provide greatly improved precision over existing Au+Au data for $/\J/psi$ production in collisions with small to intermediate numbers of participants, providing a key constraint that is needed for disentangling cold and hot nuclear matter effects.
Document type :
Journal articles
http://hal.in2p3.fr/in2p3-00202683
Contributor : Suzanne Robert Connect in order to contact the contributor
Submitted on : Monday, January 7, 2008 - 5:06:16 PM
Last modification on : Sunday, June 26, 2022 - 11:47:50 AM
### Citation
A. Adare, S. Afanasiev, C. Aidala, N.N. Ajitanand, Y. Akiba, et al.. $J/\psi$ Production in $\sqrt{s_{NN}}$= 200 GeV Cu+Cu Collisions. Physical Review Letters, American Physical Society, 2008, 101, pp.122301. ⟨10.1103/PhysRevLett.101.122301⟩. ⟨in2p3-00202683⟩
Record views | 2022-08-11 01:40:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.62956303358078, "perplexity": 8561.307751399487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571232.43/warc/CC-MAIN-20220811012302-20220811042302-00628.warc.gz"} |
https://stats.stackexchange.com/questions/114368/factor-analysis-with-repeated-measures | # Factor analysis with repeated measures
Multilevel factor analysis seems to be the technical term for factor analysis with repeated measures, judging from this abstract. To be precise, following Wikipedia's factor analysis notation, the model I want to build is
$$x_i =l_{i1} F_1 + \cdots + l_{ik} F_k + z_i + \varepsilon_i$$
where $x_i$ is the $i$th observed variable (already centered and scaled, say), an $n\times 1$ vector. The thing that makes this model different from ordinary factor analysis is the presence of the $n\times 1$ vector $z_i$ on the right-hand side; this is a vector of fixed or random effects that correspond to the repeated measures. Specifically, $z_{i(p)} = z_{i(q)}$ whenever the $p$th and $q$th records come from the same individual.
Multiple queries similar to this one exist (here and here). This question is only slightly more general while hopefully also more expository:
(A) Where can I find a publicly available and detailed description of multilevel factor analysis?
(B) What software exists to do multilevel factor analysis in a pretty straightforward way? Solutions involving R, SAS, Python, or Latent GOLD are of particular interest.
• Given your explanation, in what respect does it differ from performing the standard factoring on the variables $x$, each averaged across the RM-levels (that is, inside each individual)? If the "individual bias" factor $z_i$ is modelled $x_i$-specific and independent of the common factors $F$ - and it is how it appears in your model - then $z_i$ can be safely assessed and cancelled before factor analysis. Sep 5, 2014 at 9:07
• @ttnphns, you seem to understand the model. Your proposal is to estimate $z_i$ and subtract it from both sides of the equation before doing factor analysis, right? But $z_i$ is latent -- how do you propose to estimate $z_i$? In particular, it seems like the estimates would depend on the factor analysis fitted values. Sep 5, 2014 at 12:56
• In my comment I said that in your model formulation - as I understood it - $z_i$ does not appear to be latent. And why should it? In FA, latent are factors common to different $x$'s. With your model, $z_i$ is $x_i$-specific and does not interact with the factors. So why not get rid of $z_i$ prior FA? Sep 5, 2014 at 13:16
• It's easy to imagine situations in which $z_i$ is observable, but in my applications it is not. Sep 5, 2014 at 21:40
I don't know if R (lavaan or OpenMx packages) or Stata (glamm or built-in tools) have such capabilities. | 2022-08-10 09:10:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7430739402770996, "perplexity": 737.6629345348209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571150.88/warc/CC-MAIN-20220810070501-20220810100501-00176.warc.gz"} |
https://math.stackexchange.com/questions/1155757/classes-of-measures-that-are-closed-under-multiplication | # Classes of measures that are closed under multiplication
Consider the space $\mathcal M$ of all finite complex Borel measures on a segment with norm $\|\mu\|=\int d\,|\mu|$. Assume that a norm-closed linear subspace $\mathcal M_0$ of $\mathcal M$ has the following property:
if $\mu\in\mathcal M_0$ and $f\in L^1(|\mu|)$, then $f\mu\in \mathcal M_0$.
Can $\mathcal M_0$ be characterized as the family of all measures that vanish on a certain fixed collection of subsets of the segment?
Example: the class of all measures $\mu$ such that $d\mu=f\,dx$ for some $f\in L^1$ coincides with the class of measures that vanish on all subsets of zero Lebesgue measure.
Update: For any fixed measure $\mu$, one can define $\mathcal M_0$ as the class of all measures that are absolutely continuous wrt $\mu$. Then the class of subsets in question is the class of subsets of zero $\mu$-measure.
• Fix the interval to be $(0,1)$, is the standard Lebesgue measure in the closed linear span of $\delta_x$ where $x$ ranges over all of $(0,1)$? – Willie Wong Feb 19 '15 at 17:10
• I have changed the order of words, that's what I meant -ok? – limanac Feb 19 '15 at 17:18
• Nope. The closed linear span of all Dirac measures coincides with the set of all discrete measures. – limanac Feb 19 '15 at 17:20
• @Willie Wong: nothing guarantees this; as far as In know, this is not true, and that is why I think that the answer should be negative. – limanac Feb 19 '15 at 18:07
• Sorry for the wrong answer (which I deleted). Another try: If $\mathcal M_0$ is the space of all discrete measures then $\emptyset$ is the the only set so that each $\mu \in \mathcal M_0$ vanishes on it. Am I missing something? – Jochen Feb 20 '15 at 7:50
## 1 Answer
Let $\mathcal M_0$ be the space of all discrete measures $\mu$ (i.e. $|\mu|(A)=0$ for some $A$ with countable complement). This is a norm-closed subspace satisfying the property. But $\emptyset$ is the only set on which every discrete measure vanishes.
• @limanac The comment is no longer relevan and should be deleted. – Jochen Feb 20 '15 at 8:18
• That was because you undeleted your old answer and my comment appeared automatically. Done. – limanac Feb 20 '15 at 8:41 | 2019-10-20 03:09:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8502271771430969, "perplexity": 234.68210721553714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986702077.71/warc/CC-MAIN-20191020024805-20191020052305-00069.warc.gz"} |
https://stats.stackexchange.com/questions/377290/principal-components-can-i-interpret-pca-as-essentially-a-change-of-basis?noredirect=1 | # Principal components: Can I interpret PCA as essentially a change of basis
I was hoping that someone could simply validate or correct my interpretation of Principal Components Analysis. There are a lot of questions on this site about Principal Components analysis--some listed below. But none of these expressed PCA in the way I refer to below, so I wanted to check if my interpretation was correct or incorrect.
Relationship between SVD and PCA. How to use SVD to perform PCA?
Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal?
I understand how principal components works as a linear dimensional reduction method, using the spectral decomposition of some matrix $$X$$. It was wondering if it is appropriate to interpret PCA as simply a change of basis, as we might do in some other situations in linear algebra? So PCA simply takes points expressed in the standard basis and transforms them into points expressed in an eigenvector basis. In this process of transformation, some dimensions with low variance are discarded and hence the resulting dimensional reduction.
Of course the process of finding the eigenvector basis uses numerical optimization, so the resulting projection of the original points into the eigenvector basis will produce some "wiggle" in the points, but otherwise the distances between the points are left unchanged. I am thinking in terms of purely classical PCA here, and not some of the PCA variants where regularization imposes some delibrate properties on the resulting reduced rank projection matrix.
Thanks.
• Yes. $\phantom{}$ – amoeba Nov 16 '18 at 8:35
• @amoeba thanks so much. Yes the validation just helps me to know I am on the right track. – krishnab Nov 16 '18 at 16:26
• @amoeba so to clarify -- the distances in the original/standard basis remain approximately unchanged...? Can we say anything about the distances between the points in the transformed/new basis if we kept all the eigenvector basis (e.g. like relative distances remain the same, or something interesting like that)? – JPJ Sep 8 '19 at 4:20
• @JPJ The distances are unaffected by a change of basis. They all remain the same. – amoeba Sep 8 '19 at 13:13 | 2020-07-08 10:43:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7935726046562195, "perplexity": 508.5565259475691}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896932.38/warc/CC-MAIN-20200708093606-20200708123606-00075.warc.gz"} |
https://www.varsitytutors.com/ap_physics_c_mechanics-help/understanding-conservation-of-energy | # AP Physics C: Mechanics : Understanding Conservation of Energy
## Example Questions
### Example Question #1 : Work, Energy, And Power
A 0.8kg ball is dropped from rest from a cliff that is 150m high. Use conservation of energy to find the vertical velocity of the ball right before it hits the bottom of the cliff.
Explanation:
The conservation of energy equation is .
The ball starts from rest so . It starts at a height of 150m, so . When the ball reaches the bottom, height is zero and thus, and . The conservation of energy equation can be adjusted below.
Solve for v.
### Example Question #121 : Mechanics Exam
Starting from rest, a skateboarder travels down a 25o incline that's 22m long. Using conservation of energy, calculate the skateboarder's speed when he reaches the bottom. Ignore friction.
Explanation:
Conservation of energy states that .
The skateboarder starts from rest; thus, and . At the bottom of the incline, and .
Solve for v.
Using trigonometry, .
### Example Question #1 : Energy
bowling ball is dropped from above the ground. What will its velocity be when it is above the ground?
Explanation:
Relevant equations:
Determine initial kinetic and potential energies when the ball is dropped.
Determine final kinetic and potential energies, when the ball has fallen to above the ground.
Use conservation of energy to equate initial and final energy sums.
Solve for the final velocity.
### Example Question #130 : Mechanics Exam
A solid metal object with mass of is dropped from rest at the surface of a lake that is deep. The water exerts a drag force on the object as it sinks. If the total work done by the drag force is -, what is the speed of the object when it hits the sand at the bottom of the lake?
Explanation:
This is a conservation of energy problem. First we have to find the work done by gravity. This can be found using:
It is given to us that the work done by the drag force is which means that work is done in the opposite direction. We take the net work by adding the two works together we get, of net work done on the block.
Since this is a conservation of energy problem, we set the net work equal to the kinetic energy equation:
is the mass of the block and we are trying to solve for .
### Example Question #131 : Mechanics Exam
If a roller coaster car is traveling at when it is above the ground, how fast is it going when above the ground?
Explanation:
This is a classic conservation of energy problem. We know that potential energy and kinetic energy both have to conserve. So we use the following equation:
What this equations says is, the sum of kinetic and potential energy is same at varying heights and velocities.
We can simplify this equation by cancelling out all the m terms.
We know all the terms except for , which is the final speed we are trying to solve for , which is , is and is .
If we plug in all the numbers and solve for , we get . | 2017-10-20 21:57:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8071057200431824, "perplexity": 665.0230063809896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824357.3/warc/CC-MAIN-20171020211313-20171020231313-00383.warc.gz"} |
https://perso.univ-rennes1.fr/michel.coste/Borel/realgeomabs.html | # TRIMESTER ON REAL GEOMETRY September 12th – December 16th, 2005
## Abstracts of the courses
• F. Acquistapace (September 14th - October 13th, 8 lectures): Around Hilbert's 17th problem for analytic functions
In this course we will study several properties of the ring of analytic functions on a real analytic manifold . We will also study important properties of the sets which are defined using the ring of analytic functions on .
We will discuss several problems for those global functions and sets, which are classical for semialgebraic sets and regular functions, namely:
• Hilbert Problem: Let for any . Is a sum of squares of meromorphic functions?
• Nullstellensatz: Given an ideal , how can we caracterize the ideal ?
• Positivstellensatz: Given , how can we caracterize the set of functions which are on the set ?
• Closure: Is the closure of a global analytic set still a global semianalytic set?
• Connected components: Is a union of connected components of a global semianalytic set still a global semianalytic set?
• Finiteness: Let be a global open (resp. closed) semianalytic set. Can be described as a finite union of basic open (resp. basic closed) global semianalytic sets?
Here basic open (resp. basic closed) sets means sets of the form (resp.), where each .
We point out that almost all these problems are still open in the general case, for dimension .
We will start by showing the relations between the solution of the Hilbert problem and the set of orderings of the field of fractions or of the ring of functions . The crucial point here is the so called Artin-Lang Property which relates, roughly speaking, the sets which are definable by , that is, the global semianalytic sets, with the constructible sets of orderings of .
We will see that this property holds always for dimension and for a compact manifold. As an application, we will show how to solve Hilbert problem and to prove that the closure and the connected components of a global semianalytic set in a smooth analytic surface are still global semianalytic sets.
The Artin-Lang Property is not known for non compact manifolds of dimension . However, one can solve, for instance, the finiteness problem using more traditional tools, as sheaf theory, Cartan's Theorem B, Whitney's approximation theorem. For the sake of the audience, we will recall briefly these classic technics.
At the end of the course,we will present some (partial) recent results related to the Hilbert Problem, the Nullstellensatz and the Positivestellensatz, some of which are still work in progress.
• S. Basu (November 16th - 25th, 4 lectures): Efficient Algorithms for Computing the Betti Numbers of Semi-algebraic Sets.
Computing homological information of semi-algebraic sets (or more generally constructible sets) is an important problem for several reasons. From the point of view of computational complexity theory, it is the next logical step after the problem of deciding emptiness of such sets, which is the signature NP-complete problem in appropriate models of computation.
In this course I will describe some recent progress in designing efficient algorithms for computing the Betti numbers of semi-algebraic sets in several different settings. I will describe a single exponential time algorithm for computing the first few Betti numbers in the general case and polynomial time algorithms in case the set is defined in terms of quadratic inequalities. One common theme underlying these algorithms is the use of certain spectral sequences -- namely, the Mayer-Vietoris spectral sequence and the cohomological descent'' spectral sequence first
introduced by Deligne.
Certain parts of this work is joint with R. Pollack, M-F. Roy and (separately) with T. Zell.)
• F. Catanese (November 10th - 29th, 8 lectures) : Deformation types of real algebraic functions and manifolds
This course will focus on the complex point of view in the investigation of real algebraic varieties. This means that for instance the real manifolds are viewed as the real part of a compact complex manifold, often projective or Kaehler, endowed with an antiholomorphic involution. And the point of view consists in (instead of forgetting the non real part) trying to see how the real part embeds into the complex manifold.
A first example of the advantage of taking this point of view emerged historically even when dealing with maximal real polynomials in 1 variable: C. Davis proved with elementary real analysis that, given (n-1) real critical values, there is a real polynomial of degree n with these critical values. Later Thom observed how these kind of results are a consequence of Riemann's existence theorem, which sheds light more generally on the classification of real algebraic functions. In fact, once the real critical value sets are fixed, real algebraic functions are then seen just as monodromies which are invariant by complex conjugation on the complex line. I will focus here on the least complicated examples, such as the counting of Arnold's so-called "snake sequences", leading to an easy generating function.
Another very easy concept which will be introduced pretty soon is the concept of the real (orbifold)-fundamental group of a real manifold: this group contains the complex fundamental group as a subgroup of index two, and it splits as a semidirect product if there are real points (observe that from the complex point of view also the empty set is interesting, as one can easily argue considering the mathematical charm of the Klein bottle). The use of this concept will be illustrated by the first important issue, which is the treatment of real structures on complex tori. Particular enphasis will be set on studying real elliptic curves and their deformations. This leads to an easy but important understanding of the genus 1 case of the relation between moduli spaces of real curves, and the real part of the moduli space of curves. This relation will then also be treated, even if sketchily, for the higher genus case. The topic of Nielsen Teichmueller realization, treated here, will show up and prove quite important also later.
Turning to higher dimensions, and in the Kaehler case, Hodge theory plays an important role, as was shown by Kharlamov in the 70's, in order to give topological estimates for Betti numbers of real manifolds.
Another simple application of Hodge theory concerns the existence of irrational pencils, a theme which in turns leads to surfaces isogenous to a product and to those among them which are rigid: the so-called Beauville surfaces. These surfaces are rather important because they provide interesting examples of complex surfaces which do not admit real structures, yet can be isomorphic to their complex conjugate surface. The question of reality or non reality of rigid surfaces is not yet fully investigated: I will also briefly discuss the interest of such surfaces concerning the action of the absolute Galois group, in close relation to Grothendiecks' program called 'Dessins d' enfants'.
I will then briefly outline the Enriques-Kodaira classification of complex algebraic surfaces, and the Kodaira classification of elliptic fibrations. I will then review the status of the Enriques-Kodaira classification of real algebraic surfaces, through the contributions of several authors, and with diverse methods. In particular, I will try to introduce to some important open problems about real surfaces of special type. One such instance is the notion of quasi-simplicity considered by Degtyarev, Itenberg and Kharlamov , which revolves around the problem of comparing deformation and differentiable equivalence for real structures on complex algebraic surfaces.
• G. Comte (October 4th - 26th, 8 lectures): Propriétés métriques en géométrie modérée
Measures on Grassmannians,
Hausdorff and entropy dimensions,
Vitushkin variations and Lipschitz-Killing curvatures,
Critical locus and entropy,
Semialgebraic complexity of applications,
Quantitative Sard theorems,
Localizations of invariants of integral geometry in tame geometry,
Conormal cone, normal cone and regularity conditions,
Variations of local invariants along regular strata.
• M. Coste (September 13th - 28th, 8 lectures) : Ensembles constructibles en géométrie réelle ; géométrie modérée
I shall give in this course the basics of semialgebraic geometry and tame geometry (o-minimal structures) for students who are not familiar with these topics, in order that they can follow more specialized courses of the trimester. I shall review the cylindrical decomposition into cells, stratifications, triangulation, the local conic structure, Hardt's trivialization theorem, growth properties of semialgebraic functions. I shall insist on uniformity results for semialgebraic (or definable) families.
• M. Dickmann (September 13th - October 24th, 8 lectures) : Théorie des modèles en algèbre et géométrie
Notions et results of model theory necessary for the proof of quantifier elimination for real closed fields and related results.
Application of these tools to:
• the structure of semialgebraic sets and functions,
• the real Nullstellensatz and the Artin-Lang theorem,
• simple zeros and positivity criteria for polynomials,
• topology of semialgebraic sets: cellular decomposition, dimension, Thom's lemma, triviality of semialgebraic families,
• theory of the real spectrum; real spectrum of varieties,
• continuous semialgebraic functions and Nash functions, Lojasiewicz inequality, Artin's approximation theorem,
• commutative algebra of Nash functions.
• L. van den Dries (November 28th - December 14th, 8 lectures): Asymptotic differential algebra
This is differential algebra in a setting where there is an ordering and valuation compatible with the derivation, as in Hardy fields, fields of transseries and H-fields.
• A. Gabrielov (October 20th - 28th, 4 lectures): Pfaffian functions and sparsity. Real Schubert Calculus and the B. and M. Shapiro Conjecture
• Lecture 1. Complexity of computations with Pfaffian functions
Pfaffian functions introduced by Khovanskii in 1970-ies have polynomial-like global finiteness properties. These functions satisfy a triangular system of Pfaffian (first order partial differential) equations with polynomial coefficients. Khovanskii-Bezout theorem gives an upper bound on the number of real solutions to systems of Pfaffian equations in terms of the number of Pfaffian equations and the degrees of polynomials in the definition of Pfaffian functions. Important special cases of Pfaffian functions are fewnomials, i.e., polynomials with few nonzero monomials. Complexity of fewnomials as Pfaffian functions depends on the number of monomials, not on their degrees.
A review of the upper bounds on the complexity of computations with Pfaffian functions and semi-Pfaffian sets will be given, with the special attention to fewnomial semialgebraic sets.
• Lecture 2. Betti numbers of sets defined by formulas with quantifiers
A spectral sequence associated with a surjective mapping allows one to provide upper bounds for the Betti numbers of a wide class of sets defined by formulas with quantifiers in terms of the Betti numbers of auxiliary sets defined by quantifier-free formulas. In the ordinary semialgebraic case, this approach provides better upper bounds than the quantifier elimination, especially for the sparse polynomials. It works also for Pfaffian functions.
• Lecture 3. Rational functions with real critical points, the Catalan numbers, and the Schubert calculus
How many rational functions of degree have a given set of points as their set of critical points? If we identify functions that differ by a fractional-linear transformation in the target space (all such functions have the same critical points) the answer is , the Catalan number.
This is equivalent to a problem in the Schubert calculus: How many codimension 2 affine subspaces in intersect affine lines tangent to the rational normal curve at distinct points? The answer (Schubert, 1886) is , the Catalan number.
Theorem. Suppose that all points are real. Then all equivalence classes of rational functions with these critical points are real (contain real functions).
The corresponding result in the Schubert calculus is a special case of the B. and M. Shapiro conjecture.
More general problem: Given non-overlapping real segments , how many rational functions of degree satisfy for all ?
The answer is again , the Catalan number, and all these functions (up to equivalence) are real.
In the Schubert calculus problem, one should replace tangents to the rational normal curve by the secants through and . These secants become tangents when the segments contract to points.
If the segments do overlap, not all solutions are real, but there are lower bounds on the number of real solutions. These results are related to the B. and M. Shapiro conjecture for flag varieties.
• Lecture 4. Degree of the real Wronski map and the pole placement problem in control theory.
The Wronski map associates to a -tuple of polynomials of degree their Wronski determinant, a polynomial of degree . If the polynomials are linearly independent, they define a a point in the Grassmannian . Accordingly, the Wronski map can be considered as a map from to the projective space . The map is finite, and one can define its degree. In the complex case, this degree equals the number of standard Young tableaux for the rectangular -shape. In the real case, Young tableax should be counted with the signs depending on the number of inversions. Degree of the real Wronski map is zero when is even, and equals the number of standard shifted Young tableaux for an appropriately defined shifted shape when is odd. When both and are even, the Wronski map is not surjective. These results have important applications to real Schubert Calculus and to the pole placement problem in control theory. In particular, non-surjectivity of the real Wronski map provides examples of linear systems with static output feedback for which the pole placement problem has no real solutions.
• L. González-Vega (October 20th - 26th, 4 lectures): Using real algebraic geometry to improve curve and surface algorithms in computer aided geometric design applications
By using the surface--to--surface intersection problem in Computer-Aided Geometric Design as motivation, it will be shown how techniques from Real Algebraic Geometry can be very helpful in order to improve in practice the computation of the intersection curve between two surfaces in 3D space.
The topics covered by the four lectures will be the following:
• Lecture 1: How curves and surfaces are represented and manipulated in Computer-Aided Geometric Design?
• Lecture 2: Solving intersection problems: dealing with real algebraic plane curves or surfaces implicitely defined.
• Lecture 3: Computing with offsets: detecting geometric extraneous components.
• Lecture 4: Interference characterization between conics and quadrics through Quantifier Elimination.
• S. Kuhlmann (September 26th - October 17th, 8 lectures) : Polynômes positifs et problèmes de moments
The -moment problem originates in Functional Analysis: for a linear functional on , one studies the problem of representing via integration. That is, one asks whether there exists a measure , on Euclidean space , supported by some given (basic closed semi-algebraic) subset of , such that for every we have . Via Haviland's Theorem, the -moment problem is closely connected to the problem of representing positive (semi)definite polynomials on (Hilbert's 17th Problem, Positivstellensatz). In his solution of the Moment Problem for compact , Schmüdgen (1991) exploits this connection, and proves that a surprisingly strong version of the Positivstellensatz holds in the compact case. Schmüdgen's result provides a strong motivation to study refined versions of the Positivstellensatz. Following rapidly on his work, several generalizations of his results were worked out. (See F. Acquistapace's course for related topics in the context of analytic functions). The aim of this course is to provide the beginning student with a brief account of these developments. More precisely, we plan to cover the following topics:
• Hilbert's 17th Problem,
• Stengle's Positivstellensatz,
• Schmüdgen's Positivstellensatz,
• applications to the multi-dimensional -moment problem,
• the moment problem for subsets of the real line,
• extending Schmüdgen's Theorem to non-compact sets.
If time permits, we will discuss a version of the -moment problem, when is assumed to be invariant under the action of a group . Throughout the course, we shall present open problems.
• A. Macintyre (October 17th - November 15th, 8 lectures): Model theory of elliptic functions
I will consider model-theoretic questions for the various structures (T,f), where T is a complex torus of dimension 1 and f is a corresponding Weierstrass elliptic function. Model-completeness results will be proved for each individual case, in a formulation where we consider the f as given by its real and imaginary parts, which enables us to bring into play various techniques from o-minimality. The attempt to get a uniform model-completeness is till under way, and seems to be of great difficulty and considerable interest. By work of Peterzil and Strachenko, the uniform theory is known to be o-minimal, because (nontrivially) interpretable in the theory of the restricted analytics and the global real exponential. I will discuss the fine detail of this,and lay out the obstructions to getting uniform model-completeness (these have to do with Manin's work, and differential equations for the periods).I go on to show how certain cases (T,f) are even decidable, modulo a conjecture of Andre on 1-motives and transcendence. I will explain the connection to the Wilkie-Macintyre proof of decidability for the real exponential using Schanuel's Conjecture.Finally, I will discuss what is presently known for related functions, e.g the zeta and sigma functions.
• L. Mahé (November 10th - December 16th, 12 lectures) : Formes quadratiques et géométrie réelle
Real algebraic (and semialgebraic) geometry deals with sets of real solutions of polynomial equations (and inequalities), and the algebra underlying this geometry is the one of sums of squares, or more generally of quadratic forms. Thus the latter turn out to play a fundamental role. After some basics of semialgebraic geometry (real closed fields, semialgebraic sets), real algebra (Nullstellensatz and Positivstellensatz, real spectrum), and quadratic forms (Pfister forms, Witt rings), the course will illustrate the interactions between geometry and quadratic forms with the study of the following three problems:
• bounding the number of squares intervening in sums of squares,
• bounding the number of inequalities needed to describe a semialgebraic set
• separating connected components of varieties by signatures of quadratic forms.
Bibliography:
1. J. BOCHNAK, M. COSTE, M.-F. ROY : Géométrie algébrique réelle, Springer (1987)
2. C. ANDRADAS, L. BRÔCKER, J. RUIZ, Constructible sets in real geometry, Springer (1996)
3. T.-Y. LAM, The algebraic theory of quadratic forms, Reading, Benjamin (1973)
4. W. SCHARLAU, Quadratic and hermitian forms, Springer (1985)
• C. McCrory (September 28th - October 28th, 16 lectures) : Invariants and singularities
In the first half of the course I will discuss Akbulut-King invariants, which are local topological obstructions for a semialgebraic set to be homeomorphic to a real algebraic set. I'll present work with Adam Parusinski defining these invariants using operators on the ring of constructible functions. as well as related work by Coste-Kurdyka and Akbulut-King. The second half of the course will be on virtual Betti numbers, which are global invariants of real algebraic varieties that Parusinski and I introduced. I'll explain their relation to the weight filtration of Totaro and to the work of Bittner and Guillen-Navarro. Throughout the course I will emphasize open problems.
• G. Mikhalkin (September 23rd - December 16th, 16 lectures) : Amoebas of algebraic varieties and tropical geometry
This course is intended to introduce the audience to a recent technique in Algebraic Geometry based on application of the moment map and toric degenerations. One of the simplest examples of the moment map is the logarithm map that takes a point of the complex torus C*n to the point in Rn obtained by taking the logarithm of the absolute value coordinatewise . The images of holomorphic subvarieties of C*n under this map are called amoebas.
If one modifies this moment map by taking the logarithm with base t and lets t to go to infinity then the amoebas tend to some piecewise-linear polyhedral complexes in Rn. The dimension of these limiting complexes is equal to the dimension of the original varities. It turns out that such complexes can be considered as algebraic varieties over the so-called tropical semifield. The term "tropical semifield" appeared in Computer Science and, in the current context, refers to the real numbers augmented with the negative infinity and equipped with two operations, taking the maximum for addition and addition for multiplication. Polynomials over the tropical semifield are convex piecewise-linear functions and geometric objects associated to these polynomials are certain piecewise-linear complexes in Rn.
In the course we consider applications of both the amoebas themselves and the resulting tropical geometry. One area where amoebas turn out to be useful is Topology of Real Algebraic Varieties, in particular, problems related to Hilbert's 16th problem. Using amoebas we show topological uniqueness of a homologically maximal curve in the real torus R*2 and deduce a partial topological description for hypersurfaces in R*n for n>2. Applications of tropical geometry include construction of real algebraic varieties with prescribed topology (patchworking) as well as enumerative geometry.
A typical problem in enumerative algebraic geometry is to compute the number of curves of given degree and genus and with a given set of geometric constraints (e.g. passing through a point or another algebraic cycle, being tangent to such cycle, etc.). For a proper number of geometric constraints one expects a finite number of such curves. Even in the cases when this number is not finite there exists a way to interpret the answer to such problem as a (perhaps fractional or negative) Gromov-Witten number. Tropical geometry can be used for computation of these numbers. In this course we'll compute such numbers for arbitrary genus and degree when the ambient space is a toric surface and for genus 0 (and arbitrary degree) if the ambient space is a higher-dimensional toric variety. In addition we consider real counterparts of the enumerative problems, in particular, the Welschinger invariant, and do some computations for them.
• S. Orevkov (September 15th - October 7th, 8 lectures) : Courbes algébriques réelles, tresses et courbes J-holomorphes
• P. Parrilo (November 21st - 24th, 4 lectures): Computational techniques based on sum of squares decompositions
The idea would be to cover the application of convex optimization methods to problems in real algebraic geometry. We'd start with the basic notions of semidefinite programming, and SOS decompositions. Depending on the audience, we may go into SOS on quotient or invariants rings, and apply the results to the computation of Positivstellensatz certificates.
Also, again depending on the audience, we would emphasize different kind of applications (geometry, dynamical systems, quantum mechanics, etc.)
• L. Paunescu (December 1st - 16th, 4 lectures), Tree Model, Relative Newton Polygon and Applications
The Newton Polygon is a powerful tool for analytic singularities. It yields the fundamental theorem of Newton-Puiseux. This is an elementary exposé on this.
For an analytic function germ (convergent power series) and an analytic arc , the Newton Polygon of relative to, is defined. A number of important applications are illustrated.
Two seemingly unrelated problems are intimately connected.
The first is the equisingularity problem in : For an analytic family , when should it be called an equisingular deformation? This amounts to finding a suitable trivialization condition (as strong as possible) and, of course, a criterion.
The second is on the Morse stability. We define , which is "enriched" with a class of infinitesimals. How to generalize the Morse Stability Theorem to polynomials over ?
The space is much smaller than the space used in Non-standard Analysis. Our infinitesimals are analytic arcs, represented by fractional power series, e.g., , , , are infinitesimals at , in descending orders.
Thus, is a family of polynomials over . This family is not Morse stable:a triple critical point in splits into three when .
Bibliography:
1. T.-C. Kuo and A. Parusinski, Newton Polygon Relative to an Arc, in Real and Complex Singularities (São Carlos, 1998), Chapman & Hall Res. Notes Math., 412, 2000, 76-93.
2. T.-C. Kuo and A. Parusinski, Newton-Puiseux Roots of Jacobian Determinants, Journal of Algebraic Geometry, 13 (2004), 579-601.
3. K. Kurdyka and L. Paunescu, Arc-analytic Roots of Analytic Functions are Lipschitz, Proceedings of the American Mathematical Society, 13 (2004), no. 6, 1693-1702.
4. T.-C. Kuo and L. Paunescu, Equisingularity in as Morse stability in infinitesimal calculus, to appear in the Proceedings of the Japan Academy, June, 2005.
• J.-Ph. Rolin (November 16th - December 15th, 8 lectures) : La o-minimalité du point de vue de la géométrie et de l’analyse
This course is the continuation of M. Coste's one. We will try to answer, in concrete situations, the question : How to prove that a given family of sets is o-minimal ?
More precisely, the topics considered will be :
• the properties of global sub-analytic sets, and a preparation theorem for sub-analytic functions
• the exp-log-analytic functions
• the behavior of solutions of differential equations from the o-minimal point of view
• the relationship with the notion of quasi-anayticity
• F. Rouillier (September 14th - 28th, 5 lectures), M.-F. Roy (October 3rd - 12th, 5 lectures), S. Basu (November 14th - November 28th, 5 lectures) : Algorithms in real algebraic geometry
• F. Rouillier:
1. Univariate solving.
2. Properties and applications of Gröbner bases (localization and elimination).
3. Zero-dimensional solving (Hermite quadratic forms, RUR)
4. Well-behaved parametric systems
5. Some applications of polynomial system solving
• M-F. Roy:
1. Discriminants, Resultants, Subresultants
2. Complexity of cylindrical decomposition
3. Applications of cylindrical decomposition
4. Complexity of finding a point in every connected components of an algebraic set
5. Complexity of finding non empty sign conditions on a family of polynomials
• S. Basu:
1. General Decision Problem and Quantifier Elimination.
2. Uniform (Local) Quantifier Elimination and its application in constraint databases.
3. Computing Roadmaps for Algebraic Sets.
4. Computing Roadmaps in general and computing descriptions of connected components.
5. Computing coverings by contractible sets and applications.
• E. Shustin (September 29th - October 14th, 4 lectures): Patchworking construction and its applications
1. Construction of real non-singular algebraic varieties: Viro's method and its modifications.
2. Patchworking construction in singularity theory and algebra over the complex and real field.
3. Patchworking construction in the tropical geometry.
• F. Sottile (November 14th - 29th, 8 lectures) : Real solutions to equations from geometry
Understanding, finding, or even deciding the existence of real solutions to a system of equations is a very difficult problem with many applications. While it is hopeless to expect much in general, we know a surprising amount about these questions for systems which possess additional structure. Particularly fruitful---both for information on real solutions and for applicability---are systems whose additional structure comes from geometry. Such equations from geometry for which we have information about their real solutions will be the subject of this short course.
We will focus on equations from toric varieties and homogeneous spaces, particularly Grassmannians. Not only is much known in these cases, but they encompass some of the most common applications. The results we discuss may be grouped into three themes:
(1) Upper bounds on the number of real solutions.
(2) Geometric problems that can have all solutions be real.
(3) Lower bounds on the number of real solutions
Upper bounds as in (1) bound the complexity of the set of real solutions---they are one of the sources for the theory of o-minimal structures which are an important topic in this trimester. Lower bounds as in (3) give an existence proof for real solutions. Their most spectacular manifestation is the non-triviality of the Welschinger invariant, which was computed via tropical geometry. This is also explained in other courses this trimester at the Centre Borel.
The course will have three parts, grouped by geometry:
I) Overview (Lecture 1)
II) Toric Varieties (Lectures 2--4)
III) Grassmannians (Lectures 5--8)
Topics for each lecture
1. Overview. Upper and lower bounds, Shapiro conjecture, and rational curves interpolating points in the plane.
2. Sparse polynomial systems and toric varieties. Kouchnirenko's Theorem and Groebner degeneration.
3. Upper bounds. Descartes' rule of signs, Khovanski's fewnomial bound, bound for circuits.
4. Lower bounds. Soprunova-Sottile toric lower bound.
5. Grassmannians. Wronski map, problem of 4 lines. Reality in the Schubert calculus via Schubert induction, Vakil's Theorem.
6. The Shapiro conjecture for Grassmannians. Computational and theoretical evidence. Sums of squares and discriminants.
7. Eremenko and Gabrielov's elementary proof of Shapiro Conjecture for 2-planes. Maximally inflected curves.
8. Lower bounds for the Wronski map via sagbi degeneration.
There will be typeset notes available for each lecture.
• B. Teissier (September 19th - October 17th, 5 lectures) : Introduction to valuations in algebraic geometry
Je donnerai les bases de la theorie des valuations en insistant sur leur role en geometrie algébrique. J'essaierai de rappeler et d'illustrer les notions de géométrie algébrique utilisées pour les rendre accessibles aux non-experts.
I will explain the basics of the theory of valuations, insisting on their role in algebraic geometry. I shall try to recall and illustrate the notions of algebraic geometry which are used, in order to make them accessible to non-experts.
D. Gondard (October 24th - November 14th, 3 lectures) : Valuations in real algebra
Le cours, qui sera effectué dans le cadre des corps ordonnables, débutera avec la compatibilité d'un préordre et d'une valuation, et les cas particuliers des éventails et éventails valués. Ensuite nous présenterons la théorie des ordres de niveau supérieur, et ses liens avec les sommes de puissances ; puis il sera etudié la notion de clôture algébrique d'un corps muni d'un éventail valué. Enfin nous donnerons quelques applications des ces notions en géométrie algébrique réelle et montrerons le lien avec les R-places et l'anneau d'holomorphie réel.
We shall work in the framework of real fields, start with the compatibility of a preordering with a valuation, and then turn to the special case of fans and valuation fans. Afterwards we shall present the higher level orderings, and their links with sums of powers, and study the notion of algebraic closure under algebraic extensions of a fields equipped with a valuation fan. Last we shall give some applications to Real Algebraic Geometry and show the links with R-places and with the real holomorphy ring. | 2022-09-30 11:13:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7594438195228577, "perplexity": 1053.948408420607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335448.34/warc/CC-MAIN-20220930082656-20220930112656-00798.warc.gz"} |
https://datamettle.com/tag/data-mettle/ | We are your data science workshop.
## Useful Out-of-the-Box Machine Learning Models
With the growing popularity of data science, out-of-the-box machine learning models are becoming increasingly available in free, open-source packages. These models have already been trained on datasets that often large and therefore time-consuming. They are also packaged up in ways that are easy to use, thus simplifying the process of applying data science. While there is no replacing customising a model to suit your objectives, these packages are useful for obtaining quick results and provide a gentle, easy introduction into data science models. In this article, we touch on some of these tools and their applications to well-known machine learning problems.
## Image classification using out-of-the-box machine learning models
Image classification is a machine learning task where an image is categorised into one of several categories through identifying the object in the image. It is easy for humans to recognise the same object shown in different backgrounds and different colours, and it turns out that it’s also possible for algorithms to classify images up to a certain amount of accuracy. Significant advances have been made in image classification as well as other computer vision problems with deep learning, and some of these available as out-of-the-box machine learning models in keras/tensorflow
As an example, the inception-v3 model, trained on ImageNet, a database of over 14 million labelled images, is available as a tool for prediction. It is easy to set up, only taking a couple lines of code before you’re ready to begin classifying your own images – simply read in the image and the model will return a category and a score. The inception-v3 and other image classification models can also be fine-tuned by keeping the weights of some of the layers fixed, and tuning the weights of other layers by training on a new, more relevant dataset. This is a well-known procedure of transfer learning to customise the model to the new data. It is especially useful when the new dataset is small.
## Named Entity Recognition
With Named Entity Recognition (NER), the goal of the task is to identify named entities within a chunk of text, such as the name of an organisation or a person, geographical locations, numerical units, dates and times, and so forth. This is useful for automatic information extraction, eg. in articles, reports, or invoices, and saves the effort of having a human manually read through a large number of documents. NER algorithms can help to answer questions such as which named entities are mentioned most frequently, or to consistently pick out a monetary value within the text.
Spacy has a NER module available in several languages, as well as a range of text processing capabilities such as part-of-speech tagging, dependency parsing, lemmatization etc. Installation of the package is straightforward and no more than a few lines of code is required to begin extracting entities. Spacy v2.0 NER models consist of subword features and a deep convolutional neural network architecture, and the v3.0 models have been updated with transformer-based models. Spacy also has the functionality to allow training on new data to update the model and improve accuracy, as well as a component for rule-based entity matching where a rule-based approach is more convenient.
## Sentiment Analysis using out-of-the-box machine learning models
Sentiment analysis is used for identifying the polarity of a piece of text, i.e, whether it is positive, negative or neutral. This is useful for monitoring customer and brand sentiment, analysing any text-based feedback. More recently, it has been used for analysing public sentiment to the COVID-19 pandemic through social media or article headlines.
Hugging face transformers is a library containing a range of well-known transformer-based models that have obtained state-of-the-art results in a number of different natural language processing tasks. It includes both language models and task-specific models, and has a sentiment analysis model with a base architecture of distilBERT (a smaller/faster version of the BERT language model) which is fine-tuned for the sentiment analysis downstream task using the SST-2 dataset. The model returns positive and negative labels (i.e, excludes neutral) as well as a confidence score. Other options for an out-of-the-box sentiment analysis model are TextBlob, which has both a rules-based sentiment classifier and a naive-bayes model trained on movie reviews, and Stanza, which has a classifier based on a convolutional neural network architecture, and can handle English, German and Chinese texts. Both TextBlob and Stanza return a continuous score for polarity.
## Transfer learning using pre-trained models
While there is a lot of value in an out-of-the-box model, its accuracy may not be the same when applied directly to an unseen dataset. This usually depends on how similar the characteristics of the new dataset is to the dataset the model is trained on – the more similar it is, the more the model’s results can be relied on. If direct use of an out-of-the-box model is not sufficiently accurate and/or not directly relevant to the problem at hand, it may still be possible to apply transfer learning and fine-tuning depending on package functionality. This involves using new data to build additional layers on top of an existing model, or to retrain the weights of existing layers while keeping the same model architecture.
The idea behind transfer learning is to take advantage of the model’s general training, and transfer its knowledge to a different problem or different data domain. In image classification for example, the pre-trained models are usually trained on a large, general dataset, with broad categories of objects. These pre-trained models can then be further trained to classify, for example cats vs dogs only. Transfer learning is another way to make use of out-of-the-box machine learning models, and is an approach worth considering when data is scarce.
## Data Mettle: New office, new team member
It’s been an exciting couple of months here at Data Mettle, and we wanted to give you a quick update.
## We’ve gone global
We are very excited to announce we’ve officially gone global! We’ve expanded our reach to the other side of the globe and are pleased that our new Australian office in Perth is open.
Over the past couple of months, we’ve been developing our presence in Perth and getting to know the digital challenges here. The technology and business landscape in the city is exciting, and the weather isn’t bad either.
## Allow myself to introduce… myself
I should probably introduce myself – I’m Matt, and I recently joined Data Mettle as COO. Its an incredibly exciting opportunity for me to work with the team here, and dive into the world of data science. My background is in product management and operations, with interest in technology and startups.
Best of all, I worked on a project with the team at Data Mettle before I joined. My first-hand experience of their expertise, professionalism and the quality of their work made joining a no brainer. I am personally very excited about the experience and knowledge I’m going to gain by working with the team.
Some of the key areas I’ll be focusing on are:
• Outreach
• Product development
• Customer engagement
I’ll be working out of the London office. You can reach me at matt@datamettle.com if you want to grab a coffee.
Finally, we are launching a quarterly newsletter. In it, we will address a range of topics in the world of data science, including news, research, and resources. It is geared both technical and non-technical readers alike to help you stay on the cutting edge of what is going on in this rapidly evolving field.
## Modelling Eurovision voting
This is a follow-up to our previous blog on Eurovision voting, where we’ll explain how we modelled the objective quality of songs and voting biases between countries in Eurovision, and how we grouped the countries into blocks based on their biases. The source code can be found here. We’ve taken the data from a Kaggle competition, and sourced data for any missing years from Wikipedia.
## The hierarchical model
The idea is that the voting outcome is dependent of both the inherent quality of the entry, and the biases countries have for voting for each other. There are lots of possible ways of doing this, but ours is fairly simple and works quite well.
Let $$r_{c_ic_jy_k}$$ denote the fraction of people in country $$c_i$$ that voted for country $$c_j$$ in the year $$y_k$$. Note that $$\sum_{j=1}^Nr_{c_ic_jy_k} = 1$$, so it is reasonable to model the vector $$\mathbf{r}_{c_iy_k} = (r_{c_ic_1y_k}, \dots, r_{c_ic_Ny_k})$$ as following a Dirichlet distribution:
$\mathbf{r}_{c_iy_k} \sim \operatorname{Dir}(\beta_{c_ic_1y_k}, \dots, \beta_{c_ic_Ny_k}).$
We choose a model where the parameters $$\beta_{c_ic_jy_k}$$ decompose as
$\beta_{c_ic_jy_k} = \operatorname{Exp}\bigl(\theta_{c_jy_k} + \phi_{c_ic_j}\bigr),$
where $$\theta_{c_jy_k}$$ captures the objective quality of the song from country $$c_j$$ in the year $$y_k$$, and $$\phi_{c_ic_j}$$ captures the bias country $$c_i$$ has in voting (or not voting) for country $$c_j$$. Furthermore, we assume that the $$\theta_{c_jy_k}$$’s and $$\phi_{c_ic_j}$$’s are drawn from an (unknown) normal distribution:
$\phi_{c_ic_j}, \theta_{c_jy_k}\sim N(\mu, \sigma).$
Note that we don’t actually have access to $$r_{c_ic_jy_k}$$, we only have data on the number of points each country was awarded. But we make do with what we have and approximate $$r_{c_ic_jy_k}$$ by
$r_{c_ic_jy_k} \simeq \frac{\text{(points awarded to country $$c_j$$ by country $$c_i$$ in the year $$y_k$$}) + \alpha}{(\text{total points awarded by country $$c_i$$ in the year $$y_k$$}) + N\alpha},$
where $$\alpha$$ is a constant that we set to 0.1.
It’s hard to say for definite whether this is a reasonable approximation without being able to actually see the voting data, but preferences often follow power laws, and the decreasing sequence of points 12, 10, 8, 7, 6, 5, 4, 3, 2, 1, 0, 0, 0, … at least follow a similar shape:
It’s not perfect, but hopefully good enough. Note that we do completely miss out on any information about the tail, but we assume that this is mostly noise that don’t contribute much anyway.
## Fitting the model
We fit the model using using Stan, a programming language that is great for making Bayesian inferences. It uses Markov chain Monte Carlo methods to find the distribution of the parameters which best explain the responses. Stan is very powerful, all we really need to do is to specify the model, then pass in our data and Stan does the rest!
As Stan uses Bayesian methods, it returns a sample of the distribution of your parameters, in our case consisting of 16 000 (paired) values for each parameter. In our previous analysis we simply took the means as point estimates for our parameters, but having the distribution lets us talk about the uncertainty of these estimates. For example, the point estimate for objective quality $$\theta$$ of the winner in 2015 (Sweden) is 1.63, and 1.41 for the runner up (Russia). This however, doesn’t tell us the full picture. Here’s a plot for the joint distribution of $$\theta$$ for these entries:
From this joint distribution we can calculate the probability that Sweden’s entry was objectively better than Russia’s entry as the proportion of samples above the blue line, and this turns out to be about 93%.
## Finding the blocks
Looking at the bias terms $$\phi_{c_ic_j}$$, we can attempt to group the countries into groups that are tightly connected, i.e. where there’s positive biases within a group and neutral or negative biases between groups. We use a method based on Information Theoretic Co-Clustering, where we choose a clustering where we loose as little mutual information as possible.
The basic idea can be described as follows: For each vote from country A to country B, take a marble and label it ‘from: country A, to: country B’. Now put all the marbles in a jar, and pick one at random. How much does knowing from which country the vote was from tell you who it was for? For example, Romania and Moldova almost always trade 12 points, so knowing that the vote was from Romania tells me there is a high probability that the vote was for Moldova. Mutual information gives us a quantitative value of this knowledge for the whole jar.
Now if we have clustered the countries into blocks, we can instead label the marbles ‘from: block A, to: block B’. We generally loose information by doing this. As we don’t know which country the vote was actually from it’s harder to predict which block the vote was for. By finding the clustering that looses the least amount of information, we get the clustering that best represents the biases.
Below is a heatmap showing the probabilities of countries voting for each other, with our identified blocks separated by lines. We do see that the blocks certainly capture voting behaviour fairly well, voting within the blocks is far more likely than between the blocks (with Lithuania and Georgia being a notable exception). Also, we can identify an “ex-Soviet block” within the “Eastern Europe” block, and a “Northern Europe” block with in the “Western Europe” block, both highlighted in gray. | 2022-12-02 04:02:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34175795316696167, "perplexity": 853.9682624258296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710890.97/warc/CC-MAIN-20221202014312-20221202044312-00241.warc.gz"} |
https://hsm.stackexchange.com/questions/7107/why-is-the-azimuthal-quantum-number-so-named | Why is the azimuthal quantum number so named?
The name "azimuthal quantum number" is often used for the total orbital angular momentum quantum number $\ell$ in an atom.
What is the origin of this name? It makes no sense to me, since the usual meaning of "azimuthal" is apparently "of or pertaining to the azimuth; in a horizontal circle", but $\ell$ has no information about orientation.
2 Answers
I believe this is what happened. Sommerfeld (1915, p. 430) $=$ (1916a, p. 12) originally introduced an “azimutal” quantum condition and number $n$ such that $\smash{\oint p_\varphi\,d\varphi=nh}$. This $n$ is what we now call the magnetic quantum number $m$.
Then in (1916b) he implicitly switched his azimuth from being $\alpha$ to $\gamma$ in this figure, where the tilted plane would be his “Bahnebene”. As he writes, the resulting “azimutal quantum $n$ splits into quanta $n_1$ and $n_2$ belonging to the coordinates $\varphi$ and $\vartheta$”: $\smash{\oint p_\varphi\,d\varphi=n_1h}$ and $\smash{\oint p_\vartheta\,d\vartheta=n_2h}$. That new $n=n_1+n_2$ is what we now call $\ell$. It measures angular momentum in the direction normal to a putative “orbit’s plane” rather than in a fixed vertical direction.
When the Schrödinger equation is solved in spherical coordinates by separation of variables it is split into three equations whose spectra produce the first three quantum numbers. One of them, ℓ, corresponds to the selection of spherical harmonics and determines the number of planar nodes going through the nucleus (planar node is the midpoint between crest and trough with zero magnitude).
The name "azimuthal quantum number" for ℓ was originally introduced by Sommerfeld, who refined Bohr's semi-classical model by replacing circular orbits with elliptic ones. The spherical orbitals were similar (in the lowest-energy state) to a rope oscillating in a large "horizontal" circle. | 2021-08-05 08:56:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8298003673553467, "perplexity": 573.4256361225199}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00025.warc.gz"} |
https://community.hpe.com/t5/Disk-Enclosures/xp1024-4D-4D-raid-0-1-performance/td-p/3968977 | HPE Community read-only access December 15, 2018
This is a maintenance upgrade. You will be able to read articles and posts, but not post or reply.
Hours:
Dec 15, 4:00 am to 10:00 am UTC
Dec 14, 10:00 pm CST to Dec 15, 4:00 am CST
Dec 14, 8:00 pm PST to Dec 15, 2:00 am PST
Disk Enclosures
cancel
Showing results for
Did you mean:
SOLVED
Go to solution
## xp1024 4D+4D raid 0/1 performance
Hi,
I heard that 4D+4D raid 0/1 's performance over 2D+2D raid 0/1 is only about 50%. Is that true?
Suppose I have 128 disks, there are two options:
1). create 2D+2D raid0/1 array groups, that is there are total 32 array groups, create one LDEV/LUN in each array groups, and create a VG using total 32 LUNs, then create logical volume striping over total 32 LUNs.
2). create 4D+4D raid0/1 array groups, create one LDEV/LUN in each array groups, and create a VG using total 16 LUNs, then create logical volume striping over total 16 LUNs.
Both options can have IO spread over total 128 disks, Is there any performance difference in the above two options?
The application is oracle 8i(OLTP).
-Xiang
8 REPLIES
## Re: xp1024 4D+4D raid 0/1 performance
anyone help me?
Honored Contributor
Solution
## Re: xp1024 4D+4D raid 0/1 performance
2x 2+2 and 1x 4+4 do have exactly the same performance numbers!
So both of your examples will have the same performance.
The main difference is the number of LUNs you have to manage and the granularity of your VG!
Cheers
XP-Pete
I love storage
## Re: xp1024 4D+4D raid 0/1 performance
Thanks, XP-Pete.
The customer has a CA environment: two XP 1024 arrays, one is primary, and another is backup. The customer just doubled the disks to try to improve the performance. The current system just uses 2D+2D raid 0/1. If 4D+4D performance is okay, I'd like to just configure the backup XP from scratch using 4D+4D, then the number of LUN will be the same as the primary site. Then after configuring CA, we can easily switch the oracle DB to backup site, then reconfigure the primary XP the same way, and eventually switch the DB back to primary site. Hopefully the oracle IO performance will improve a lot after xp upgrading.
If stick to 2D+2D, to really balance the I/O, we have to painfully do data migration, maybe use dd command to copy datafile/LV one by one. (if 4D+4D, we just leverage the CA mechanism, no manual data migration)
Does 4D+4D make sense in my scenario?
-Xiang
Honored Contributor
## Re: xp1024 4D+4D raid 0/1 performance
Well I believe it is not worth the effort.
You say that you would leave the size and numbers of ldevs/LUNs for 4+4 as they are in 2+2.
So I would stay with 2+2 and also deploy the DR XP the same way!
In your scenario you will not gain performance.
You have to make sure that your VG really sripes across all available 2+2 groups; then you are fine!
Also see the attached XA / SAP config guide.
It is actually written for the XP12000 but most also matches the XP1024.
Cheers
XP-Pete
I love storage
Honored Contributor
## Re: xp1024 4D+4D raid 0/1 performance
Uuups; here is the doc!!
XP-Pete
I love storage
## Re: xp1024 4D+4D raid 0/1 performance
Hi Peter, thanks, the XP guide for SAP you provided is very useful.
However, I'm a little bit more confused than before now, :-)
As you said in your previous post, if 2x 2+2 is the same as 1x 4+4, then 2x 4+4 will have double performance over 2x 2+2?
If I leave the size and numbers of ldevs/LUNs for 4+4 as they are in 2+2, why 4+4 won't give me any performance gain? Anyway, the I/O now spreads over more disks.
Maybe the reason is that the lun with the same size in a 4+4 actually will only use 4 disks? since in your doc, 4+4 just is a "concatenation" of 2 2+2.
And it seems the guide you provided contradicts with one whitepaper I read(attached): if your guide, the 4+4 has the same performance as 2+2, however, in the attached whitepaper, 4+4 is 50% performance gain over 2+2.
And back to my original question, two options:
1) 32 2+2 array group, one lun in each array group, the logical volume stripe over 32 luns.
2) 16 4+4 array group, each lun in each array group, the logical volume stipe over 16 luns.
(the lun size is the same in both options, in our case, it's OPEN-L 36.4G)
One of the drawback for option 1) is that we have to do data migration manually.
which option will give me higher performance?
-Xiang
Honored Contributor
## Re: xp1024 4D+4D raid 0/1 performance
>>However, I'm a little bit more confused than before now, :-)
As you said in your previous post, if 2x 2+2 is the same as 1x 4+4, then 2x 4+4 will have double performance over 2x 2+2?
A: Yes, you are right!
>> If I leave the size and numbers of ldevs/LUNs for 4+4 as they are in 2+2, why 4+4 won't give me any performance gain? Anyway, the I/O now spreads over more disks.
A: Yes, you are right again! I made a mistake when I said same number!!
In the 2+2 config you will end up with double the number of LUNs compared to 4+4.
Performance of #ldevs * 4+4 = #ldevs * 2 * 2+2
>> And it seems the guide you provided contradicts with one whitepaper I read(attached): if your guide, the 4+4 has the same performance as 2+2, however, in the attached whitepaper, 4+4 is 50% performance gain over 2+2.
A: I do not agree. In the paper on page 9, figure 4 you will find that all values just double for 4+4. As an example take the purple line: For 2+2 it goes vertical at round 425IOPS for 4+4 at round 850IOPS.
>> And back to my original question, two options:
1) 32 2+2 array group, one lun in each array group, the logical volume stripe over 32 luns.
2) 16 4+4 array group, each lun in each array group, the logical volume stipe over 16 luns.
(the lun size is the same in both options, in our case, it's OPEN-L 36.4G)
One of the drawback for option 1) is that we have to do data migration manually.
which option will give me higher performance?
A: Both options will give the exact same performance. You will stripe over a total of 128 disks in both cases.
Hope that helps
Cheers
XP-Pete
I love storage | 2018-12-12 16:21:27 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8060099482536316, "perplexity": 4999.819476555913}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824059.7/warc/CC-MAIN-20181212155747-20181212181247-00532.warc.gz"} |
https://scioly.org/forums/viewtopic.php?p=37953 | Electric Vehicle C
rman
Member
Posts: 59
Joined: February 6th, 2009, 2:09 am
Division: C
Has thanked: 0
Been thanked: 0
Re: Electric Vehicle C
I have been reading the series of questions and answers regarding uing a ball bearing on one end of a vehicle and wheels on the other and I had a question for the original poster. When you say "ball bearing" do you mean a single steel ball used as a roller, or do you mean a ball bearing of the sort that are used on rotating parts? If you are suggesting a single ball that is free to rotate in any direction then your steering will be based on the relative size of the front wheels, the relative traction of the two wheels, the accuracy of the axle and bearings, the runout (wobble) of the wheels, and the imperfections of the track surface. These same factors effect any vehicle in much the same ways, but with a free rotating ball on one end how do you adjust the steering?
If you actually meant that you would just use a standard ball bearing (a bearing used to support a rotating shaft or a stationary shaft with a rotating outer race) for a wheel, then the ball bearing simply replaces a wheel and you would need to adjust steering the same way as any other vehicle.
I am afraid that there is no magic bullet for making a vehicle that goes straight. As others have already pointed out the only "trick" is to make sure the track is as clean as possible. Make sure the wheels have no runout and have as close to the same friction as possible. Make sure any wheels connected together on the same axle are as close as possible to the same diameter, or allow each wheel to rotate independently. Then adjust the steering until the vehicle goes as straight as possible and try and keep the conditions as close as possible each time you run.
It may seem unintuitive, but when a wheeled vehicle (car, bus, "electric vehicle" etc.) is moving the wheels are always slipping slightly. The amount of slippage necessary to cause your vehicle to turn a cm or so, over 10 meters is incredibly small. 1 cm side movement over a 10 meter track is equivelent to 1 meter over a kilometer. Imagine trying to get a car to go down a kilometer of highway and stay within 1 meter of the center without any input fromt he driver, it's nearly impossible. If you scale up your electric vehicle and the tracks used in typical competitions you will see that the imperfections in the best vehicles and smoothest tracks are probably much larger then the imperfections in a decent car and nice piece of highway. The cracks between gym floor boards, when multiplied by 100 (the difference between our 10 meter track and my hypothetical 1 km highway) becomes a large bump in the road. Even the finish on the floor, or scratches, or pieces of dirt and dust, become major obstacles to a perfect striaght run.
B
fleet130
Staff Emeritus
Posts: 433
Joined: November 10th, 2001, 3:06 pm
Location: Mare Tranquillitatis
Has thanked: 0
Been thanked: 0
Contact:
Re: Electric Vehicle C
Imagine trying to get a car to go down a kilometer of highway and stay within 1 meter of the center without any input from the driver,
Ah, but the wheelbase on the scaled-up vehicle would be somewhere in the neighborhood of 28-32 meters! Still, the tolerances needed to produce the desired results are extremely small!
Information expressed here is solely the opinion of the author. Any similarity to that of the management or any official instrument is purely coincidental! Doing Science Olympiad since 1987!
rman
Member
Posts: 59
Joined: February 6th, 2009, 2:09 am
Division: C
Has thanked: 0
Been thanked: 0
Re: Electric Vehicle C
fleet130 wrote:
Imagine trying to get a car to go down a kilometer of highway and stay within 1 meter of the center without any input from the driver,
Ah, but the wheelbase on the scaled-up vehicle would be somewhere in the neighborhood of 28-32 meters! Still, the tolerances needed to produce the desired results are extremely small!
An interesting point but I think that the notion of a longer wheelbase would somehow make the vehicle more accurate is possibly missunderstood. It is obviously true that as the wheelbase gets longer the steering angle required for a certain rate of turn decreases, but how does wheelbase change the effect of things like wheel runout, differences in traction from one wheel to another, differences in wheel diameter of coupled wheels, etc.?
By the way, I heard that someone got a perfect score at their regional competition this past weekend (200/200). Like most regions, they do not publish scores, so it is impossible to verify, but I did see the run and noticed that the event coordinator was looking very carefully and just shook his head and didn't even take out a ruler to measure the distance. As the kids doing the run walked away they said that it was absolutely perfect, zero time error, zero, distance error and zero finish line error (and obviously the centerline bonus as well). There first run was perfect on distance and time I believe, but I think was off several cm in aim, so it would seem they have to compensate aim for track variations (or they were just lucky). I guess the "no electronics" bonus wouldn't help with a perfect score. I also noticed in the rules that the tie breakers for that event are best distance and best time score, so there would be no way to pick a winner out of several vehicles with perfect scores.
fleet130
Staff Emeritus
Posts: 433
Joined: November 10th, 2001, 3:06 pm
Location: Mare Tranquillitatis
Has thanked: 0
Been thanked: 0
Contact:
Re: Electric Vehicle C
I think that the notion of a longer wheelbase would somehow make the vehicle more accurate is possibly misunderstood.
I agree, but the misunderstanding is where the misunderstanding is!
Given that all other factors remain the same, the longer the wheelbase, the greater the turning radius. The greater the turning radios, the less the error from a straight line.
Information expressed here is solely the opinion of the author. Any similarity to that of the management or any official instrument is purely coincidental! Doing Science Olympiad since 1987!
nickfastswim
Exalted Member
Posts: 5
Joined: January 27th, 2007, 12:44 pm
State: MN
Location: The Far North
Has thanked: 0
Been thanked: 0
Re: Electric Vehicle C
The basic formula for the turning radius is $R=\frac {L}{ sin(90^{\circ} -\theta)}$
where L = length of wheelbase (distance between the axles)
theta= angle the axle is turned with respect to the the center line
If we were to increase the wheel base L then according to this formula the turning radius R also increases.
Thus fleet130 is correct.
2008---I failed miserably
2007---Scrambler, Boomilever, Wright Stuff
(07 State) 1st ---Boomilever
(07 State) 2nd ---Scrambler
(06 State) 2nd ---Scrambler
-->Minnesota
ahage16
Member
Posts: 17
Joined: February 9th, 2009, 2:02 pm
Division: C
Has thanked: 0
Been thanked: 0
Re: Electric Vehicle C
Thanks for everything on the last page gh! I really wish we had a computer science class at our school, or at least someone who knew about this kind of stuff! I've been pretty much trying to work this for the past year with nothing but google, which hasn't worked out too well. All I know for sure is that I am abandoning propeller because that hasn't worked at all for me. I guess this is what summer is for!
rman
Member
Posts: 59
Joined: February 6th, 2009, 2:09 am
Division: C
Has thanked: 0
Been thanked: 0
Re: Electric Vehicle C
I'll keep this simple. Has anyone come up with an explaination for the fact that most vehicles can't repeatedly go to the same aim point? We have great sights, nice bearings and wheels that are straight and true, nice bearings, no slop or wiggle, stiff chassis, etc., but the vehicle will do as much as several cm left and right between test sessions on different tracks. On a single session on a single track the variation is only mm, but when testing on different days on different tracks the aim point seems to mysteriously move. I hate wasting one of my two runs just to find out the peculiarities of that track, but maybe there is no way to go to the exact same point on different tracks?
fleet130
Staff Emeritus
Posts: 433
Joined: November 10th, 2001, 3:06 pm
Location: Mare Tranquillitatis
Has thanked: 0
Been thanked: 0
Contact:
Re: Electric Vehicle C
I believe the interface between the wheels and the track surface has a lot to do with this. Hard skinny tires don't perform as well in this respect as wider softer ones. Solutions to these problems are usually a "best compromise" after considering several other factors (such as rolling resistance).
Information expressed here is solely the opinion of the author. Any similarity to that of the management or any official instrument is purely coincidental! Doing Science Olympiad since 1987!
sachleen
Exalted Member
Posts: 215
Joined: April 10th, 2007, 8:31 pm
State: CA
Has thanked: 0
Been thanked: 0
Contact:
Re: Electric Vehicle C
Had my states today. My EV was 2cm from the center of the target on the first run and 3cm away the second run (distance from the center of the target to the tip of the pointer). I don't know my time score yet but I think that's what caused me to get the 5th. Other teams' distance scores were about the same as mine, all within that 2cm radius range so either it was down to that last mm, or time (or both).
starpug
Posts: 932
Joined: April 5th, 2008, 6:51 pm
State: ME
Location: Waterville, Maine
Has thanked: 0
Been thanked: 0
Re: Electric Vehicle C
Yeah their device was pretty awesome
Last edited by starpug on April 20th, 2009, 5:15 pm, edited 1 time in total.
Get your facts first, then you can distort them as you please. - Mark Twain
Who is online
Users browsing this forum: No registered users and 0 guests | 2020-05-31 08:55:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5397403240203857, "perplexity": 1653.887098385129}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413097.49/warc/CC-MAIN-20200531085047-20200531115047-00460.warc.gz"} |
https://ankplanet.com/physics/mechanics/kinematics/kinematics-reasonings/find-the-angle-of-projection-at-which-the-horizontal-range-and-the-maximum-height-of-a-projectile-are-equal/ | # Find the angle of projection at which the horizontal range and the maximum height of a projectile are equal.
Let the required angle of projection be $\theta$ for a projectile fired with velocity $u$. According to the question,
$\text{Maximum Height}=\text{Horizontal Range}$ $\frac{u^2\sin^2\theta}{2g}=\frac{u^2\sin2\theta}{g}$ $\frac{\sin^2\theta}{2}=\sin2\theta$ $\frac{\sin^2\theta}{2}=2\sin\theta\cos\theta$ $\frac{\sin\theta}{\cos\theta}=4$ $\tan\theta=4$ $\therefore\theta=\tan^{-1}4=75.96°$
Hence, the required angle of projection is $75.96°.$ | 2022-08-11 06:10:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9094258546829224, "perplexity": 123.61250510889917}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571234.82/warc/CC-MAIN-20220811042804-20220811072804-00150.warc.gz"} |
https://hal.inria.fr/inria-00167681 | # Text-based ontology construction using relational concept analysis
1 ORPAILLEUR - Knowledge representation, reasonning
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract : We present a semi-automated process that constructs an ontology based on a collection of document abstracts for a given domain. The proposed process relies on the formal concept analysis (\fca), an algebraic method for the derivation of a conceptual hierarchy, namely '\textit{concept lattice}', starting from data context, i.e., set of individuals provided with their properties. First, we show how various contexts are extracted and then how concepts of the corresponding lattices are turned into ontological concepts. In order to refine the obtained ontology with transversal relations, the links between individuals that appear in the text are considered by the means of a richer data format. Indeed, Relational Concept Analysis (\rca), a framework that helps \fca in mining relational data is used to model these links and then inferring relations between formal concepts whose semantic is similar to roles between concepts in ontologies. The process describes how the final ontology is mapped to logical formulae which can be expressed in the Description Logics (\dl) language ${\cal FLE}$. To illustrate the process, the construction of a sample ontology on the astronomical field is considered.
Type de document :
Communication dans un congrès
International Workshop on Ontology Dynamics - IWOD 2007, Jun 2007, Innsbruck, Austria
Domaine :
https://hal.inria.fr/inria-00167681
Contributeur : Rokia Bendaoud <>
Soumis le : mercredi 22 août 2007 - 11:02:15
Dernière modification le : jeudi 11 janvier 2018 - 06:19:54
Document(s) archivé(s) le : lundi 24 septembre 2012 - 11:41:24
### Fichier
IWOD-VF4.pdf
Fichiers produits par l'(les) auteur(s)
### Identifiants
• HAL Id : inria-00167681, version 1
### Citation
Rokia Bendaoud, Amine Mohamed Rouane Hacene, Yannick Toussaint, Bertrand Delecroix, Amedeo Napoli. Text-based ontology construction using relational concept analysis. International Workshop on Ontology Dynamics - IWOD 2007, Jun 2007, Innsbruck, Austria. 〈inria-00167681〉
### Métriques
Consultations de la notice
## 494
Téléchargements de fichiers | 2018-10-18 11:30:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.566892683506012, "perplexity": 9932.695484420898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511806.8/warc/CC-MAIN-20181018105742-20181018131242-00196.warc.gz"} |
http://mathhelpforum.com/advanced-statistics/152501-joint-p-d-f-print.html | # Joint P.D.F
Printable View
• Aug 1st 2010, 07:35 AM
Hitman6267
Joint P.D.F
Can any one help me solve this ? I don't know how to start.
http://www.mathhelpforum.com/math-he...-f-capture.png
Attachment 18397
• Aug 1st 2010, 08:22 AM
CaptainBlack
Quote:
Originally Posted by Hitman6267
Can any one help me solve this ? I don't know how to start.
http://www.mathhelpforum.com/math-he...-f-capture.png
Attachment 18397
Sketch the region over which the joint density is defined, identify the subregion corresponding to $x\ge y$ then integrate the joint pdf over that region.
(note that should be:
$f(x,y) = \begin{cases} 12xy&,0\le x \le 1, x^2\le y\le \sqrt{x}\\0&, \text{otherwise}
\end{cases}$
as it is a joint density)
CB | 2016-09-25 00:41:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8933160305023193, "perplexity": 1759.2029640562555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659680.65/warc/CC-MAIN-20160924173739-00200-ip-10-143-35-109.ec2.internal.warc.gz"} |
https://codeforces.com/blog/entry/23309 | GlebsHP's blog
By GlebsHP, history, 7 years ago,
Hello, community!
Tomorrow Codeforces Round #342 is going to take place. It will share the problemset with Moscow Olympiads in Informatics for students of grades from 6 to 9. Though, grades from 6 to 9 in Russia usually match the ages from 12 to 15, I guarantee that everyone (even Div. 1 participants) will find some interesting problems to solve. Problems were selected for you by the Moscow jury team: Zlobober, meshanya, romanandreev, Helena Andreeva and me; and prepared by members of our scientific committee: wilwell, Sender, iskhakovt, thefacetakt and feldsherov.
Scoring distribution will be quite unusual: 750-750-1000-2000-3000.
UPD System testing is over. Here are the top 10:
Congratulation! Also, problems seemed to be too tough, we should have probably made Div. 1 round. Anyway, thanks for participating, I hope you enjoyed it and learned something new!
Thanks to romanandreev for nice analysis.
• +314
| Write comment?
» 7 years ago, # | +11 I am not able to register unofficially for the contest. Please fix this.
• » » 7 years ago, # ^ | +23 Fixed, try again.
• » » » 7 years ago, # ^ | +8 Working now, Thanks.
» 7 years ago, # | -16 Auto comment: topic has been updated by GlebsHP (previous revision, new revision, compare).
» 7 years ago, # | +3 In the same time as Open Cup 10 stage. It's a pity.
» 7 years ago, # | 0 Is this contest going to start at the same time of the official contest? Because otherwise it should be unrated, right?
• » » 7 years ago, # ^ | +23 It's not going to start exactly at the same time, but it will start before the statements become public and will end at the same time as the onsite contest (which actually runs for 4 hours).
» 7 years ago, # | -54
» 7 years ago, # | +98 GlebsHP you forgot to thank yourself :)
• » » 7 years ago, # ^ | +54 A sentence like this ??: I thank myself for my great helps to myself in preparing this round.. :D
» 7 years ago, # | 0 Links to any previous contests by the same authors would be very helpful. It'll give a good idea as to what to expect in the contest! Any help?
• » » 7 years ago, # ^ | 0 Click on the user profiles, and go to Problemsetting tab.
• » » 7 years ago, # ^ | -22 How would previous contests help ? o.O
• » » » 7 years ago, # ^ | 0 Quoted, "It'll give a good idea as to what to expect in the contest!".
» 7 years ago, # | +80 Wish Codeforces a happy Chinese New Year!
• » » 7 years ago, # ^ | +1 I hope the Chinese New Year will bring good luck.
• » » » 7 years ago, # ^ | 0 Of course!
» 7 years ago, # | +21 Nice and short announcement . Kudos to GlebsHPI guess problem statements will also be short and nice.
» 7 years ago, # | +3 @ admin whenever i log in into my account(manish_nit) everthing appears in russian, i have to manually right click every time and select translate to english. this creates lot of problem during contests...is there a setting in the codeforces website to do it permanently.
• » » 7 years ago, # ^ | +9 Of course not. You better learn Russian. Who would think of a language switcher?!
» 7 years ago, # | 0 Is it national level competition?
• » » 7 years ago, # ^ | +1 Seems to be city level.
• » » » 7 years ago, # ^ | ← Rev. 2 → 0 Thanks! I expected the problems to be much easier for a 7th grade local competition :). I lost too much time on the first problem, then B and C were probably just about right for a Div 2 contest (750 and 1000 points). Didn't even get to D or E ...
» 7 years ago, # | ← Rev. 2 → 0 why A and B has the same points ?
» 7 years ago, # | 0 Why not a combined division contest?Great authors and of course great problems!
» 7 years ago, # | ← Rev. 2 → -36 A contest by new( GlebsHP ) and old( Zlobober ) coordinator.Hello Zlobober again!
• » » 7 years ago, # ^ | 0 Why so many — votes? :)
• » » » 7 years ago, # ^ | ← Rev. 2 → -19 I don't know! but this contest was good for me after about 30 bad contests.(I took place 18 ;) )
• » » » » 7 years ago, # ^ | +7 took
• » » » » » 7 years ago, # ^ | -17 thanks; edited (English is my default language!)
» 7 years ago, # | -17 Zlobober is backed :))
» 7 years ago, # | +27 Great contest and perfect timing, do contest before dinner, then watching firework at mid night, happy lunar-new year.
• » » 7 years ago, # ^ | 0 I agree with you.
» 7 years ago, # | +10 Its Sunday morning in my country. Is it bad if I miss church for contest ? I like contest!!!
• » » 7 years ago, # ^ | +499 Bless you to take part in the contest, my son.
» 7 years ago, # | +37 Today in China, it is Spring Festival, which is the most important festival in China, every single of Chinese will have supper with family. I hope in the New Year, Codeforces will be better and better~
» 7 years ago, # | +11 i will have fun before the Chinese Spring Festival dinner.
» 7 years ago, # | 0 Luckily I won't miss the Spring Festival Gala at 20:00.
• » » 7 years ago, # ^ | ← Rev. 2 → 0 But I think the Spring Festival Gala is becoming more and more boring
• » » » 7 years ago, # ^ | 0 go to watch Pay New Year's call offering of bilibili at 18:00(UTC8)?..
» 7 years ago, # | +11 hope it will be my last div2 :D
• » » 7 years ago, # ^ | +10 We will miss you :) congratulations.
• » » » 7 years ago, # ^ | 0 thanks xD
» 7 years ago, # | +9 happy chinese new year!!!
» 7 years ago, # | +1 Happy Chinese New Year(the Spring Festival) to everybody and wish Codeforces will become better and better !
» 7 years ago, # | +1 Happy Chinese New Year~
» 7 years ago, # | ← Rev. 6 → -17
» 7 years ago, # | +3 What is the hacking test for problem B?
• » » 7 years ago, # ^ | 0 If the solution searches the whole string from the beginning after each time it finds the pattern.
• » » 7 years ago, # ^ | +3 aaabb aabb Some participants were finding string matches in O(n) but ridiculously wrong ways.
• » » » 7 years ago, # ^ | 0 can i solve the problem B use kmp algorithm???if not ,can anyone give some advice? by the way ,happy chinese new year!
• » » » » 7 years ago, # ^ | 0 I've solved it with KMP :)Submission
• » » » » 7 years ago, # ^ | ← Rev. 2 → 0 You don't need KMP. O(30*100000) is fast enough, so you can brute force the whole problem. If the pattern was longer, a linear time string searching algorithm would be suggested, but in this case (the fact that it's a Div2B should give you a hint at its difficulty) it's not necessary and has a larger chance for mistakes. This is what I did in the contest (runs in max 46ms on main tests, which is easily good enough) http://codeforces.com/contest/625/submission/15858375
• » » » » » 7 years ago, # ^ | 0 You really replace some characters with '#'! I just imagined to replace. 15874296 Although it's after the contest.
» 7 years ago, # | +28 C seems so easy then A and B. I think C < B < A Due to overflow with binary search, 6 wrong submissions on A :\ I hope it passes now.
• » » 7 years ago, # ^ | -8 Is this approach correct? Write numbers from 1 to (k-1)*n in first (k-1) columns then again start from column 1 write numbers in increasing order.
• » » 7 years ago, # ^ | ← Rev. 2 → 0 I think B and C are equally hard/easy and A easier than both of them.Where did you used binary search? I haven't use it at all.UPD: Well it seems that I will fail A after system testing. Actually I now think C is easier than A and B, just because its solution is obviously correct comparing to A and B.
• » » » 7 years ago, # ^ | +3 I used binary search to find number of times one could buy glass bottles. Looking at others code now, it seems there's a direct formula, but i couldn't get it correct.
• » » » » 7 years ago, # ^ | +3 tnx god i am not the only one who used binary search
• » » » » » 7 years ago, # ^ | +1 Haha, I was so desperate about failed pretests, I also used binary search. At least it worked :) Problem A was a trap, it was a good idea to rush through B and C first :(
• » » » » 7 years ago, # ^ | +1
• » » 7 years ago, # ^ | 0 You don't need to use a binary search — the answer can be found in O(1) with division. http://codeforces.com/contest/625/submission/15857590
» 7 years ago, # | +6 how to solve D ?
» 7 years ago, # | +6 Who is the author of the Problem A? >.< And tester also >_< ?? No mercy, No mercy >_< :'( :'(
» 7 years ago, # | -7 Now, I know why Russian coders are so good and accurate. Nice problem-set, especially first problem. Eagerly waiting for editorial for 4th and 5th question and hoping my solution for 1st three problems pass the system tests.
» 7 years ago, # | ← Rev. 2 → 0 why this work on test100000 symbols aafor 0,2 sec
• » » 7 years ago, # ^ | 0 I tried hacking a solution like this and also failed.
• » » 7 years ago, # ^ | 0 Find returns at the first occurence, so that takes O(1) time. erase can take up to linear, according to specification. But it is probably very optimized and the Codeforces servers are really fast (solutions with 10^9 operations can pass time limits) so it passes. Perhaps erasing the first character is faster than erasing from the middle, as well?
• » » » 7 years ago, # ^ | 0 i try to change first symbol to 'b', and it's work fast too
» 7 years ago, # | ← Rev. 3 → +7 This was my solution for 2nd (div2) x = raw_input() y = raw_input() print x.count(y) It passed the pretests XD, i dont think it will pass the final tests ... will it? EDIT : It passed!
• » » 7 years ago, # ^ | ← Rev. 3 → +4 See this test baaacd aa Answer = 1.
• » » » 7 years ago, # ^ | ← Rev. 2 → +3 As far as I know python3 count method counts the amount of non-overlapping substring appearances, so your test is not going to hack the solution.UPD: Not sure about python 2 tho
» 7 years ago, # | +10 I wonder how many A solutions will remain after systests :O. What was hack test for A btw
• » » 7 years ago, # ^ | +10 I was using this:10000000000000000001000000000000000000500000000000000000499999999999999999
• » » » 7 years ago, # ^ | 0 Is answer 500000000000000001?
• » » » » 7 years ago, # ^ | +5 Yes, it is)
• » » 7 years ago, # ^ | 0 mine was 45 30 98 89
• » » 7 years ago, # ^ | 0 1000000000000000000 500000000000000000 100000000000000000 99999999999999999
• » » 7 years ago, # ^ | 0 1000000000000000000 1000000000000000000 1000000000 999999999
» 7 years ago, # | +9 Very interesting problems. I got a lot of fun. Thanks to authors!
» 7 years ago, # | +4 Ouch, the penalty for wrong submissions really showed through this contest because of the sudden spike in difficulty between C and D. Two wrong answers for A cost me 200 places in standing (from ~200th place to ~400th place).
» 7 years ago, # | +19 too weak pretests these days :(
• » » 7 years ago, # ^ | 0 With strong prtests, we can't enjoy hacking :(
• » » 7 years ago, # ^ | +21 Actually, it surprised me, as I picked about 10-15 pretests in every problem. But still ,the number of cases seems to be much more than this.Many solutions will fail system tests, but this time it was unintentional :)
» 7 years ago, # | +3 Is System Test gonna take long like in the past school contests? (until the closing ceremony)
• » » 7 years ago, # ^ | +16 No
» 7 years ago, # | ← Rev. 3 → 0 hack test for problem one is : 2 100 100 50
• » » 7 years ago, # ^ | 0 Omg, and here I failed with "-1". :D Hacked some other guy with 10^18 inputs.
» 7 years ago, # | 0 terrible! I fixed some bugs on problem D and failed to sumbit in last several seconds.
» 7 years ago, # | +4 Still can't go to sleep because I'm terrified of the system tests for A and B :(
• » » 7 years ago, # ^ | +4 Still can't go to LUNCH because I'm terrified of the system tests for A and B :(
» 7 years ago, # | -7 String gog = in.next(); String tel = in.next(); int ans = 0; int k = 0; while (k + tel.length() <= gog.length()){ String s = gog.substring(k, k + tel.length()); if (s.equals(tel)){ ans++; k = k + tel.length(); } else { k++; } } out.println(ans); Its B. Will it pass sys tests?
• » » 7 years ago, # ^ | 0 Yes, I guess.
» 7 years ago, # | +16 everyone who solved 6-th COCI task of yesterday contest must solve the 4-th problem of today easily :D
• » » 7 years ago, # ^ | 0 Yeah. I should've read that editorial.
» 7 years ago, # | ← Rev. 2 → 0 Anyone solved E?I have that idea: using binary heap (aka priority queue) we sort all possible collisions of frogs by (time_to_collision, id). After that in cycle we pop first frog, kill what she could kill, update her step_size, time_to_collision and put her in the heap again (also doing this for her precessor).Stop when the next frog in heap can't kill anyone.Should be NlogN. But I stuck in some range checks and didn't finished the solution.Is it correct approach?
• » » 7 years ago, # ^ | 0 I was thinking about the same solution, but moreover you have to update time_to_collision of previous frog (I mean the one, which gonna kill actual one).
• » » » 7 years ago, # ^ | 0 Yea, I've considered that ("also doing this for her precessor").In my implementation I simply put another copy of previous frog in the heap (instead of find-delete-putagain, for performance). It shouldn't affect the result, because its value will be strictly lesser than old one, so old one will be skipped somewhere in the future.
• » » » » 7 years ago, # ^ | 0 Do you think you could explain test case 18's answer for E? I've done it manually and get a different answer than the judger. ( This is my solution btw)10 10 9 4 3 5 2 2 5 4 1 6 6 7 8 3 4 1 10 3 7 9 Participant's output 2 1 8 Jury's answer 1 8
• » » » » » 7 years ago, # ^ | 0 Nevermind, it was an arithmetic error in my manual solution.
• » » » » » 7 years ago, # ^ | 0 BTW, your solution seems to get TL later on. You already got 1 second on N=12500. And it could be 100000.
• » » » » » » 7 years ago, # ^ | 0 I expected much as I don't really make any clever observations in it. Was more expecting a TLE instead of a WA though.
• » » 7 years ago, # ^ | +9 It looks like the correct approach. Analysis will be published in the nearest 2-3 hours.
• » » » 7 years ago, # ^ | +2 Information like this should be added in the main post as it is very hard to find this kind of information in the sea of comments.
» 7 years ago, # | ← Rev. 3 → +6 Wow very fast system testing. My rank jumped from 1000 to 500's before/after system testing xD Problem A is the cause :p
• » » 7 years ago, # ^ | +4 And mine fall down from somewhere top80 to 366 :)
• » » » 7 years ago, # ^ | 0 I fell down from 80 to 1500 :(
• » » 7 years ago, # ^ | 0 I jumped about 400 positions too
» 7 years ago, # | ← Rev. 2 → -25 bad contest :-(
» 7 years ago, # | +2 Putting "Guest From the Past" as an A problem was a wicked move.I wish I didn't waste time on it :'(
» 7 years ago, # | 0 just 699 who can solve a in the contest XD
» 7 years ago, # | +73
• » » 7 years ago, # ^ | +2 Yea, very tricky A and obviously_correct_solution in C :)
• » » 7 years ago, # ^ | ← Rev. 2 → +4 I don't think this round had enough score differentiation. D and E should've been slightly easier, while A and C should've been switched (or C should've been made much harder, as it's unusual for so many people to get it). A wasn't actually hard, but it was pretty easy to make a silly mistake (a lot of people used it for hacks). The concept of C was roughly the same difficulty but the implementation was much easier (no derps).
• » » 7 years ago, # ^ | 0 The eyes... Oh my god, THE EYES!!!
» 7 years ago, # | 0 What was the solution to D? I thought of some messy solutions but could not find a clean one.
» 7 years ago, # | 0 Realy A problem is tricky with Time limit and Wrong answer.
» 7 years ago, # | -21
• » » 7 years ago, # ^ | ← Rev. 2 → +2 Y again in the same blog
» 7 years ago, # | 0 Could someone just tell me how to solve A? It seemed that A wasn't the easiest problem :(
• » » 7 years ago, # ^ | +1 can do it in O(1): if buying and selling a glass one is more expensive than buying a plastic one, only get plastics if not, buy glass bottles until you can't do so anymore, then buy plastics with the rest of the money implemented here: http://codeforces.com/contest/625/submission/15870223
• » » » 7 years ago, # ^ | ← Rev. 4 → +1 I too thought that it should be O(1) but couldn't solve it, maybe because i freaked out because it took too much time to solve A. It was similar to this problem. I solved that problem by brute force but obviously here brute force won't work. :/ Next time I will try to find a better solution for every problem even if it gets AC.
• » » » » 7 years ago, # ^ | 0 Is's possible to resubmit an AC problem? :O
• » » » » » 7 years ago, # ^ | +3 yes it is, but you lose 50 points because of resubmission.
• » » » » » 7 years ago, # ^ | 0 yeah! previous submit gets Skipped, and new one will judge in system testing.
• » » » 7 years ago, # ^ | 0 Is it any formula for O(1)? If the item costs b units and for returning you can get c units, then you can buy (n — b) / (b — c) + 1 items by n units?
» 7 years ago, # | 0 574 — "A" = 1228 :(
» 7 years ago, # | +10 Today I've seen a bunch of guys whose solutions failed systests for A but still they hacked everybody in their rooms so they got about 700-1000 points just from hacks. Turns out you don't have to actually solve a problem to get sufficient points for it.
• » » 7 years ago, # ^ | 0 Yea, its pretty usual thing for hackerfests.
• » » 7 years ago, # ^ | +10 I found my A is wrong when i have hacked 7 persons, finally my code is hacked but i still get 750 point..... btw, there is a guy write"cin>>n>>b>>a>>c", for which i got four times unsuccessful hack
» 7 years ago, # | +7 The task are ok, but very bad for the div 2 contest.Only the second task was on the level which it should be.
» 7 years ago, # | 0 Thanks for timing!! Happy Tet Holiday!! <3 <3
» 7 years ago, # | 0 Why i got WA on b??? this is my code, any help please?? http://www.codeforces.com/contest/625/submission/15862688
• » » 7 years ago, # ^ | +7 ababc abc
• » » 7 years ago, # ^ | +7 If current characters in S and T are different maybe you won't return to the begging of string T.I didn't tested your solution, but I think this is hack case for your solution :S='bbbc' T='bbc'.Correct answer 1.
• » » » 7 years ago, # ^ | 0 yes, you're right, didn't see that comingthanks lot...
» 7 years ago, # | 0 A->C, C->A
» 7 years ago, # | +92
» 7 years ago, # | +16 I am very interested, whether someone got full score in official, on-site contest ?
» 7 years ago, # | ← Rev. 2 → 0 .
» 7 years ago, # | 0 for D I came up with an DP solution where dp[i][j][a][b] represents the interval [i, j] can be written as the format required and current i position number is a and j position is b, a, b < 10, the solution will exceed memory limits but I fail to simplify my solution.
» 7 years ago, # | 0
• » » 7 years ago, # ^ | +5
» 7 years ago, # | +8 it can take days for GlebsHP to write/translate the editorial (remember round 327). So, let's not wait for editorial and write our solutions ideas here.
» 7 years ago, # | +1 Wow, outstanding performance: http://codeforces.com/contests/with/latisel
• » » 7 years ago, # ^ | ← Rev. 2 → +9 Cool, i was checking his past contest history I expect him to do 100+ unsuccessful hacks in next round :P
• » » » 7 years ago, # ^ | +5 Well my friend you haven't seen this:codeforces.com/profile/bus
• » » » » 7 years ago, # ^ | +13 I think Bus Lost His Break :P
• » » » » » 7 years ago, # ^ | ← Rev. 2 → 0 If he wanted to he could Run Over the competition
• » » » 7 years ago, # ^ | 0 " Ops! I have a successful hack! :| " latisel says.
• » » 7 years ago, # ^ | 0 I guess he want to stay at Div2
» 7 years ago, # | 0 Hmmm what's wrong here? It failed at the aaaaaaa... case, where the answer is the number of a's, but my answered was one less. codeforces.com/contest/625/submission/15869860
• » » 7 years ago, # ^ | +5 Array char is needing one extra cell for null character after end. Sorry if my English is bad, I try hard learning it.
» 7 years ago, # | ← Rev. 2 → +3 please, don't say anything before the contest.Problem set would be too easy. or Problem set would be too hard.
» 7 years ago, # | +6 I suspect the test data of problem A is very weak, since 15867499 passed system test. But it should TLE at this case: 1000000000000000000 2 500000000000000001 500000000000000000
» 7 years ago, # | 0 DO we have editorials for this one ?
• » » 7 years ago, # ^ | +1
» 7 years ago, # | +8 Can someone help me find the problem with my O(n) B?http://codeforces.com/contest/625/submission/15877046I feel like it's the correct strategy implemented right... but I'm getting off by 1 on test 26.thanks!
• » » 7 years ago, # ^ | +12 aaab aabAns should be 1. Yours gives zero
• » » » 7 years ago, # ^ | 0 Thank you! I see the problem! Cheers. And as the_art_of_war says, the easier way is of course to just go the simple way so you can't run in to tricky edge cases like this.
• » » 7 years ago, # ^ | +15 I think the length of second string is special so small that you can solve it with O(n*m) using two cycles. If you want to make solution with O(n) you should use special algorithms with strings like Z-function or prefix-function.Maybe there is more easier way to get solution with O(n).Sorry for my bad english.
» 7 years ago, # | 0 Can someone give an idea how to solve 4th one ??
• » » 7 years ago, # ^ | +18 Editorial is already uploaded =)http://codeforces.com/blog/entry/23342
» 7 years ago, # | 0 Three WA on Question 1 give me the way to hack 5 solutions :)
» 7 years ago, # | 0 Auto comment: topic has been updated by GlebsHP (previous revision, new revision, compare).
» 7 years ago, # | 0 Happy Chinese new year!Can someone tell me how to solve the A question?I see some answer:they all used (n-b)/(b-c),but I don't know why?
• » » 7 years ago, # ^ | 0 that is it!from others: can do it in O(1): if buying and selling a glass one is more expensive than buying a plastic one, only get plastics if not, buy glass bottles until you can't do so anymore, then buy plastics with the rest of the money implemented here: http://codeforces.com/contest/625/submission/15870223
• » » » 7 years ago, # ^ | 0 My question is why n should minus b?My first use n,but I know I am wrong.Can you help me?
• » » » » 7 years ago, # ^ | 0 If you don't have enough money(like n
» 7 years ago, # | 0 Somebody help me! What the problem with that solution? Problem B.
• » » 7 years ago, # ^ | 0 Consider this case: aab ab
• » » » 7 years ago, # ^ | 0 Thanks!
» 7 years ago, # | 0 Where is the editorial?
• » » 7 years ago, # ^ | 0
» 7 years ago, # | -9 Author of B sucks: 15924760 | 2023-01-28 16:56:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23416273295879364, "perplexity": 3241.847820161169}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00744.warc.gz"} |
https://pos.sissa.it/364/500/ | Volume 364 - European Physical Society Conference on High Energy Physics (EPS-HEP2019) - QCD and Hadronic Physics
Diffractive PDF determination from HERA inclusive and jet data at NNLO QCD
R. Zlebcik* on behalf of the H1 Collaboration
*corresponding author
Full text: pdf
Pre-published on: June 10, 2020
Published on:
Abstract
A new fit of diffractive parton distribution functions (DPDFs) to the HERA inclusive and jet data in diffractive deep-inelastic scattering (DDIS) at next-to-next-to-leading order accuracy (NNLO) is presented. The inclusion of the most comprehensive dijet cross section data, together with their NNLO predictions, provide enhanced constraints to the gluon component of the DPDF, which is of particular importance for diffractive PDFs. Compared to previous HERA fits, the presented fit includes the high-precision HERA II data of the H1 collaboration, which corresponds to a 40-fold increase in luminosity for inclusive data (six-fold increase for jet data). In addition to the inclusive sample at nominal centre-of-mass energy $\sqrt{s} = 319\,\text{GeV}$, inclusive H1 data at $252\,\text{GeV}$ and $225\,\text{GeV}$ are included. The extracted DPDFs are compared to previous DPDF fits and are used to predict cross sections for a large number of available measurements and differential observables.
How to cite
Metadata are provided both in "article" format (very similar to INSPIRE) as this helps creating very compact bibliographies which can be beneficial to authors and readers, and in "proceeding" format which is more detailed and complete.
Open Access | 2020-09-22 11:48:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5348827242851257, "perplexity": 4997.973964914693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400205950.35/warc/CC-MAIN-20200922094539-20200922124539-00623.warc.gz"} |
http://fieldlines.org/2011/10/19/an-introduction-to-theoretical-physics-part-3-of-5/ | # An Introduction to Theoretical Physics (Part 3 of 5)
Posted on 2011/10/19 by
## Introduction to Part 3
Having solved the initial problem, calculating the resistance of a cube of identical resistors, what can be done to make the equation more general? The first abstraction was to move from 2 dimensional circuit diagrams to trying to solve 3 dimensional circuit problems.
Could we investigate higher dimensions? What about a 4 dimensional “cube”? Let’s see if we can answer this:
What is the total resistance between two opposite vertices of a 4-dimensional cube of identical resistors?
As previously mentioned, questioning the purpose of the asking something like this is unhelpful for a theoretical physicist. Of course, intuition will guide what is worth spending time on and which questions should be dismissed, but there are no strict rules. If the questions are motivated purely by pragmatic considerations, it could stultify an innovative discovery.
Carl Sagan in his book “Demon Haunted World” wonderfully articulates the public advantage of funding to undirected research – inquiry that is motivated only by a physicist’s curiosity.
If Queen Victoria had ever called an urgent meeting of her counselors, and ordered them to invent the equivalent of radio and television, it is unlikely that any of them would have imagined the path to lead through the experiments of Ampere, Biot, Oersted and Faraday… They would, I think, have gotten nowhere. Meanwhile, on his own, driven only by curiosity, costing the government almost nothing, himself unaware that he was laying the ground for the [radio and the television], [Maxwell] was scribbling away. Its doubtful whether the self-effacing, unsociable Mr. Maxwell would even have been thought of to perform such a study. If he had, probably the government would have been telling him what to think about and what not, impeding rather than inducing his great discovery.
Carl Sagan, “Demon Haunted World“, Chapter 23: Maxwell and the Nerds. p. 390 [1995]
So, although our present work clearly isn’t dealing with the key question of the day, as Maxwell was investigating, working by our own curiosity mirrors a key aspect of being a theoretical physicist. We can always look to see if there are any practical applications after we reach a solution!
To answer the above question, we will to know exactly what it means. There are two things we need to learn:
• What is a “4-dimensional cube”?
• What are “two opposite vertices” on a 4-dimensional cube?
Let’s begin with the first question.
## A Tesseract
A 4 dimensional cube, on first hearing, sound like an oxymoron. Cubes are 3 dimensional, everybody knows that. Well, we are using “cube” in a more abstract way. Here is a drawing of a 4-dimensional cube (or a “tesseract”).
“I learned very early the difference between knowing the name of something and knowing something.”
Richard Feynman ”What is Science?”, presented at the fifteenth annual meeting of the National Science Teachers Association, in New York City (1966) published in The Physics Teacher Vol. 7, issue 6 (1969)
Knowing the name “tesseract” doesn’t really help us at all, its just a way of keeping track of what we are talking about. You don’t know anything about it from just its name.
To begin, we could notice how we compared the square and the cube by counting how many edges came from each vertex. For the square, we had two vertices meeting at each vertex and noted it was a 2-dimensional object. The cube has 3 edges joining to each vertex and is a 3-dimensional object. Now, you might see where i’m going with this. The tesseract, a 4 dimensional object, has 4 edges joining at each vertex. Check the diagram to see if this is the case.
However, it is not just the number of edges from each vertex that determines the dimension. It was noted that for the square and the cube, all the edges from one vertex were all at right angles to one another. This is the difficult (or perhaps impossible) thing to picture about the tesseract. From one vertex, there are 4 edges, all at right angle to each other!
As far as we know, we live in 3-dimensional space – at least we certainly all operate under that assumption. So the tesseract is impossible to represent fully in our space. Similarly, the cube was impossible to represent on our 2D computer screen. The flat screen was only able to reproduce a projection of the cube, and the we interpret it as a 3-dimensional object. The 3 edges from each vertex were no longer all at right angles to each other, but it was still all connect together the same way.
[Interpreting flat images as 3D is not something you only do with diagram, pictures or painting - all of your visual information from the world is received as 2-dimensional images. The only difference is that with a real 3-dimensional object, you receive two slightly different flat images, one on each retina. This defines your perspective and your eye-brain interprets this as one 3D picture of the world.]
So the above diagram of the tesseract is a projection – it’s connected together the same way, but the edges from each vertex aren’t all at right angles to each other any more. It’s also a double projection. The diagram is convincing you that you’re seeing a 3-dimensional projection of a 4-dimensional object. But how can this be on a flat 2D screen? You are actually looking at a 2D projection of a 3D projection of a 4D object!
## Building A Tessearct
We could just as well call the tesseract the 4-cube, which is likely to be more beneficial. That way, we can easily talk about 3-cubes (just normal cubes), 2-cubes (squares), 1-cubes (lines) and 0-cubes (points). Choosing “cube” as the comparator is a natural choice – we live in 3 dimensions. It is often the case that we choose the most familiar thing as a reference. For example, having ten fingers is probably why we count in 10s, and not 2s as computers do or something else.
Talking about 2-cubes and 4-cubes is especially advantageous when we consider another way to think about the tesseract. It is possible to construct it from the lower dimensional “cubes”. Lets start with just a point:
This is a 0-cube. It 0-dimensional object: 1 vertex, joined to zero edges. If we take another 0-cube, we can join them with an edge:
We now have a 1-cube (a line). It is a 1-dimensional object: each vertex is joined to 1 edge. Joining two of these together gives us a 2-cube (a square):
The 2-cube was made by taking each vertex of a 1-cube and joining it to one (and only one) vertex of the other 1-cube. We carry this procedure on to get the 3-cube:
And lastly, for our purposes at least, we can construct a 4-cube by pairing up vertices from two 3-cubes:
I’ve draw this 4-cube in such a way that its projection in 2D is much more symmetric than our original picture. Take time looking at it until you’re convinced it’s two cubes joined together. Also, to confirm that it is a tesseract, check each vertex has 4 edges joined to it.
## “Opposite” Vertices
In the original diagram of the tesseract above, it was represented as one large cube connected to a smaller cube. This representation made it more difficult to see where the “opposite vertices” are.
For the square and the cube, the answer is intuitive - the pairs of opposite vertices are found by drawing diagonal lines through the shapes. There aren’t any for the 0-cube, and the two vertices of the 1-cube are, by definition, opposite. Mathematicians like to call this a ‘trivial’ case.
For the 4-cube, we’ll have to think a little more carefully and analyze our intuition. What constitutes “opposite” in the 2-cube and the 3-cube?
For the square, if we move from one vertex along the edges, it take at least 2 moves to get to the opposite vertex. For the cube, it takes at least 3 edges to get to the opposite vertex. Of course, it is possible for it to take more than 3 edges – you could draw a path that twists and turns, double-backs and it could take as many edges as you like. But opposite vertices are joined by at least as many edges as there are dimensions.
So, we have a very general insight that isn’t restircted to the 4-cube. It would be fitting to condense the above considerations into a statement about n-cubes, which are n-dimensional objects. Substitute any number in for “n” and you have a specific type of n-dimensional cube. It’s just one more abstraction.
Opposite vertices on an n-cube are any two vertices where the minimum path connecting them requires moving through n edges.
So for our first diagram, with the big and small cube, the opposite vertices could be the front-top-left vertex of the big cube and the back-bottom-right vertex of the small cube. It takes a minimum of 4 edges as it is a 4-dimensional object. Check this for yourself – it’s not intuitive!
The diagram above highlights one example of a minimum path connecting those two vertices. There are several others. All the other vertices have opposites as well – they all pair up.
For the symmetric diagram, it is slightly easier to see. On the outer ring of the 2-dimensional projection, the opposite vertices are simply directly opposite on the diagram. The same is true for the inner ring. For example:
Once again, we find the benefit of presenting the same thing from a different point of view. In this case, the symmetry enables us to see something that was one obscure, now as intuitively as we saw the lower dimensional examples. Now we have a better understanding of this mathematical object, let’s see if we can tackle the physical problem.
## The Resistor Tesseract
Let’s start talking about resistors instead of edges, and look at the physics of solving the our latest question. The first think to check is whether our analysis with equipotentials will work for the tesseract, as it did for the cube. The first two steps are as before: connect the vertices that are one resistor away from the start, and one resistor away from the end.
Nothing new so far. What about the remaining 6 vertices? Well, we can consider that if we are successful with this method, we will end up with a diagram that has joined the equipotential vertices together. How many would there be? We might expect no more than 3, as this would give 4 edges between the start and end vertices. These were chosen as ‘opposites’, so we would expect there to be 4 edges between them.
With this analysis, it would seem that there is only one more equipotential to find, and so all the remaining 6 vertices can collapse into one. We are left with this:
You can now see why the resistor symbols have been left out of the diagrams!
We’ve now finished collecting the vertices into equipotentials. But how many resistors are on each side of that central equipotential? We can see how many resistors are unaccounted for by counting how many there are on a tesseract. Using the first diagram turns out to be easier. Each cube has 12 edges, we joined two cubes by pairing verticies, and each cube has 8 vertices. So that means a tesseract has 12+12+8=32 edges. 4 resistors at the start and 4 at the end have been accounted for with our equipotential argument, so that leaves 32-4-4=24 resistors.
The solution is not as difficult as you might think. All we need to do is consider symmetry, which is why this diagram is so much more useful than the initial representation of the tesseract.
The total resistance of the network has to be exactly the same no matter which vertex we choose as the start, as long as we pick opposite vertices. So, the resistor network must be symmetric in this respect. If we flipped the diagram around, top to bottom, we’d get the same answer.
This means we needn’t worry about studying our diagram too carefully to find out where these resistors are, the problem has been solved with a symmetry argument alone. There must be 12 on each side. However, for completeness, here’s the finished diagram:
## Solution for the Identical Resistor 4-cube
The above diagram can now be convered into more traditional notation, putting the resistor symbols back in:
I’ve kept the presentation the same as the solution to the identical resistor cube from Part 2. Again, the blue numbers keep track of how many vertices were squashed into one by noticing they were equipotentials, the red counts how many resistors are in parallel between equipotentials.
Using the same equations as before, the solution is as follows:
$R_{T}=\frac{R}{4}+\frac{R}{12}+\frac{R}{12}+\frac{R}{4}$
$R_{T}=\frac{2}{3}R$
## Summary
The use of mathematics in physics is often spoken about with a sense of mystery. We are asked why it is that the physical world can be described with abstract mathematics. One fairly straightforward answer is to notice that mathematics is a systematic analysis of the symmetry, quantity and similarity we see in objects. Mathematics is an abstraction of this process that considers ideas that don’t need any corresponding physical observation.
Every so often we find that there are things we require from mathematics that can be applied to physics. This can be because we didn’t notice a symmetry in the real world that a mathematical idea suddenly clarified, or an experiment revealed something that wasn’t obvious about the world, but the symmetry is clear. Or an abstraction has been made from physical observation to abstract variables, such as energy. There are many other examples too.
In our case, we have tried to use real world physics and apply it to a mathematical object that has no physical analogue. And yet, in Part 4, we will see how abstracting still further will result in a synthesis of familiar ideas, resulting in a greater understanding in both mathematics and physics. | 2017-09-20 12:59:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6912848949432373, "perplexity": 898.4237685256642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687281.63/warc/CC-MAIN-20170920123428-20170920143428-00609.warc.gz"} |
http://crypto.stackexchange.com/questions?page=131&sort=active | # All Questions
706 views
### How do decryption algorithms determine whether your attempted passphrase is correct?
Judging by the algorithm on the Blowfish Wikipedia article, there is no way for the process to fail with an error. How then does GnuPG know when to tell you your password is correct when decrypting a ...
201 views
### Performance for Subbytes in AES
I´m implementing the AES-encrypt algorithm for the SSE2 instruction set. For the Subbyte function i use movzx, having loaded ...
134 views
Two or three years ago, I was reading something interesting, but then I lost the paper. The paper described building secure and fast P2P networks (not like tor or I2P). Unfortunately, I only remember ...
7k views
### What is the MD5 collision with the smallest input values?
I am interested in MD5 collisions for small input messages. The collision examples given at http://www.mscs.dal.ca/~selinger/md5collision/ show two different strings, where only a tiny amount of data ...
116 views
### Diffie Hellman Key Exchange (Finding # of bits/digits of secret key)
As we know that in DH key exchange, both Alice and Bob would agree on the parameter $p$ and $g$. Next, Alice would choose a secret key $A$ while Bob would choose a secret key $B$. Alice would compute ...
944 views
### ECC public key encryption and authentication - ECIES with ECDSA vs ECDH with AES
I'm currently working on a project where I want to establish a secure and authenticated communication channel between to entities, using Elliptic Curve Cryptography. Now I'm not really sure how to ...
80 views
### What cryptographic paradigm is appropriate for this use case?
This is a very newbie question... I have to quickly implement a solution for our application that works like this... We have a binary executable that is given a configuration file. I need to encrypt ...
279 views
### How can I validate my implementation of Ansi 9.19?
I implemented an Ansi 9.19 mac generator in java, but now I don't know how to validate its functionality. I couldn't find any sample of a {plain text,key,mac} on the internet, and I want to know if ...
158 views
### Can I make a PRNG that is secure even when state can be modified by user?
I am interested in making a PRNG which, after being initially seeded, can accept and incorporate client data as the only ongoing source of "entropy". It is not directly for a cryptographic purpose, ...
1k views
### What is wrong with AES-CTR-HMAC-SHA256 - or why is it not in TLS?
It seems the only specified CTR mode ciphers in TLS are all GCM based. GCM ciphers run AES-CTR and do authenticated encryption with a MAC based on Galois-field ...
88 views
### x509 CA trust question
I'm trying to understand the logic of CAs, trust and client certificates. I have a general understanding but am having a tough time bridging some gaps. In a hypothetical situation a software system ...
706 views
### How does Google's “authenticator number generator” work?
I have two-factor authentication enabled on my Google account, and I have this app on my phone which generates a number I have to type when I'm logging in to Google-Mail. But I don't understand how ...
99 views
### Simplified Key Wrapping to Achieve Only Confidentiality?
I'm looking for help in understanding why the algorithm described here would not deliver adequate protection of confidentiality. We don't care about other criteria such as authenticity or integrity. ...
105 views
Pls I need explanation on How i can design a Cryptography that use multiple passwords or passphrase to open a safe(Lock). For example, if i need five people to unlock a secured device whereby all the ...
589 views
### Is pairing based cryptography ready for productive use?
I'm currently testing one among those many interesting cryptographic protocols based on bilinear maps. It's quite hard to understand the underlying fundamentals, especially since there are several ...
274 views
### Reusing PGP key when generating SSL Certificate Authority?
Does it make sense to reuse some of the keys from my PGP (gnupg, gpg) keyblock when generating my own SSL Certificate Authority (a toy one for myself and a few web services of mine)? In an ideal ...
417 views
59 views
### Is the product of two primes only factorisable by those two primes? [closed]
The question is as per the subject line. Does the fundamental theorem of arithmetic imply (prove?) that if I multiply two primes, then those two primes are the only factors of the product? I.e. 17 x ... | 2016-05-26 18:28:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6557637453079224, "perplexity": 2172.962427173551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276131.97/warc/CC-MAIN-20160524002116-00157-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://react-notion-x-demo.transitivebullsh.it/b4e9e4e03677413481a4910e8bd328c1 | # Math equations
On any Notion page, you can display beautifully formatted, comprehensible mathematical characters, expressions and equations. Notion uses the library to render math equations, which supports a large subset of functions.
This feature comes in handy for note taking, technical documentation, homework, or anywhere you need to use fractions and equations.
## Add a math equation as a block
• Click the + that appears to the left when you hover over a new line. Scroll down and choose Inline equation in the dropdown. Alternatively, type /math and press enter.
• With the new equation block in place, click inside it to type or paste your equation, or use cmd/ctrl + enter/return.
#### Arrange math blocks
• Use drag and drop to move math equations around your page. Hover over the block, then use the ⋮⋮ icon that appears as a handle to drag it.
• You can also drag and drop your math equations into columns.
## Add a math equation inline
Just like you can format text in Notion as bold, strikethrough, or code notation, you can also format your text as a math equation, like this quadratic formula:
There are a few different ways to add math equations inline, and all are keyboard friendly.
#### With text shortcuts
• Type two dollar signs, followed by your equation. When you close your formula with two more dollar signs, it will turn into an equation.
• For example, $$y = mx + b$$ becomes
#### With the equation input
• To open the equation input, use the keyboard shortcut ctrl/cmd + shift + E.
• Type your equation into the input, and press enter.
• Highlight an equation in your paragraph.
• Click the button in the formatting menu that appears, or use the keyboard shortcut ctrl/cmd + shift + E. Your selected text should turn into an equation.
#### Edit an inline equation
• You can edit an existing equation by clicking on it. This will open the equation input, and any changes you make to the equation will reflect live on your page.
• You can also use the arrow keys on your keyboard to navigate to an equation. The equation input will open when your cursor passes over the equation, and the equation input will close if you continue pressing the arrow key in the same direction.
## Recognized symbols
Notion supports the full scope of symbols and operations within the language. For a full list of supported functions, please visit the links below:
👉
Note: spans most but not all mathematical notation supported by . If your equation isn't rendering correctly in Notion, please visit the links above to see if that function is supported.
## FAQs
I don't know LaTeX but want to use Notion's equations. How can I get started?
It's easy to get started using LaTeX for homework, class notes, or lab reports. Basic arithmetic and variable names are valid in LaTeX already:
• y = mx + b renders as
• a^2 + 2ab + b^2 becomes
If you just need to look up specific symbols, Detexify is a great resource that allows you to draw the symbol and look up the corresponding LaTeX code.
To learn more powerful LaTeX, Overleaf documentation is a great place to learn the basics:
Note that Overleaf is a full-featured LaTeX editor, so not everything in the documentation is supported in Notion. If in doubt, you can always check this list of Supported Functions or alphabetized Support Table to determine which functions are supported.
Why can't I render a specific equation? What formulas/libraries do you support? Can you add support for a formula or library I want to use?
Notion uses the KaTeX library to render equations. KaTeX supports a large subset of LaTeX, documented on their list of Supported Functions and alphabetized in this Support Table. To request support for new functions or environments, you can open an issue on the KaTeX GitHub project.
I'm trying to use the align environment and it's not working!
From the Common Issues page of the KaTeX documentation:
"KaTeX does not support the align environment because LaTeX doesn't support align in math mode. The aligned environment offers the same functionality but in math mode, so use that instead."
Can I use inline equations for superscript and subscript?
It's possible to use inline equations as a "hack" for superscript and subscript, but it does mean that the text will be an equation, in "equation font."
• Here's an example of text.
• \text{super}^\text{script}
• Here's an example of text.
• \text{sub}_\text{script}
The basic idea is to use ^ for superscript and _ for subscript. By default, text will appear in italics, without any spaces:
The solution is to wrap each piece of text in \text{...}, which tells to process the contents of the curly braces in text mode, not math mode.
What happens when I copy/paste inline LaTeX?
It will give you the source code.
How do I use Notion for chemistry?
Notion supports the \\ce and \\pu chemical equation macros from the mhchem extension. These shortcuts allow you to typeset beautiful chemical and mathematical equations quickly and easily.
How do I convert between inline and block equations?
If you have a block containing an inline equation, you can use the "Turn into" menu to make it a block equation:
You can also turn a block equation into inline text:
## Related guides
Something we didn't cover? Message us in the app by clicking ? at the bottom right on desktop (or in your sidebar on mobile). Or email us at team@makenotion.com ✌️ | 2022-08-10 21:10:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5516266226768494, "perplexity": 2314.4192494723893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00582.warc.gz"} |
https://deepai.org/publication/an-inexact-manifold-augmented-lagrangian-method-for-adaptive-sparse-canonical-correlation-analysis-with-trace-lasso-regularization | # An Inexact Manifold Augmented Lagrangian Method for Adaptive Sparse Canonical Correlation Analysis with Trace Lasso Regularization
Canonical correlation analysis (CCA for short) describes the relationship between two sets of variables by finding some linear combinations of these variables that maximizing the correlation coefficient. However, in high-dimensional settings where the number of variables exceeds sample size, or in the case of that the variables are highly correlated, the traditional CCA is no longer appropriate. In this paper, an adaptive sparse version of CCA (ASCCA for short) is proposed by using the trace Lasso regularization. The proposed ASCCA reduces the instability of the estimator when the covariates are highly correlated, and thus improves its interpretation. The ASCCA is further reformulated to an optimization problem on Riemannian manifolds, and an manifold inexact augmented Lagrangian method is then proposed for the resulting optimization problem. The performance of the ASCCA is compared with the other sparse CCA techniques in different simulation settings, which illustrates that the ASCCA is feasible and efficient.
## Authors
• 1 publication
• 1 publication
• ### Sparse canonical correlation analysis
Canonical correlation analysis was proposed by Hotelling [6] and it meas...
05/30/2017 ∙ by Xiaotong Suo, et al. ∙ 0
• ### Trace Lasso: a trace norm regularization for correlated designs
Using the ℓ_1-norm to regularize the estimation of the parameter vector ...
09/09/2011 ∙ by Edouard Grave, et al. ∙ 0
• ### A Multi-Way Correlation Coefficient
Pearson's correlation is an important summary measure of the amount of d...
03/05/2020 ∙ by Benjamin M. Taylor, et al. ∙ 0
• ### Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables
Sparse regularization such as ℓ_1 regularization is a quite powerful and...
11/06/2017 ∙ by Masaaki Takada, et al. ∙ 0
• ### Adaptive Canonical Correlation Analysis Based On Matrix Manifolds
In this paper, we formulate the Canonical Correlation Analysis (CCA) pro...
06/27/2012 ∙ by Florian Yger, et al. ∙ 0
• ### Deep Gated Canonical Correlation Analysis
Canonical Correlation Analysis (CCA) models can extract informative corr...
10/12/2020 ∙ by Ofir Lindenbaum, et al. ∙ 5
• ### Significance testing for canonical correlation analysis in high dimensions
We consider the problem of testing for the presence of linear relationsh...
10/17/2020 ∙ by Ian W. McKeague, et al. ∙ 0
##### This week in AI
Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
## 1 Introduction
Canonical correlation analysis (CCA for short) firstly proposed by Hotelling [11] aims to tackle the associations between two sets of variables. It has wide applications in many important fields such as biology [26, 18, 13], medicine [5], image analysis [7, 15], etc.
Suppose that there are two data sets: containing variables and containing variables, both are obtained from observations. The CCA seeks two linear combinations of these variables from and with maximal correlation coefficient. Specially, let
Σxx=1nXTX,Σyy=1nYTY
be the sample covariance matrices of and respectively, and be the sample cross-covariance matrix, then the CCA finds a pair such that
corr(Xu,Yv)=uTΣxyv√uTΣxxu√vTΣyyv (1.1)
is maximized. The new variables and are canonical variables, and the correlations between canonical variables are called canonical correlations. The canonical variables and
can be respectively obtained by the eigenvectors of matrix
Σ−1xxΣxyΣ−1yyΣyx.
The canonical correlations are given by the positive square root of those eigenvalues. Since the CCA model (
1.1) is in form of fractional, it is difficult to optimize. A equivalent formulation of CCA is given by
{maxu,v uTΣxyvs.t. uTΣxxu=1,vTΣyyv=1,
which can be regarded as an optimization problem on the generalized Stiefel manifolds. However, a potential disadvantage of the CCA is that, the learned solution is a linear combination of all original variables, which brings down the interpretability. If the number of variables exceeds sample size, traditional CCA cannot be performed due to that and are singular. Hence, many researchers proposed various sparse CCA (SCCA) to handle the case that the number of variables exceeds observations, and to improve the interpretability of canonical variables by restricting the linear combinations to a subset of original variables.
In this paper, we propose an adaptive sparse CCA model by incorporating the trace Lasso regularization. The matrix version of trace Lasso regularization can be adopted to both highly correlated and uncorrelated data. Our major contributions are summarized in follows:
1. We present a matrix version of trace Lasso regularization, and show that the new regularization function enjoys the properties of original trace Lasso.
2. By introducing trace Lasso regularization into the CCA model, we obtain an adaptive sparse CCA model (ASCCA). To our knowledge, our ASCCA is the first to takes the data correlation into account in the CCA model. In addition, our model consider multiple variables simultaneously.
3. The new model is reformulated to an optimization problem on the generalized Stiefel manifold. An manifold inexact augmented Lagrangian method is proposed for the resulted optimization problem, and the convergence is established under some assumptions.
4. The experimental results demonstrate that, the proposed ASCCA is superior to some existing sparse CCA models.
The rest of the paper is organized as follows. Section 2 briefly gives some reviews on the related works. Section 3 proposes an adaptive sparse version of the CCA introduced by the new trace Lasso regularization. Section 4 provides an optimization reformulation and the manifold inexact augmented Lagrangian method for the new model, and gives the convergence analysis. In Section 5, a simulation study is provided to show the validity and efficiency of the proposed method. Section 6 concludes this paper with some final remarks.
## 2 Related works
It is well known that, if the sample size exceeds dimension, the traditional CCA does not perform. To overcome this difficulty, various methods were proposed via incorporate different regularization function. Vinod [20]
proposed a canonical ridge, which is an adaptation of the ridge regression for the CCA framework proposed by Hoerl and Kennard
[10], and introduced an efficient sparsity penalty strategy. After that, various approaches for sparse CCA (SCCA for short) were proposed in literature, which includes regularization[17, 25], elastic net [21], group sparse and structured sparse [14, 4], etc. There also exists some limitations. If there is a group of variables which the pairwise correlation is high, the Lasso tends to select only one variable from this group, which may lead some misunderstands to the truth. Group sparse regularization needs the prior knowledge of group, which is unrealistic in some real applications. The proposed adaptive sparse CCA model utilized the new trace Lasso regularization, which incorporates data matrix into regularization, to adaptively deal with the correlation of covariation matrix.
The original SCCA model is difficult to handle, so many researchers simplify it by assuming that and are diagonal matrices or identity matrices. Parkhomenko, et al [17] assume that, the covariance matrices
are the identity matrices, and used a sparse singular value decomposition to derive sparse singular vectors. Wilms and Croux
[24] converted the SCCA model into a penalized regression framework. Suo [19] presented an approximated SCCA model as follows
{minu,v−uTΣxyv+λ1∥u∥1+λ2∥v∥1s.t. uTΣxxu≤1,vTΣxxv≤1,
and problem (2) was solved by a linearized alternating direction method of multipliers (LADMM). Witten [25] further relaxed (2) to
{minu,v−uTΣxyvs.t. ∥u∥22≤1,∥v∥22≤1,P1(u)≤c1,P2(v)≤c2,
where and are some regularizations for sparsity, and then developed a penalized matrix decomposition algorithm to solve model (2). Focusing on a sparse version of the original CCA model (1), Gao [8] proposed a two-stage method by a convex relaxation of CCA model. For the matrix case, many researchers adopted the residual model to obtain the high-order canonical variables [17, 25, 19]. In this paper, we get multiple variables simultaneously in our new model. In addition, all results on the matrix case mentioned above have not given convergence analysis for their algorithms, we proposed an efficient method to solve our new model, and provided the convergence analysis.
The original trace Lasso was proposed by Grave [9]. Trace Lasso regularization was successful applied to various scenarios including subspace clustering [23], sparse representation classification [22] and subspace segment [16], and so on. However, they only considered the trace Lasso regularization in vector case in literature. In this paper, we generalize the original trace Lasso regularization to matrix case, and adopt it as a new regularization for the SCCA, and get an adaptive SCCA model.
#### Notations:
We use capital and lowercase symbols to represent matrix and vector, respectively. Let denote the vector of all 1’s, be a vector whose -th entry is 1 and 0 for others, is a diagonal matrix where the -th diagonal entry is , and be a vector where the -th entry is . Let and denote the -th row and -th column of , be the trace of . For a vector , let be the and norm. For a matrix , let be the norm, and denote the Frobenius norm and nuclear norm respectively, denote the operator norm.
## 3 Adaptive sparse CCA using trace Lasso regularization
### 3.1 Trace Lasso in vector case
Consider the following linear estimator:
minw12∥Xw−y∥+λΩ(w)
where is a data matrix. The trace Lasso is a correlation based penalized norm proposed by Grave et al [9] for balancing the and norm. It is defined as follows
Ω(w)=∥Xdiag(w)∥∗
where is nuclear norm. A main advantage of trace Lasso being superior to other norm is that, the trace Lasso involves the data matrix , makes it adaptive to the correlation of data. As shown in [9], if each column of is normalized to
, the trace Lasso interpolates between the
norm and norm in the sense of
∥w∥1≤∥Xdiag(w)∥∗≤∥w∥2.
The inequality are tight. To see this, if the data are uncorrelated (), trace Lasso reduce to , and if the data are highly correlated (), trace Lasso equals to .
### 3.2 Trace Lasso in matrix case
Let , define a linear operator as
AX(W)=(XDiag(W⋅1),⋯,XDiag(W⋅r))
A∗X(M)=(diag(XTM1),⋯,diag(XTMr))
where denotes -th block matrix of . Then, the trace Lasso in matrix case is defined as follows
Ω(W)=∥AX(W)∥∗
It is easy to show that, the trace Lasso regularizer in matrix case (3.2) has similar properties to that in vector case. If each column of is normalized, then the linear operator can be rewritten to
AX(W)=r∑i=1p∑j=1X⋅jWij¯eTij=p∑j=1X⋅j⋅(r∑i=1wij¯eTij)
where is an unit vector in which the -th component is 1 and the others are 0. There are two special case:
1. If the data (i.e., column vectors of ) are uncorrelated, i.e., . Then (3.2) gives a singular value decomposition of . In the case, trace Lasso (3.2) reduces to the norm
∥AX(W)∥∗=p∑j=1∥X⋅j∥2∥Wj⋅∥2=∥W∥2,1.
2. If the data are highly correlated, especially if all columns of are identical and have unit size, we have
AX(W)=X⋅1⋅p∑j=1r∑i=1Wij¯eTij=X⋅1⋅(vec(W))T,
where . Then trace Lasso (3.2) reduces to Frobenius norm
∥AX(W)∥∗=∥X⋅1⋅(vec(W))T∥∗=∥X⋅1∥2⋅∥vec(W)∥2=∥W∥F.
The following proposition show that the trace Lasso (3.2) in matrix case is adaptive to the correlation of data, which is similar to the original trace Lasso.
###### Proposition 3.1.
Let , and each column of is normalized, . Then
∥W∥F≤∥AX(W)∥∗≤√r∥W∥2,1.
###### Proof..
We first show that . Specifically, we have
∥AX(W)∥2F=r∑i=1∥Xdiag(W⋅i)∥2F=r∑i=1p∑j=1W2ij∥X⋅j∥22=r∑i=1p∑j=1W2ij=∥W∥2F
Then, for the first inequality of (3.1) we have
∥W∥F=∥AX(W)∥F≤∥AX(W)∥∗.
Denote the -th column of the -th submatrix in by , and let , then for the second inequality of (3.1) we have
∥AX(W)∥∗=max∥M∥op≤1⟨M,AX(W)⟩=max∥M∥2≤1⟨A∗X(M),W⟩=max∥M∥2≤1r∑i=1WT⋅idiag(XTMi)=max∥M∥2≤1p∑j=1XT⋅j(r∑i=1WijMi⋅j)≤max∥M∥2≤1p∑j=1∥X⋅j∥2⋅∥∥ ∥∥r∑i=1WijMi⋅j∥∥ ∥∥2≤max∥M∥2≤1p∑j=1∥X⋅j∥2⋅∥^Mj∥F∥Wj⋅∥2≤√rp∑j=1∥Wj⋅∥2=√r∥W∥2,1.
The first equality used the fact that the dual norm of the trace norm is the operator norm. The last inequality used that , and which deduces . ∎
###### Remark 3.1.
If , then Proposition 3.1 is indeed Proposition 3 in [9].
### 3.3 Regression framework of the adaptive SCCA
Given two data matrices and on the same set of observations, where is the sample size, and are the feature numbers. Without loss of generality, we assume that data matrices and are mean centered. By and , the CCA problem can be rewritten as
{(u∗,v∗)=argmaxu,v uTXTYv,s.t.uTXTXu=1,vTYTYv=1. (3.1)
For multiple canonical vectors, let and where denote the -th pair of the canonical vectors, the multiple CCA problem is
(3.2)
The CCA problem (3.2) can be reformulated to a constrained bilinear regression problem of the form
To adapt to the dependence of data, we consider an adaptive sparse CCA (SCCA) model with trace Lasso regularization. Specifically, we have
⎧⎪⎨⎪⎩(U∗,V∗)=argminU,V12∥XU−YV∥2F+λu∥AX(U)∥∗+λv∥AY(V)∥∗,s.t.UT(XTX)U=Ir, VT(YTY)V=Ir,
where , and are the penalty parameters, and are linear operators.
## 4 Optimization method for SCCA (3.3)
The SCCA model (3.3) is a nonconvex and nonsmooth optimization problem, and it is difficult to be solved. Riemannian optimization methods are popular to solve a class constrained optimization problem with special structure. Hence, in this section we first reformulate problem (3.3) to a nonsmooth optimization problem on the generalized Stiefel manifolds, then adopt an manifold inexact augmented Lagrangian method in [6] to solve the resulting problem. Finally, we give a convergence analysis of the proposed method.
### 4.1 Augmented Lagrangian scheme
Let , and , then problem (3.3) can be reformulated as
Here, we assume that and are positive define 333If it is not positive define, we can replace by .. Then and can be regarded to generalized Stiefel manifolds, and problem (4.1) is an optimization problem on generalized Stiefel manifolds. We further reformulate (4.1) to
⎧⎪ ⎪ ⎪⎨⎪ ⎪ ⎪⎩(U∗,V∗)=argminU,V12∥XU−YV∥2F+g(P)+h(Q),s.t.AX(U)=P,AY(V)=QU∈M1, V∈M2,
The Lagrangian function associated with (4.1) is given by
L(U,V,P,Q;Λ1,Λ2)=12∥XU−YV∥2F+g(P)+h(Q)−⟨Λ1,AX(U)−P⟩−⟨Λ2,AY(V)−Q⟩,
where and denote the Lagrangian multipliers. Let be a penalty parameter. Then, the corresponding augmented Lagrangian function is given by
Lρ(U,V,P,Q;Λ1,Λ2)=L(U,V,P,Q;Λ1,Λ2)+ρ2∥AX(U)−P∥2F+ρ2∥AY(V)−Q∥2F
Then, the proposed manifold inexact augmented Lagrangian method for (4.1) is summarized in Algorithm 1.
### 4.2 Convergence analysis
Let be a variable formed by concatenating and . Let where . Then, problem (4.1) can be rewritten as
minWF(W) s.t. h(W)=0, W∈N,
where , and is given by
h(W):=(AX(U)−P , AY(V)−Q).
The corresponding augmented Lagrangian can be rewritten as
Lρ(W;Z)=F(W)+r∑i,jZij[h(W)]ij+ρ2r∑i,j[h(W)]2ij
The corresponding KKT condition is given by
where is the Riemannian subdifferential of at . To obtain an efficient implementation of Algorithm 1, we inexactly solve the iteration subproblem (4.1) in which the following stoping criteria is used:
δk∈∂Ψk(Wk+1) and ∥δk∥≤ϵk
where as .
Following Yang, Zhang and Song [27], we give the constraint qualifications of problem (4.2):
###### Definition 4.1 (Licq).
Linear independence constraint qualifications (LICQ) are said to hold at for problem (4.2) if
{grad[h(W)]ij|i=1,⋯,m;j=1,⋯,r} are linearly % independent in TWN.
###### Theorem 4.1.
Suppose is a sequence generated by Algorithm 1, and the stoping criteria (4.2) is hit at the -th iteration. Then the limit point set of is nonempty. Let be a limit point of , and the LICQ holds at . Then is a KKT point of the problem (4.2).
See [6]. ∎
###### Lemma 4.1.
The LICQ always holds at for problem (4.2) .
###### Proof..
Let , then
∇[h1(W)]ij=∇U[h1(W)]ij×∇V[h1(W)]ij×∇P[h1(W)]ij×∇Q[h1(W)]ij,i=1,⋯,n;j=1,⋯,pr,∇[h2(W)]ij=∇U[h2(W)]ij×∇V[h2(W)]ij×∇P[h2(W)]ij×∇Q[h2(W)]ij,i=1,⋯,n;j=1,⋯,qr.
For all , let be a matrix in which the entry at the -th row and the -column is 1, the others are 0. Then
∇P[h1(W)]ij=En×prij,∇Q[h1(W)]ij=0,i=1,⋯,n;j=1,⋯,pr,∇P[h1(W)]ij=0,∇Q[h1(W)]ij=En×qrij,i=1,⋯,n;j=1,⋯,qr.
A basis of the normal cone of at , denoted by , is given by
{ΣxxU(eieTj+ejeTi):i=1,⋯,r,j=i,⋯,r}×{ΣyyV(eieTj+ejeTi):i=1,⋯,r,j=i,⋯,r}.
It is easy to show that, , if there exists such that
n∑i=1pr∑j=1Z1ij∇[h1(W)]ij+n∑i=1qr∑j=1Z2ij∇[h2(W)]ij∈NWN,
then . Since is a submanifold of Euclidean space, it derives immediately from (4.2) that
Which implies that LICQ holds at and completes the proof. ∎
### 4.3 Riemannian gradient method for subproblem (4.1)
In section 4.1, we present an manifold inexact augmented Lagrangian method to solve problem (4.1). The main challenge in the proposed method (Algorithm 1) is to solve subproblem (4.1) efficiently. Problem (4.1) is a nonsmooth problem under manifold constrained. In this subsection, we first get an equivalence smooth optimization problem by using the Moreau envelop technique, then we present Riemannian gradient method to solve the equivalent problem.
The proximal mapping associated with is defined by
Proxp(U)=argminW{p(W)+12∥U−W∥2F}.
For fixed and , we consider
minU,V,P,Q{Ψ(U,V,P,Q):=Lρ(U,V,P,Q;Λ1,Λ2), s.t. U∈M1,V∈M2}
Let
ψ(U,V)=infP,QΨ(U,V,P,Q)=12∥XU−YV∥2F+g(Proxg/ρ(AX(U)−Λ1/ρ))+h(Proxh/ρ(AY(V)−Λ2/ρ))+ρ2∥AX(U)−Λ1/ρ−Proxg/ρ(AX(U)−Λ1/ρ)∥2F+ρ2∥AY(V)−Proxh/ρ(AY(V)−Λ2/ρ)∥2F−12ρ∥Λ1∥2F−12ρ∥Λ2∥2F.
Hence, if
(~U,~V,~P,~Q)=argminU∈M1,V∈M2,P,QΨ(U,V,P,Q),
then can be computed by
⎧⎪ ⎪ ⎪ ⎪⎨⎪ ⎪ ⎪ ⎪⎩(~U,~V)=argminU∈M1,V∈M2ψ(U,V),~P=Proxg/ρ(AX(~U)−Λ1/ρ),~Q=Proxh/ρ(AY(~V)−Λ2/ρ).
Notice that the subproblems for and are proximal operators. Both and in (4.3) are nuclear norm functions, the proximal operator is indeed a singular value shrinkage operator, which is given by:
where and .
Now we focus on the subproblem regarding jointed variable in (4.3). Recall that
(~U,~V)=argminU,V{ψ(U,V), s.t. U∈M1,V∈M2}.
Let and be a product manifold. Then, problem (4.3) can be formulated
~W=argminW{ψ(W), s% .t. W∈M}.
By Lemma B.1, is continuously differentiable in Euclidean space, and its Euclidean gradient is
∇ψ(W)=(∇Uψ(W)∇Vψ(W))=⎛⎜⎝XT(XU−YV)+ρA∗X(U)[AX(U)−1ρΛ1−Proxg/ρ(AX(U)−1ρΛ1)]YT(YV−XU)+ρA∗Y(V)[AY(V)−1ρΛ2−Proxh/ρ(AY(V)−1ρΛ2)]⎞⎟⎠
Since is a Riemannian submanifold in Euclidean space, by lemma (B.2) is retraction smooth, and its Riemannian gradient is | 2021-07-25 06:55:14 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9162428379058838, "perplexity": 911.9999610415686}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151638.93/warc/CC-MAIN-20210725045638-20210725075638-00673.warc.gz"} |
https://ssagesproject.github.io/docs/Basis%20Function%20Sampling.html | # Basis Function Sampling¶
## Introduction¶
The Basis Function Sampling method is a variant of the Continuous Wang-Landau Sampling method developed by Whitmer et al. [25], which biases a PMF through the summation of Kronecker deltas. In this method, the Kronecker delta is approximated by projection of a locally biased histogram to a truncated set of orthogonal basis functions.
$\int_\Xi f_{i}(\vec{\xi}) f_{j}(\vec{\xi}) w(\vec{\xi}) d\vec{\xi} = \delta_{ij}c_{i}$
By projecting a basis set, the system resolves the same properties as the Kronecker deltas, but in a continuous and differentiable manner that lends well towards MD simulations. The current version of SSAGES has support for Chebyshev, Fourier, and Legendre polynomials. Each of these has their defined weight function $$w(\xi)$$ implemented specific to the method. Additionally, any combination of implemented basis sets can be used for any system. It is advised that a periodic basis set (e.g. Fourier) be used with a periodic CV, but it is not required.
The BFS method applies its bias in sweeps of $$N$$ through a histogram ($$H_{i}$$) that is updated at every $$j$$ microstate or timestep. This histogram is then modified to an unbiased partition function estimate ($$\tilde{H_{i}}$$) by exponentiation with the current bias potential ($$\Phi_{i}$$).
$\tilde{H}_{i}(\xi) = H_{i}(\xi) e^{\beta \Phi_{i}}$
A weight function has been added into this implementation ($$W(t_{j})$$) so that the user can define the effective strength of the applied bias. If not chosen, the weight is normalized to the length of the interval.
$Z_{i}(\xi) = \sum_{j} W(t_{j})\tilde{H_{j}}(\xi)$
This final estimate is then projected to the truncated basis set. After this set is evaluated, the coefficients of the basis set are evaluated. This process is iterated until the surface converges, which is determined by the overall update of the coefficients.
$\begin{split}\beta \Phi_{i+1}(\xi) &= \sum_j^N \alpha^i_j L_j(\xi)\\ \alpha^i_j &= \frac{2j + 1}{2} \int_{-1}^1 \log(Z_i(\xi))L_j(\xi)d\xi\end{split}$
## Options & Parameters¶
These are all the options that SSAGES provides for running Basis Function Sampling. In order to add BFS to the JSON file, the method should be labeled as "BFSMethod".
Basis Function Sampling requires the use of a basis set. These are defined by defining an object of “basis_functions”. These have the following properties:
type
Currently can either be Chebyshev, Fourier, or Legendre.
polynomial_order
Order of the polynomial. In the case of Chebyshev or Legendre, this results in an order of input value + 1 as the method takes the 0th order internally. For a Fourier series, the order is the total number of coefficients including the sine and cosine series.
upper_bound
Only exists for Chebyshev and Fourier series. This is the upper bound of the CV.
lower_bound
Only exists for Chebyshev and Fourier series. This is the lower bound of the CV.
CV_restraint_spring_constants
The strength of the springs keeping the system in bounds in a non-periodic system.
CV_restraint_maximums
The upper bounds of each CV in a non-periodic system.
CV_restraint_minimums
The lower bounds of each CV in a non-periodic system.
cycle_frequency
The frequency of updating the projection bias.
frequency
The frequency of each integration step. This should almost always be set to 1.
weight
The weight of each visited histogram step. Should be kept around the same value as the cycle_frequency (usually 0.1 times that).
Note
The system has a higher chance of exploding at higher weight values.
basis_filename
A suffix to name the output file. If not specified, the output will be basis.out.
temperature
The temperature of the simulation.
tolerance
Convergence criteria. The sum of the difference in subsequent updates of the coefficients squared must be less than this for convergence to work.
convergence_exit
A boolean option to let the user choose if the system should exit once the convergence is met.
## Required to Run BFS¶
In order to use the method properly a few things must be put in the JSON file. A grid is required to run Basis Function Sampling. Refer to the Grid section in order to understand options available for the grid implementation. The only inputs required to run the method:
• cycle_frequency
• frequency
• basis_functions
• temperature
## Example Input¶
"methods": [{
"type": "BFSMethod",
"basis_functions": [
{
"type": "Fourier",
"polynomial_order": 30,
"upper_bound": 3.14,
"lower_bound": -3.14
},
{
"type": "Fourier",
"polynomial_order": 30,
"upper_bound": 3.14,
"lower_bound": -3.14
}
],
"cvs": [0, 1],
"cycle_frequency": 100000,
"basis_filename": "example",
"frequency": 1,
"temperature": 300.0,
"weight": 1.0,
"tolerance": 1e-3,
"convergence_exit": true,
"grid": {
"lower": [-3.14, -3.14],
"upper": [3.14, 3.14],
"number_points": [100, 100],
"periodic": [true, true]
}
}]
## Guidelines for Running BFS¶
• It is generally a good idea to choose a lower order polynomial initially. Excessive number of polynomials may create an unwanted “ringing” effect that could result in much slower convergence.
• For higher order polynomials, the error in projection is less, but the number of bins must increase in order to accurately project the surface. This may also create an unwanted “ringing” effect.
• A good rule of thumb for these simulations is to do at least one order of magnitude more bins than polynomial order.
If the system that is to be used requires a non-periodic boundary condition, then it is typically a good idea to place the bounds approximately 0.1–0.2 units outside the grid boundaries.
The convergence_exit option is available if the user chooses to continue running past convergence, but a good heuristic for tolerance is around 0.001.
## Tutorial¶
This tutorial will provide a reference for running BFS in SSAGES. There are multiple examples provided in the Examples/User/BasicFunc directory of SSAGES, but this tutorial will cover the Alanine Dipeptide example.
In the Examples/User/BasicFunc/ADP subdirectory, there should be two LAMMPS input files (titled in.ADP_BFS_example{0,1}) and two JSON input files. Both of these files will work for SSAGES, but the one titled ADP_BFS_2walkers.json makes use of multiple walkers.
For LAMMPS to run the example, it must be made with rigid and molecule packages. In order to do so, issue the following commands from your build directory:
make yes-rigid
make yes-molecule
make
Use the following command to run the example:
mpiexec -np 2 ./ssages ADP_BFS_2walkers.json
This should prompt SSAGES to begin the simulation. If the run is successful, the console will output the current sweep number on each node. At this point, the user can elect to read the output information after each sweep.
basis.out
The basis.out file outputs in at least 3 columns. These columns refer to the CV values, the projected PMF from the basis set, and the log of the histogram. Depending on the number of CVs chosen for a simulation, the number of CV columns will also correspond. Only the first CV column should be labeled.
The important line for graphing purposes is the projected PMF, which is the basis set projection from taking the log of the biased histogram. The biased histogram is printed so that it can be read in for doing restart runs (subject to change). For plotting the PMF, a simple plotting tool over the CV value and projected PMF columns will result in the free energy surface of the simulation. The free energy surface will return a crude estimate within the first few sweeps, and then will take a longer period of time to retrieve the fully converged surface. A reference image of the converged alanine dipeptide example is provided in the same directory as the LAMMPS and JSON input files.
restart.out
This holds all the coefficient values after each bias projection update, as well as the biased histogram. This file is entirely used for restart runs.
## Developers¶
• Joshua Moller
• Julian Helfferich | 2020-09-23 05:40:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6814619302749634, "perplexity": 1369.6033463843314}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209999.57/warc/CC-MAIN-20200923050545-20200923080545-00521.warc.gz"} |
https://eprint.iacr.org/2018/293 | ## Cryptology ePrint Archive: Report 2018/293
Privacy Amplification from Non-malleable Codes
Eshan Chattopadhyay and Bhavana Kanukurthi and Sai Lakshmi Bhavana Obbattu and Sruthi Sekar
Abstract: In this paper, we connect two interesting problems in the domain of Information-Theoretic Cryptography: "Non-malleable Codes" and "Privacy Amplification". Non-malleable codes allow for encoding a message in such a manner that any "legal" tampering will either leave the message in the underlying tampered codeword unchanged or unrelated to the original message. In the setting of Privacy Amplification, we have two users that share a weak secret $w$ guaranteed to have some entropy. The goal is to use this secret to agree on a fully hidden, uniformly distributed, key $K$, while communicating on a public channel fully controlled by an adversary.
While lot of connections have been known from other gadgets to NMCs, this is one of the first few results to show an application of NMCs to any information-theoretic primitive (other than a natural application to tamper resilient storage). Specifically, we give a general transformation that takes any augmented non-malleable code and builds a privacy amplification protocol. This leads to the following results:
(a) Assuming the existence of constant rate two-state augmented non-malleable code with optimal error $2^{-\Omega(\kappa)}$ there exists a $8$-round privacy amplification protocol with optimal entropy loss $O(\log(n) + \kappa)$ and min-entropy requirement $\Omega(\log(n)+ \kappa)$ (where $\kappa$ is the security parameter). In fact, "non-malleable randomness encoders" suffice.
(b) Instantiating our construction with the current best known augmented non-malleable code for $2$-split-state family [Li17], we get a $8$-round privacy amplification protocol with entropy loss $O(\log(n)+ \kappa \log (\kappa))$ and min-entropy requirement $\Omega(\log(n) +\kappa\log (\kappa))$.
Category / Keywords: foundations / Non-malleability, Privacy Amplification, Information-theoretic Key Agreement
Date: received 26 Mar 2018, last revised 9 Jun 2018
Contact author: sruthi sekar1 at gmail com
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2018/293
[ Cryptology ePrint archive ] | 2018-07-23 06:25:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2822663187980652, "perplexity": 3805.5732797872643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676594954.59/warc/CC-MAIN-20180723051723-20180723071723-00446.warc.gz"} |
https://languagelog.ldc.upenn.edu/nll/?p=44057 | Somebody sent me this sign from a supermarket in China:
Yí zhàn shì gòuwù de shǒuxuǎn
This is one of the most bizarre specimens of Chinglish I've ever encountered.
If we omit "dyadic", the rest of it is easy to figure out (it should be "First choice for one-stop shopping" — no sweat). Usually, even when a translation is incredibly peculiar, it doesn't take me long to figure out where the translator (whether human or machine) went wrong. In this case, "dyadic" is so unusual, yet so specific, that I figured it must have had some basis, otherwise the translator would not have gone to the trouble of inserting it out of thin air (pingkong 凭空).
I was hooked. I had to figure out where "dyadic" came from.
Of course, it's possible that the individual who sent this sign to me copied it incorrectly. But what would have prompted him to add such a rare, technical term? Furthermore, from other things that he talked about in his message and his manner of writing, I could tell that he was by nature a careful person.
Normally, I don't write about such matters without photographic documentation or an "obligatory screen shot", yet in this case I was so intrigued by "dyadic" that I quickly began to formulate various hypotheses and do research in support of them or to disprove them.
Meanwhile, although I did not directly impose on my correspondent by asking him to go back to the store with a camera or cell phone (he didn't have one the day he spotted the sign and copied it), I did hint that it would be helpful if I had one, and I did ask him a number of questions to flesh out some of the information he had initially sent to me (just that it was from a supermarket in China). (Let this be a tutorial for how to decipher Chinglish, if you're interested in becoming proficient at it.)
1. what was the full name of the store?
2. was everything in the store you went to inexpensive?
3. what province / city / town / section of city was the store in? (that makes a difference)
[A second set of questions — I found out some pertinent things during the interim.]
4. Where is this store located? In a small town? In a poor part of a city? Is it in Yantai, Shandong?
5. What kinds of products do they sell? Mostly cheap goods? Prices?
My correspondent thought from #3 and #4 that I suspected topolectal usage. I wrote back:
I never did think that the mistranslation was related to topolectal or dialectal usage, but more to demographics.
I thought that I should back translate from "dyadic" to Chinese. What I got was èrjìn 二进 or èryuán 二元. The former seemed too mathematical for the context of a supermarket slogan, so I concentrated on the latter, which is more sociologically oriented.
Thus I tried to find a connection between "dyadic" and shopping. Although it sounds far-fetched, there is actually some scholarly literature on this subject. Here is an example of one such publication:
Readdick, Christine A.; Mullis, Ronald L.
Adolescence, v32 n126 p313-22 Sum 1997
See here and here.
Abstract
Hmmm…, maybe. So I slept on that for a night.
Next morning when I awoke, I was delighted to find that my correspondent in China had sent me some more vital information:
The store is certainly not high-end, but it wasn't a corner chāoshì 超市 ("supermarket") either.
Province: Shandong
City: Yantai
Section: Not far from the commercial port; abutting a number of sea cucumber wholesale markets.
All of this conformed to my emerging hypothesis about the slogan. Above all, the correspondent also sent me the following photograph:
BINGO!!
That's exactly what I was hoping for. The solution to the puzzle of the meaning of "dyadic" lay in the logo of the store. Within two seconds of seeing the logo, I had figured it out.
Now we know the name of the store, and we also see the logo, of which they seem to be very proud, right next to the name: Zhènhuá liàngfàn 振华量贩 ("Zhenhua [Promote China] volume / bulk vendor" [trying to be like Costco] — they call themselves "ZhenHua Mart" in English). All of this fits with my theory for the origin of "dyadic".
Notice too what the attendant pictured in the montage has in her left hand. That instrument will also be featured below in our explanation about what kind of store this is.
Just to make sure I wasn't dreaming (though I was quite certain of my solution), I asked more than half-a-dozen friends and students, all native speakers from the PRC, what they thought the logo represented. I told all of them what I thought the logo stood for, viz. èryuán 二元 ("two yuan [RMB]"). Most of them explicitly and emphatically disagreed with my opinion; only two of them agreed with me. Here are a couple of their interpretations of the logo:
1. an apple — perhaps signifying "fresh" (Zhenhua Mart is a large wholesale supermarket that prides itself on fresh produce and sells in bulk, so it can't be a 2 yuan store)
2. stylization of "Z" and "H" for the name of the store
Although I am convinced that my "2 yuan" theory is the primary correct one, I believe that either (or both) of these other conjectures may also be secondarily operative, which is why the store is so proud of its logo — it cleverly conveys so many virtues of their business!
Here's how we may visualize the different possibilities for the meaning of the logo:
Name of the store:
&
Apple 🍎:
"2元" (combination of a "2" and a “○”:
&
However, as soon as I saw the logo, I somehow could immediately make out "2" + "元".
Finally, one of the correspondents who agreed with me that the logo does primarily signify "2 yuan 元" also made the interesting suggestion that the yellow square itself symbolizes a Chinese ingot of gold, and I agree with her.
A 2 yuan 元 store in China implies that you would be able to buy everything in it for 28¢ US. They are called èr yuán diàn 二元店 ("2 yuan store") or èr yuán chāoshì 二元超市 ("2 yuan supermarket"), etc.
Here's what the front of a typical 2 yuan store looks like:
The writing says: quánchǎng 2 yuán 全场2元 ("everything in the store for 2 yuan"). Naturally, not everything in the store costs exactly 2 yuan (= 28¢ US). The idea is that everything in the store is cheap, and most of it does cost around 2 yuan or less.
Of course, this reminds us of "dollar stores", e.g., "Dollar Tree", in America (and 100 yen [a little less than a dollar] stores in Japan). I've been in many of them, and am always amazed at what you can find there for a dollar or less. I simply don't understand how such stores can make a profit, but many of them actually thrive.
Several of my students said that they frequented these 2 yuan stores when they were in middle school and high school and that they carried "decorative stuff and things for women", as well as "stationary and girl’s accessories 🙈 and "hairpins, sticky hooks, and other sundries". Some even said that they remembered when they were in elementary school there were yīyuán diàn 一元店 ("1 yuan stores"), but they don't exist any longer. They also remarked that you don't see many 2 yuan stores anymore, and that they may be found mainly in rural areas.
Nowadays people think that èr yuán diàn 二元店 ("2 yuan store") only have "cheap and poor-quality products". You do, however, find some shí yuán diàn 十元店 ("ten yuan stores") — such stores even passed through 4 yuan and 8 yuan stages — that are more, ahem, "respectable". Other reasons for the withering away of the 2 yuan stores that were given are "flourishing of the economy", "increasing price level", "the development of online shopping", and "the policy of so-called 'decentralization of the non-capital functions of Beijing'".
Still, my students remember the old days when they used to go to these 2 yuan stores, and even fondly recall the loud announcements of the attendants:
"Emergency demolition! Everything must go! Crazy sale! 2 yuan! 2 yuan! Everything for only 2 yuan! If you don't buy now you'll lose out! If you don't buy now you'll be a fool!"
Here are some scenes from inside a real Zhenhua Mart, which may be found at many locations in North China today:
Clearly not a 2 yuan 元 store!
So how do we square the primary symbolism of the 2 yuan 元 logo with the reality of Zhenhua Mart supermarkets today? I think that Zhenhua may have started out as a 2 yuan 元 store, and they cherish their humble origins. Even if they didn't start out as a 2 yuan 元 store, they may aspire to the ideal of such stores in making goods available at the lowest possible prices.
"It's a time sex thing, baby" (8/8/07)
[Thanks to Yixue Yang, Yijie Zhang, Chenfeng Wang, Lin Zhang, Qing Liao, Tong Wang, Zeyao Wu, and Xiuyuan Mi]
1. cameron said,
August 24, 2019 @ 3:05 am
Did I miss something, or did you never get around to explaining how the apparatus the woman on the poster has in her left hand fits into the story?
Of course, before there were "dollar stores" in the US, Woolworth's was known as a "five and ten cent store", or "five-and-dime" – or, more formally "5¢ & 10¢ store". I'm old enough to remember shopping at Woolworth's, but the days when the "five and dime" designation was accurate were well before my time.
2. B.Ma said,
August 24, 2019 @ 3:14 am
Yes, I think someone is very proud of themselves for coming up with a logo that looks like an apple, ZH and a 2 (almost) in a circle.
In Hong Kong we used to have \$10 shops, then they became \$12 shops (everything \$12) then they became "fixed price marts". They seem to import things directly from 100 yen shops (now 108 yen and soon 110 yen with the sales tax) and now they stick up charts showing the HKD price they charge based on the Japanese price which is usually printed on the packaging, instead of wasting resources on creating HKD labels for every item. In Australia I recall visiting \$1 shops which later became \\$2 shops.
In the UK, we have Poundland, 99p stores, then 98p, 97p, … stores. Many of these don't survive for long.
3. Philip Taylor said,
August 24, 2019 @ 4:45 am
And in Poundland (unless something has changed since I last visited one), everything is exactly £1-00, unlike Victor's 2-yuan stores, which is why the assistants in Poundland tend to look at one with a mixture of incredulity and amusement if one asks "How much is this ?" …
4. Victor Mair said,
August 24, 2019 @ 7:08 am
It's interesting that in Poundland stores the prices seem to have gone down over the years:
"Poundland, 99p stores, then 98p, 97p, … stores."
5. Victor Mair said,
August 24, 2019 @ 7:20 am
@cameron
Yes, you missed something. The explanation of what the assistant has in her left hand is given later on in the narrative, but it is implicit, not explicit. I wanted to challenge LL readers in the ways of decoding Chinglish just a bit, in case they wish to take up the trade. It's sort of like a detective story, you know. There are clues here and there. Add them up and the crime is solved. Elementary, my dear Watson.
6. Michèle Sharik Pituley said,
August 24, 2019 @ 1:55 pm
Isn't it just a scanner in her left hand??
7. Jonathan Smith said,
August 24, 2019 @ 3:02 pm
This is a machine translation issue (clear given "head elects" for shǒuxuǎn 首选, etc.); for some reason 式 is being treated as "dyadic". Some random googlable examples:
俄军事公司供应虎式装甲车用于奥运安保 "The dyadic tiger armoured vehicle the russia military affairs company is supplied is used for olympic games abo. "
幕后配角大揭密,冯式电影营销霸主地位不可撼动 "Backstage , the supporting role exposes a secret greatly , dyadic feng film camp sells a hegemony being not allowed to shake .
Dunno why tho. B/c 並矢式 ‘dyadic’ itself contains 式?
8. Peter Taylor said,
August 25, 2019 @ 1:56 am
"The first stations living owing to classics fad" in the top-left of the photo also draws my attention. Would I be correct in guessing that the gist is "We're the best because we've outlived all our competitors"?
9. Chas Belov said,
August 25, 2019 @ 2:54 am
The Japanese 100 yen stores have made their way to the US. There are multiple Daiso stores in the Bay Area, and Japantown also has an Ichiban Kan which is mostly, but not entirely, 100 yen before currency conversion inflation.
Then there's the opposite of the dollar store, 99¢ and up at 40-22 82nd Street, Queens including the reassurance Yes! We do carry over 99¢ items.
10. John Swindle said,
August 25, 2019 @ 5:20 am
I know! I know! It's the sea cucumbers you mentioned! She's holding a ¥2 automatic sea cucumber. No, that can't be it either.
11. Victor Mair said,
August 25, 2019 @ 7:57 am
@Peter Taylor
"We're the best because we've outlived all our competitors"?
No, it means "First stop for fashionable living".
12. Victor Mair said,
August 25, 2019 @ 11:16 pm
"for some reason"
A reason: 二元 <-- > dyadic
13. maidhc said,
August 26, 2019 @ 4:08 am
"Emergency demolition! Everything must go! Crazy sale! 2 yuan! 2 yuan! Everything for only 2 yuan! If you don't buy now you'll lose out! If you don't buy now you'll be a fool!"
Sounds like the "blue light specials" they used to have at K-Mart.
K-Marts are really thinning out around here. There used to be two within a 15 minute drive. Now there are two within a 45 minute drive.
I haven't been to one in years, so I don't know if they still have "blue light specials".
14. BZ said,
August 26, 2019 @ 1:53 pm
There is a 79 cent store near where I work which was a 69 cent store not too long ago. Either way, I was surprised that stores with limits below 99 cents still existed when I first saw it. Though I've never found anything useful there.
15. Jonathan Smith said,
August 26, 2019 @ 2:28 pm
Prof. Mair — sorry, I mean “dyadic” is being introduced due to the syllable shi4 式, apparently in particular when it is not part of a larger word which the engine recognizes. So no connection to er4yuan2 二元. For example, even better than the above, from
Where can Beijing be arrived at dyadic pure Guangdong marmite meal? :D :D
北京哪里能吃到地道的粤式砂锅饭?
Such are the wonders of Chinglish… though I suppose this new machine-translationese might not even deserve the name …
16. 번하드 said,
August 26, 2019 @ 8:07 pm
@Chas Belov:
Oh, funny. "Daiso", for me, sounds like Korean "다 있어" (da isseo, we have everything)
http://www.daiso.co.kr/ — is it that one?
Back in Korea a few years ago, there was another chain called "다판다" (da panda, we sell everything), but that seems to have been supplanted by https://www.edapanda.co.kr/ .
17. liuyao said,
August 27, 2019 @ 9:19 am
It does seem that Jonathan Smith was right, dyadic comes from the single character 式. Either in the sense in physics (dyadic as an expression formed from two vectors) or in mathematics (fraction whose denominator is a power of 2), there's no appearance of 二元. On the other hand, if given 二元, I'd think dualism of Descartes, or to mean "two variables" (e.g. 二元一次方程 = first-degree equation(s) in two variables).
18. Victor Mair said,
August 27, 2019 @ 7:01 pm
The lexeme "yízhàn shì 一站式" ("one stop style") receives 190,000,000 ghits and has its own Baidu encyclopedia entries (here and here).
With such an enormous data base to train on, it's no wonder that all the major online machine translators get it right:
=====
Microsoft Translator: "one-stop shop".
Baidu Fanyi: "one-stop", with 29 example sentences, all correctly translating "yízhàn shì 一站式" as "one-stop", "one-stop shopping", etc.
=====
The composition of the lexeme "yízhàn shì 一站式" is easy to analyze and understand, both for humans and for machines. The "yízhàn 一站" part all too obviously means "one stop" and the "shì 式" part means "formula; pattern; type; style; ceremony; form; model", so together they mean "one-stop style [shopping]", i.e., "one-stop [shopping]").
The construction of "yízhàn shì 一站式" ("one stop type / style") is closely similar to that of "yīcì xìng 一次性" ("having the nature of a single use", which is to say, "for single use", or more simply, "disposable"). I discussed this collocation in the reading, "It's a time sex thing, baby" (8/8/07) a dozen years ago.
19. Jonathan Smith said,
August 28, 2019 @ 10:07 pm
^ yep, this is the product of an awful and hopefully long-since retired engine (old dict.cn?) and definitely not a person. Quite aside from "一站式" > "one dyadic station", atrocities like "首选" > "head elects" have not been seen for ages on the likes of Google.
Looks like my link above now points to newer results; at time of writing the examples below are visible at http://fy.tingclass.net/w/dyadic:
痉挛_式_胎动 > "dyadic fetal movement of spasm"
虎_式_装甲车 > "dyadic tiger armoured vehicle
地毯_式_搜索 > "dyadic reconnaissance of carpet" | 2020-10-30 17:30:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39940255880355835, "perplexity": 5940.42300518854}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107911027.72/warc/CC-MAIN-20201030153002-20201030183002-00393.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-10-counting-methods-and-probability-10-1-apply-the-counting-principles-and-permutations-10-1-exercises-problem-solving-page-689/71a | ## Algebra 2 (1st Edition)
Published by McDougal Littell
# Chapter 10 Counting Methods and Probability - 10.1 Apply the counting Principles and Permutations - 10.1 Exercises - Problem Solving - Page 689: 71a
#### Answer
$24$
#### Work Step by Step
We have $5!$ arrangements, but each of them is counted $5$ times. This is because we can "rotate" each arrangement by $1$ clockwise and we get the same arrangement but after $5$ rotations we get back to the original arrangement. Hence the answer is: $5!/5=4!=24$
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2021-04-18 12:30:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7568399310112, "perplexity": 1013.401792664785}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038476606.60/warc/CC-MAIN-20210418103545-20210418133545-00605.warc.gz"} |
https://codereview.stackexchange.com/questions/48342/polling-parsing-validating-and-handling-data-cleanly-and-efficiently | # Polling, parsing, validating and handling data cleanly and efficiently
<edit: There's a major problem in the code. It is basically the one in the comment regarding backpressure. I'll rework the code within a a few days...
It's time for a quick code review, good points, examples, handholding if anyone has time to spare for commenting and, yes, even nitpicking too. I wrote a short piece of code, purpose of which is to
1. Read a resource on some predefined intervals. This resource can be at least a file or a network resource, so preferably the source and how to read it could be configured and be testable. The contents can text in JSON or XML or a CSV file or bytes or whatever. At least JSON for the purposes for this example. I assume all data can be read in one go.
2. When the resource has been polled and data received, it needs to be transformed into some nice, strongly typed form.
3. After the data has been transformed into a strongly typed form, it gets processed in various ways. It would be nice if one could have a notion of "tubes" here, which would work in asynchronously and independently so that an error in one "tube" could be just logged whereas processing in other ones go on.
4. After processing data it's being tucked into various places. Like various queues (or a queue), files and so forth.
The code follows, but I'm not satisfied with it. For one, even if the idea of getData works with dataPoller, I need to add the Async.RunSynchronously in case I use an async method, or if not, omit it. Is there a way to do this either way independent of how getData is defined?
I've been reading about the excellent chap Scott Wlaschin's excellent railway oriented programming tutorial and am thinking how could apply the lessons here. For one, extending this excellent looking framework of thought to async isn't really clear to me (as written in the end of the slides). I'm also thinking if I could employ TPL.Dataflow or Hopac, which looks like being a fit (defining processing DAGs) to this kind of a processing...
... But really, I'd be happy even if someone would advise me if there's a neater way to write this poller (the Async.RunSynchronously part). The rest is something I know I'll be tempted to tackle on the cost of writing a program that actually does something.
Currently I have a module in a file CommonLibrary.fs like so
namespace CommonLibrary
[<AutoOpen>]
module Infrastructure =
open System.Reactive
open System.Reactive.Linq
//The current version of Rx does not have a notion of backpressure so if call durations rise, so will the process memory consumption too.
let dataPoller(interval, source) = Observable.Interval(interval) |> Observable.map(fun _ -> source)
[<AutoOpen>]
module DomainTypes =
open System
open FSharp.Data
type Data = {
someData: string
}
type DataPieces = {
contents: seq<Data> option
}
type DataPiecesProvider = JsonProvider<Sample = "DataPiece.json", Culture = "en-US">
let getData uri = async {
try
//Omitted for brevity, but basically here's a bunch of parsing code,
//which spans for a about ten lines.
//let datas =
return { contents = Some(datas) }
with _ -> return { contents = None }
}
And then I have the main file, Program.fs where I call these facilities as follows
open System
open System.IO
open System.Text
open CommonLibrary
[<Literal>]
let pollingUri = "http://xyz/json"
[<EntryPoint>]
let main argv =
let exampleDataPump =
dataPoller(TimeSpan.FromSeconds(1.0), Async.RunSynchronously(getData pollingUri))
|> Observable.subscribe(fun i -> printfn "%A" (i.contents.Value |> Seq.head))
//These aren't never reached, but for the sake of completeness.
exampleDataPump.Dispose()
0
<edit 2014-05-04: So, I have a version two of the code. The original idea was to ask things (as you can see) of it, but I believe I have a better story.
Wheareas this version too could be improved in many ways (for one, it lacks documentation), I believe it has the problematic nature of relying on exceptions to count for retrying, well, in exceptional conditions. A better strategy could be to catch everything at the source, make it a domain event DU (taking cues from Scott's slides) and then handling them in the Rx pipeline.
In any event, here's the newest code. I'll work on the DU version too and add it here. Hopefully in the near future -- sooner than these edits. Then there's something to compare and discuss if there's something to discuss.
In CommonLibrary.fs
let getData uri = async {
//Omitted for brevity, but basically here's a bunch of parsing code,
//which spans for a about ten lines.
//let datas = ...
return datas //seq<Data>
[<Extension>]
type ObservableExtensions =
[<Extension>]
[<CompiledName("PascalCase")>]
static member inline retryAfterDelay<'TSource, 'TException when 'TException :> System.Exception>(source: IObservable<'TSource>, retryDelay: int -> TimeSpan, maxRetries, scheduler: IScheduler): IObservable<'TSource> =
let rec go(source: IObservable<'TSource>, retryDelay: int -> TimeSpan, retries, maxRetries, scheduler: IScheduler): IObservable<'TSource> =
source.Catch<'TSource, 'TException>(fun ex ->
if maxRetries <= 0 then
Observable.Throw<'TSource>(ex)
else
go(source.DelaySubscription(retryDelay(retries), scheduler), retryDelay, retries + 1, maxRetries - 1, scheduler))
go(source, retryDelay, 1, maxRetries, scheduler)
type DataPumpOperations =
static member dataPump(source: Async<_>): IObservable<_> = Async.StartAsTask(source).ToObservable()
static member dataPump(source: seq<_>): IObservable<_> = source |> Observable.toObservable
In Program.fs
let constantRetryStrategy(retryCount: int) =
TimeSpan.FromSeconds(1.0)
let maxRetries = 5
let ds = (getData pollingUri)
let t = Observable.Create(fun i -> (DataPumpOperations.dataPump ds).retryAfterDelay(constantRetryStrategy, maxRetries, Scheduler.Default).Subscribe(i)).Delay(TimeSpan.FromSeconds(1.0)).Repeat().Subscribe(fun i -> printfn "%A" (i |> Seq.head).origin)
• I'll update this shortly. It turns out this became quite a largish issue to handle haphazardly. See a closely related question at stackoverflow.com/questions/23404185/…. – Veksi May 1 '14 at 20:00 | 2019-09-19 19:51:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30397284030914307, "perplexity": 5552.675767201131}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573570.6/warc/CC-MAIN-20190919183843-20190919205843-00203.warc.gz"} |
https://embuchestissues.wordpress.com/2009/02/23/rational-approximations-of-sqrt2/ | ## Rational approximations of √2
Introducing undergraduates to rational approximations of √2 can be an opportunity to insidiously tell them about many parts of mathematics they certainly don’t want to hear about. In a less pessimistic way, I would say this is a nice way to illustrate the use of several theories in abstract mathematics.
First, you may want to tell them it is not a rational number, which could be easy, unless they never heard about factoring integers.
Then, you could use classical sequences from high school classes: it is easy to check that iterating $u_{n+1} = \frac{u_n+2}{u_n+1}$ converges to ±√2, and setting $U_n = \frac{u_n-\sqrt 2}{u_n+\sqrt 2}$ they will even be able to give an explicit formula with a geometric sequence. There is of course a well-known algorithm which speeds up considerably the computation : Newton’s method. This can be illustrated geometrically by drawing the graph of a function having √2 as a root (for example $x \mapsto x^2-2$):
1. take a rough approximation, such as x=1
2. imagine the function is affine (replacing the graph by its tangent)
3. use the approximation of the function to calculate an approximation of the solution
4. use this newly found rough solution to iterate from step 1
This method requires iterating $v_{n+1} = \frac 1 2 (v_n + 2/v_n)$, and converges considerably faster. Both methods yield sequences of rational numbers converging to √2, so should be considered as methods to obtain fractional numbers giving good approximations of √2.
Since I am trying to learn basic Haskell, here is how it could be done :
import List
import System.Environment
-- Naïve method
un :: [Rational]
un = unfoldr (\x -> Just (x, (x+2)/(x+1))) 1
-- Newton's method
vn :: [Rational]
vn = unfoldr (\x -> Just (x,x/2+1/x)) 1
main = do
argv <- getArgs -- () -> IO [String]
in (putStrLn . show) (un!!s) -- Int -> IO String
Change the last un to vn in order to test Newton’s method. To use the little program on a computer with the GHC compiler, type
$ghc -o sqrt2 sqrt2.hs$ ./sqrt2 22
318281039 % 225058681
which gives you the 22th iteration of the first sequence. Using Newton’s method, the fifth iteration is 886731088897/627013566048. Now you could explain the relevance of these computations by telling about best rational approximations.
Definition. A rational number is said to be a best approximation of √2 if there is no rational number closer to √2 having small denominator.
Then go further in the maths telling your students how solutions of the Pell-Fermat equation p²-2q²=±1 provide best approximations of √2 by irreducible fractions p/q. Try to make them find several easy solutions, such as (1,1), (3,2), (7,5), and show them how the two sequences we defined earlier give a infinite sequence of solutions for free, using the identites
$2(p+q)^2 - (p+2q)^2 = p^2 - 2q^2$ (which explains the mechanism in the iteration of $\frac p q \mapsto \frac{p+2q}{p+q}$) and $(p^2+2q^2)^2 - (2pq)^2 = (p^2-2q^2)^2$, which explains it for $\frac p q \mapsto \frac{p}{2q} + \frac{q}{p}$.
The most literate readers will say we are working with the integer ring of the field $\mathbb Q(\sqrt 2)$, and will tell you that p²-2q² is the norm of $p+q\sqrt{2}$, with the interesting property $(a^2-2b^2)(c^2-2d^2) = (ac+2bd)^2 - 2(ad+bc)^2$. The solutions of Pell-Fermat equation given by $u_n=p_n/q_n$ are simply the coefficients in $p_n-q_n\sqrt 2 = (1-\sqrt 2)^{n+1}$, while $v_n=a_n/b_n$ comes from $a_n-b_n\sqrt 2 = (1-\sqrt 2)^{2^n}$. This is a good time to pause and tell about fast exponentiation by squaring (find it in your favorite search engine…).
Last but not least, since the Pell-Fermat equation $p^2-2q^2=1$ is quadratic, the set of points $(p,q)$ in the plane parameterised by its solutions is a conic (a hyperbola). It admits a rational parameterisation, in the following way:
1. choose some solution $O=(1,0)$
2. if $M=(p,q)$ is another solution, consider the (reciprocal) slope of the line $OM$, which is $t = \frac {p-1} q$
3. considering a slope t, show that there is a unique point $M = (1+tz,z)$ such that $(1+tz)^2-2z^2 = 1$, namely $z = \frac{2t}{t^2-2}$
4. notice that if p and q are rational/real/complex, then so is t (check the converse too)
5. if you are a graduate student in algebraic geometry, try to show the two relevant projective schemes are isomorphic over $\mathrm{Spec} \mathbb Z[1/2]$
As a consequence, all rational solutions of the Pell-Fermat equation $p^2-2q^2 = 1$ can be enumerated bijectively by $\big(\frac{3t^2-2}{t^2-2}, \frac{2t}{t^2-2}\big)$ for rational values of t. The group structure on these solutions can be illustrated by matrices:
$(p,q) \mapsto \begin{pmatrix} p & 2q \\ q & p \end{pmatrix}$.
This way of finding rational points on conics is also a quick method to find all Pythagorean triples: the rational solutions of $p^2+q^2=1$ are given by $\big(\frac{t^2-1}{t^2+1},\frac{2t}{t^2-1}\big)$. This is easily shown to be equivalent to the statement that integers satisfy $a^2+b^2=c^2$ if and only if we can write $a = (m^2-n^2)$, $b=2mn$, $c=m^2+n^2$ for integers $m>n$. | 2018-01-22 14:15:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 35, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7740387320518494, "perplexity": 460.7575851864316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891377.59/warc/CC-MAIN-20180122133636-20180122153636-00217.warc.gz"} |
https://lpde.maths.qmul.ac.uk/index.php | London PDE Seminar
$$R_{ a b } - \frac{1}{2} R \cdot g_{ a b } + \Lambda \cdot g_{ a b } = T_{ a b } \text{.}$$
The London PDE Seminar is a London-based seminar featuring speakers at the forefront of mathematical research in partial differential equations.
### Upcoming Seminars
The seminar convenes fortnightly, on approximately every other Friday. Some talks will be held in-person, while others will be held on Zoom.
Date/Time: 11:15 (UK time), Friday, 27 January 2023 Location: Queen Mary University of London (in-person) —Room: 1.02 iQ (formerly Scape) Building (Google maps) Speaker: Christopher Alexander (UCL) Title: General Relativistic Shock Waves that Exhibit Cosmic Acceleration Abstract: This talk concerns the construction and analysis of a new family of exact general relativistic shock waves. The construction resolves the open problem of determining the expanding waves created behind a shock-wave explosion into a static isothermal sphere with an inverse square density and pressure profile. The construction involves matching two self-similar families of solutions to the perfect fluid Einstein field equations across a spherical shock surface. The matching is accomplished in Schwarzschild coordinates where the shock waves appear one derivative less regular than they actually are. Separately, both families contain singularities, but as matched shock-wave solutions, they are singularity free. This construction is also accompanied by a novel existence proof in the pure radiation case. These shock-wave solutions represent an intriguing new mechanism in General Relativity for exhibiting accelerations in asymptotically Friedmann spacetimes, analogous to the accelerations modelled by the cosmological constant in the Standard Model of Cosmology. However, unlike in the Standard Model, these shock-wave solutions solve the Einstein field equations in the absence of a cosmological constant, opening up the question of whether a purely mathematical mechanism could account for the cosmic acceleration observed today, rather than dark energy. Date/Time: 11:15 (UK time), Friday, 10 February 2023 Location: Queen Mary University of London (in-person) —Room: 1.02 iQ (formerly Scape) Building (Google maps) Speaker: Arthur Touati (IHES) Title: Geometric optics approximation for the Einstein vacuum equations Abstract: In this talk I will present recent work on the rigorous justification of the geometric optics approximation for the Einstein vacuum equations, and its link with the Burnett conjecture in general relativity. I will start by presenting the initial value problem for the Einstein vacuum equations formulated in wave coordinates. Then I will give the state of the art on the Burnett conjecture, focusing on the approaches in U(1) symmetry and double null gauge. I will then present my main result and sketch its proof, highlighting the quasi- and semi-linear challenges. I will conclude my talk by discussing other quadratic wave equations. Date/Time: 11:15 (UK time), Friday, 24 February 2023 Speaker: Hamed Masaood (Imperial) Date/Time: 11:15 (UK time), Friday, 10 March 2023 Speaker: Tobias Barker (Bath) Date/Time: 11:15 (UK time), Friday, 24 March 2023 | 2023-01-31 07:27:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.532119870185852, "perplexity": 2129.8227924513653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00304.warc.gz"} |
http://www.stat.cmu.edu/research/publications/divisive-conditioning-further-results-dilation | ## Divisive Conditioning: Further Results on Dilation
December, 1993
Tech Report
### Author(s)
Timothy Herron, Teddy Seidenfeld, and Larry Wasserman
### Abstract
Conditioning can make imprecise probabilities uniformly more imprecise. We call this effect "dilation." In a previous paper, Seidenfeld and Wasserman (1993) established some basic results about dilation. In this paper, we further investigate dilation in several models. In particular, we consider conditions under which dilation persists under marginalization and we quantify the degree of dilation. We also show that dilation manifests itself asymptotically in certain robust Bayesian models and we characterize the rate at which dilation occurs. | 2018-11-19 13:38:05 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8764718174934387, "perplexity": 3185.611867576252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745762.76/warc/CC-MAIN-20181119130208-20181119152208-00390.warc.gz"} |
https://cvgmt.sns.it/paper/593/ | # Three circles theorems for Schrödinger operators on cylindrical ends and geometric applications
created by delellis on 22 Jan 2007
modified on 03 May 2011
[BibTeX]
Published Paper
Inserted: 22 jan 2007
Last Updated: 3 may 2011
Journal: Comm. Pure App. Math.
Volume: 61
Pages: 1540-1602
Year: 2008
Abstract:
We show that for a Schrödinger operator with bounded potential on a manifold with cylindrical ends the space of solutions which grows at most exponentially at infinity is finite dimensional and, for a dense set of potentials (or, equivalently for a surface, for a fixed potential and a dense set of metrics), the constant function zero is the only solution that vanishes at infinity. Clearly, for general potentials there can be many solutions that vanish at infinity.
One of the key ingredients in these results is a three circles inequality (or log convexity inequality) for the Sobolev norm of a solution $u$ to a Schrödinger equation on a product $N\times [0,T]$, where $N$ is a closed manifold with a certain spectral gap. Examples of such $N$'s are all (round) spheres $\SS^n$ for $n\geq 1$ and all Zoll surfaces.
Finally, we discuss some examples arising in geometry of such manifolds and Schrödinger operators.
For the most updated version and eventual errata see the page
http:/www.math.uzh.chindex.php?id=publikationen&key1=493
Credits | Cookie policy | HTML 5 | CSS 2.1 | 2021-09-19 19:36:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7265363335609436, "perplexity": 748.7007500909889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056900.32/warc/CC-MAIN-20210919190128-20210919220128-00142.warc.gz"} |
https://ceopedia.org/index.php/Notice_account | # Notice account
Notice account
Notice account (Notice deposit) is a type of savings account that prohibits from withdrawing the money without informing the bank first. To take a particular amount of money from this type of account, you need to inform the bank in advance and wait for an agreed period, usually 30, 60 or 90 days (CII Diploma 2011, s. 105).
Violating the terms of notice account agreement, so withdrawing the money before the agreed time imposes a penalty on the account holder (Code of Federal Regulations 1982, s. 168). The account holder may also lose interest from the deposited money when withdrawing it before the set time (CII Diploma 2011, s. 105).
Banks compensate notice account holders the requirement to give notice by providing higher interest rates for notice accounts. However, investors considering locating their assets in notice accounts should carefully weigh the advantages of the higher account rate against the main disadvantages, which are the possible:
• loss of liquidity
• cost of penalties if the money needs to be withdrawn earlier than planned” (CII Diploma 2011, s. 105).
## Variation of interest
Banks have the right to change the interest rates of any saving accounts on condition that they inform the deposit owner and give proper, in the circumstances, notice period before the alteration becomes effective.
Interest is paid upon withdrawal of the total balance but it is important to notice that both parties can agree to different “frequency of payment or capitalization of interest amounts prior to full repayment” when opening the notice account. It is common practice that banks capitalize and repay notice account interest at the end of the month as it is easier in terms of synchronizing this process with other administrative tasks (P. Parker 2003, s. 42-43).
Notice account interest will be a bit higher than the interest that is paid monthly because notice accounts have increased frequency of interest payments. When interest is credited more often, “the interest itself will begin to earn interest as soon as it is credited” (CII Diploma 2011, s. 105).
## Comparing notice account and instant access account
Andy wants to place a sum of $20,000 on a deposit. He has two [[options]] available: * Instant Access account where interest is capitalized annually at 3.5%. * Notice account where he would have to wait 90 days before withdrawing the money. The notice account's rate is 4% annually. However, for immediate withdrawal he would lose the interest from those 90 days. Considering the current rates of both accounts: * Andy will earn$700 annually from the Instant Access account, before taxation.
• He will earn $800 annually from Notice account, before taxation. In a situation when Andy has to withdraw the money from the notice account after one year, he will lose the following amount of money (before tax):$20,000 × 4,0% × 90 ÷ 365 = $197.26 In this circumstance the interest gained from the notice account is going to be lower than it would have been if Andy located his money on Instant Access account because:$800 - $197.26 =$602.74
$602.74 <$700
Notice from the above calculations, that it will take approximately two years before Andy can take advantage from placing his money on Notice account instead of on Instant Access account (CII Diploma 2011, s. 105).
## References
Author: Justyna Szczepaniec | 2020-08-09 23:13:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42824020981788635, "perplexity": 3449.0353095893743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738595.30/warc/CC-MAIN-20200809222112-20200810012112-00480.warc.gz"} |
http://mathhelpforum.com/calculus/113144-help-me-find-solution-math-problem.html | # Math Help - Help me to find the solution for this math problem ?
1. ## Help me to find the solution for this math problem ?
Please help me to find the solution for this math problem ? $\int cos^3 x + sin^5 x =$
2. Originally Posted by wizard654zzz
Please help me to find the solution for this math problem ? $\int cos^3 x + sin^5 x =$
Make use of the following identities:
$\cos^3{x} = \frac{3}{4}\cos{x} + \frac{1}{4}\cos{(3x)}$
and
$\sin^5{x} = \frac{5}{8}\sin{x} - \frac{5}{16}\sin{(3x)} + \frac{1}{16}\sin{(5x)}$.
So $\int{\cos^3{x} + \sin^5{x}\,dx} = \int{\frac{3}{4}\cos{x} + \frac{1}{4}\cos{(3x)} + \frac{5}{8}\sin{x} - \frac{5}{16}\sin{(3x)} + \frac{1}{16}\sin{(5x)}\,dx}$
3. Originally Posted by wizard654zzz
Please help me to find the solution for this math problem ? $\int cos^3 x + sin^5 x =$
Put $\int \cos x(1-\sin^2\!\!x)\,dx+\int \sin^3\!\!x(1-\cos^2\!\!x)\,dx=$ $\int \cos x\,dx\,-\int \cos x\sin^2\!\!x\,dx\,+\int\sin x(1-\cos^2\!\!x)\,dx\,-\int\sin x(1-\cos^2\!\!x)\cos^2\!\!x\,dx=$
$=\int \cos x\,dx\,-\int \cos x\sin^2\!\!x\,dx\,+\int\sin x\,dx\,-\int\sin x\cos^2\!\!x\,dx\,-\int\sin x\cos^2\!\!x\,dx\,+$ $\int\sin x\cos^4\!\!x\,dx$
Now, you have integrals of the form $\int f'(x)f(x)^n\,dx=\frac{f(x)^{n+1}}{n+1}$ and also immediate integrals, so you're done.
Tonio
4. Originally Posted by tonio
Put $\int \cos x(1-\sin^2\!\!x)\,dx+\int \sin^3\!\!x(1-\cos^2\!\!x)\,dx=$ $\int \cos x\,dx\,-\int \cos x\sin^2\!\!x\,dx\,+\int\sin x(1-\cos^2\!\!x)\,dx\,-\int\sin x(1-\cos^2\!\!x)\cos^2\!\!x\,dx=$
$=\int \cos x\,dx\,-\int \cos x\sin^2\!\!x\,dx\,+\int\sin x\,dx\,-\int\sin x\cos^2\!\!x\,dx\,-\int\sin x\cos^2\!\!x\,dx\,+$ $\int\sin x\cos^4\!\!x\,dx$
Now, you have integrals of the form $\int f'(x)f(x)^n\,dx=\frac{f(x)^{n+1}}{n+1}$ and also immediate integrals, so you're done.
Tonio
Mine is easier, as it doesn't involve u-substitutions. | 2014-10-23 04:39:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9501480460166931, "perplexity": 466.64496718034536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507450097.39/warc/CC-MAIN-20141017005730-00223-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://www.biostars.org/p/195932/ | Allele specific expression (ASE) using TPM (transcripts per million) values
1
2
Entering edit mode
6.0 years ago
kirannbishwa01 ★ 1.4k
I aligned the RNAseq reads to the diploid (hybrid genome) and calculated the TPM (transcripts per million) values for my samples using EMASE. So, the TPM values are reported for each gene_id and haplotype. I want to do ASE variation analyses within the samples. My thought is that applying DE approaches to it would be fine, but the analyses should focus for the difference with in the samples and also check if the ASE differences for any given gene is/are consistent across samples.
I have came across edgeR, DeSeq, DeSeq2, kalliso sleuth. But, I am wondering if someone could suggest which of these tools would be best to work with my data.
Note: I posted the same question on google groups just to expedite the analyses. If this violates the policy of question posting please let me know.
Thanks, - Bishwa K.
transcripts per million RNAseq ASE • 1.9k views
0
Entering edit mode
6.0 years ago
Sandeep ▴ 260
There is nice tool available which is already published.
ASEQ: fast allele-specific studies from next-generation sequencing data
You can use it on your aligned data file.
Hope this helps.
0
Entering edit mode
I already have TPM valuses calculated using EMASE. Previously I wanted to use ASE-TIGAR for ASE analyses but had to changes since it wasn't accepting my scaffolds. Now, I have TPM values, so I want to get some opinion on doing statistical analyses. I have explored edgeR, Deseq2, these tools are for Differential Expression. But I want something simple to start with and specific to the TPM values calculated for two haplotypes within a sample. I know these data are mainly approached by using poisson model with overdispersion, or by using negative bionomial regression. I am looking for some worked out examples on ASE to stay on right track, until now I have found none.
Here is the structure of my data:
gene_id_locus strand gene_name gene_of_Int gene.erc.M gene.erc.S gene.erc.T gene.tpm.M gene.tpm.S gene.tpm.T
Al_scaffold_0001_1000 + 3.44195E-11 55 55 4.09867E-12 6.563692475 6.563692475
Al_scaffold_0001_1004 - 1.62587E-05 184.9999837 185 7.79528E-07 8.86988221 8.869882989
Al_scaffold_0001_1015 + 2.015114379 4930.984886 4933 0.201724233 493.6191982 493.8209224
Al_scaffold_0001_1024 + 0 0 0 0 0 0
Al_scaffold_0001_1030 + 2 29 31 1.457537529 21.13429417 22.5918317
Al_scaffold_0001_1039 - ATNAT8 3.9 22.1 26 0.140147839 0.79417109 0.934318929
Al_scaffold_0001_1041 - 0 0 0 0 0 0
Al_scaffold_0001_1044 - 712.7205414 314.2794586 1027 24.00223976 10.62981371 34.63205347
Al_scaffold_0001_1048 - 774.4874591 482.5125409 1257 119.5809891 74.50001447 194.0810036
Al_scaffold_0001_1061 + 0 0 0 0 0 0
Al_scaffold_0001_1062 + PHS1 193.4487519 198.5512481 392 9.02171979 9.347412647 18.36913244
Al_scaffold_0001_1063 + 0 0 0 0 0 0
Al_scaffold_0001_1066 + 0 0 0 0 0 0 | 2022-05-24 00:44:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3747420310974121, "perplexity": 5161.502999787029}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00076.warc.gz"} |
https://learn.careers360.com/engineering/question-an-open-vessel-containing-air-is-heated-from-300-k-to-400-k-the-fraction-of-air-originally-present-which-goes-out-of-it-is/ | # An open vessel containing air is heated from 300K to 400K. The fraction of air, which goes out with respect to originally present is:
PV = nRT , T1 = 300k , T2 = 400k
Since, it is an open vessel, pressure is constant and there is no change in the volume of vessel.
$\therefore$ P1 = P2 and V1 = V2
$\frac{n_{1}}{n_{2}} = \frac{T_{2}}{T_{1}} = \frac{400}{300} = \frac{4}{3}$
$n_{2} = \frac{3}{4}n_{1}$
Therefore, fraction of air that goes out $= 1 - \frac{3}{4} = \frac{1}{4}$
### Preparation Products
##### JEE Main Rank Booster 2021
This course will help student to be better prepared and study in the right direction for JEE Main..
₹ 13999/- ₹ 9999/-
##### Knockout JEE Main April 2021 (Subscription)
An exhaustive E-learning program for the complete preparation of JEE Main..
₹ 4999/-
##### Knockout JEE Main April 2021
An exhaustive E-learning program for the complete preparation of JEE Main..
₹ 22999/- ₹ 14999/-
##### Knockout JEE Main April 2022
An exhaustive E-learning program for the complete preparation of JEE Main..
₹ 34999/- ₹ 24999/- | 2020-09-27 17:22:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8516767024993896, "perplexity": 9947.812300941921}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400283990.75/warc/CC-MAIN-20200927152349-20200927182349-00050.warc.gz"} |
https://socratic.org/questions/52dfe8cd02bf3424edd8a74c | # Question #8a74c
The electron configuration for carbon is $1 {s}^{2} 2 {s}^{2} 2 {p}^{2}$.
Helium has an electron configuration of $1 {s}^{2}$.
By replacing the $1 {s}^{2}$ with the noble gas [He] we get a noble gas notation for carbon of [He] $2 {s}^{2} 2 {p}^{2}$. | 2019-12-10 02:43:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7136631608009338, "perplexity": 2281.892685670773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525781.64/warc/CC-MAIN-20191210013645-20191210041645-00119.warc.gz"} |
https://bioconductor.org/books/release/OSCA/integrating-datasets.html | # Chapter 13 Integrating Datasets
## 13.1 Motivation
Large single-cell RNA sequencing (scRNA-seq) projects usually need to generate data across multiple batches due to logistical constraints. However, the processing of different batches is often subject to uncontrollable differences, e.g., changes in operator, differences in reagent quality. This results in systematic differences in the observed expression in cells from different batches, which we refer to as “batch effects”. Batch effects are problematic as they can be major drivers of heterogeneity in the data, masking the relevant biological differences and complicating interpretation of the results.
Computational correction of these effects is critical for eliminating batch-to-batch variation, allowing data across multiple batches to be combined for common downstream analysis. However, existing methods based on linear models (Ritchie et al. 2015; Leek et al. 2012) assume that the composition of cell populations are either known or the same across batches. To overcome these limitations, bespoke methods have been developed for batch correction of single-cell data (Haghverdi et al. 2018; Butler et al. 2018; Lin et al. 2019) that do not require a priori knowledge about the composition of the population. This allows them to be used in workflows for exploratory analyses of scRNA-seq data where such knowledge is usually unavailable.
## 13.2 Setting up the data
To demonstrate, we will use two separate 10X Genomics PBMC datasets generated in two different batches. Each dataset was obtained from the TENxPBMCData package and separately subjected to basic processing steps. Separate processing prior to the batch correction step is more convenient, scalable and (on occasion) more reliable. For example, outlier-based QC on the cells is more effective when performed within a batch (Section 6.3.2.3). The same can also be said for trend fitting when modelling the mean-variance relationship (Section 8.2.4.1).
#--- loading ---#
library(TENxPBMCData)
all.sce <- list(
pbmc3k=TENxPBMCData('pbmc3k'),
pbmc4k=TENxPBMCData('pbmc4k'),
pbmc8k=TENxPBMCData('pbmc8k')
)
#--- quality-control ---#
library(scater)
stats <- high.mito <- list()
for (n in names(all.sce)) {
current <- all.sce[[n]]
is.mito <- grep("MT", rowData(current)$Symbol_TENx) stats[[n]] <- perCellQCMetrics(current, subsets=list(Mito=is.mito)) high.mito[[n]] <- isOutlier(stats[[n]]$subsets_Mito_percent, type="higher")
all.sce[[n]] <- current[,!high.mito[[n]]]
}
#--- normalization ---#
all.sce <- lapply(all.sce, logNormCounts)
#--- variance-modelling ---#
library(scran)
all.dec <- lapply(all.sce, modelGeneVar)
all.hvgs <- lapply(all.dec, getTopHVGs, prop=0.1)
#--- dimensionality-reduction ---#
library(BiocSingular)
set.seed(10000)
all.sce <- mapply(FUN=runPCA, x=all.sce, subset_row=all.hvgs,
MoreArgs=list(ncomponents=25, BSPARAM=RandomParam()),
SIMPLIFY=FALSE)
set.seed(100000)
all.sce <- lapply(all.sce, runTSNE, dimred="PCA")
set.seed(1000000)
all.sce <- lapply(all.sce, runUMAP, dimred="PCA")
#--- clustering ---#
for (n in names(all.sce)) {
g <- buildSNNGraph(all.sce[[n]], k=10, use.dimred='PCA')
clust <- igraph::cluster_walktrap(g)$membership colLabels(all.sce[[n]]) <- factor(clust) } pbmc3k <- all.sce$pbmc3k
dec3k <- all.dec$pbmc3k pbmc3k ## class: SingleCellExperiment ## dim: 32738 2609 ## metadata(0): ## assays(2): counts logcounts ## rownames(32738): ENSG00000243485 ENSG00000237613 ... ENSG00000215616 ## ENSG00000215611 ## rowData names(3): ENSEMBL_ID Symbol_TENx Symbol ## colnames: NULL ## colData names(13): Sample Barcode ... sizeFactor label ## reducedDimNames(3): PCA TSNE UMAP ## altExpNames(0): pbmc4k <- all.sce$pbmc4k
dec4k <- all.dec$pbmc4k pbmc4k ## class: SingleCellExperiment ## dim: 33694 4182 ## metadata(0): ## assays(2): counts logcounts ## rownames(33694): ENSG00000243485 ENSG00000237613 ... ENSG00000277475 ## ENSG00000268674 ## rowData names(3): ENSEMBL_ID Symbol_TENx Symbol ## colnames: NULL ## colData names(13): Sample Barcode ... sizeFactor label ## reducedDimNames(3): PCA TSNE UMAP ## altExpNames(0): To prepare for the batch correction: 1. We subset all batches to the common “universe” of features. In this case, it is straightforward as both batches use Ensembl gene annotation. universe <- intersect(rownames(pbmc3k), rownames(pbmc4k)) length(universe) ## [1] 31232 # Subsetting the SingleCellExperiment object. pbmc3k <- pbmc3k[universe,] pbmc4k <- pbmc4k[universe,] # Also subsetting the variance modelling results, for convenience. dec3k <- dec3k[universe,] dec4k <- dec4k[universe,] 2. We rescale each batch to adjust for differences in sequencing depth between batches. The multiBatchNorm() function recomputes log-normalized expression values after adjusting the size factors for systematic differences in coverage between SingleCellExperiment objects. (Size factors only remove biases between cells within a single batch.) This improves the quality of the correction by removing one aspect of the technical differences between batches. library(batchelor) rescaled <- multiBatchNorm(pbmc3k, pbmc4k) pbmc3k <- rescaled[[1]] pbmc4k <- rescaled[[2]] 3. We perform feature selection by averaging the variance components across all batches with the combineVar() function. We compute the average as it is responsive to batch-specific HVGs while still preserving the within-batch ranking of genes. This allows us to use the same strategies described in Section 8.3 to select genes of interest. In contrast, approaches based on taking the intersection or union of HVGs across batches become increasingly conservative or liberal, respectively, with an increasing number of batches. library(scran) combined.dec <- combineVar(dec3k, dec4k) chosen.hvgs <- combined.dec$bio > 0
sum(chosen.hvgs)
## [1] 13431
When integrating datasets of variable composition, it is generally safer to err on the side of including more genes than are used in a single dataset analysis, to ensure that markers are retained for any dataset-specific subpopulations that might be present. For a top $$X$$ selection, this means using a larger $$X$$ (say, ~5000), or in this case, we simply take all genes above the trend. That said, many of the signal-to-noise considerations described in Section 8.3 still apply here, so some experimentation may be necessary for best results.
Alternatively, a more forceful approach to feature selection can be used based on marker genes from within-batch comparisons; this is discussed in more detail in Section 13.7.
## 13.3 Diagnosing batch effects
Before we actually perform any correction, it is worth examining whether there is any batch effect in this dataset. We combine the two SingleCellExperiments and perform a PCA on the log-expression values for all genes with positive (average) biological components. In this example, our datasets are file-backed and so we instruct runPCA() to use randomized PCA for greater efficiency - see Section 23.2.2 for more details - though the default IRLBA will suffice for more common in-memory representations.
# Synchronizing the metadata for cbind()ing.
rowData(pbmc3k) <- rowData(pbmc4k)
pbmc3k$batch <- "3k" pbmc4k$batch <- "4k"
uncorrected <- cbind(pbmc3k, pbmc4k)
# Using RandomParam() as it is more efficient for file-backed matrices.
library(scater)
set.seed(0010101010)
uncorrected <- runPCA(uncorrected, subset_row=chosen.hvgs,
BSPARAM=BiocSingular::RandomParam())
We use graph-based clustering on the components to obtain a summary of the population structure. As our two PBMC populations should be replicates, each cluster should ideally consist of cells from both batches. However, we instead see clusters that are comprised of cells from a single batch. This indicates that cells of the same type are artificially separated due to technical differences between batches.
library(scran)
snn.gr <- buildSNNGraph(uncorrected, use.dimred="PCA")
clusters <- igraph::cluster_walktrap(snn.gr)$membership tab <- table(Cluster=clusters, Batch=uncorrected$batch)
tab
## Batch
## Cluster 3k 4k
## 1 1 781
## 2 0 1309
## 3 0 535
## 4 14 51
## 5 0 605
## 6 489 0
## 7 0 184
## 8 1272 0
## 9 0 414
## 10 151 0
## 11 0 50
## 12 155 0
## 13 0 65
## 14 0 61
## 15 0 88
## 16 30 0
## 17 339 0
## 18 145 0
## 19 11 3
## 20 2 36
We can also visualize the corrected coordinates using a $$t$$-SNE plot (Figure 13.1). The strong separation between cells from different batches is consistent with the clustering results.
set.seed(1111001)
uncorrected <- runTSNE(uncorrected, dimred="PCA")
plotTSNE(uncorrected, colour_by="batch")
Of course, the other explanation for batch-specific clusters is that there are cell types that are unique to each batch. The degree of intermingling of cells from different batches is not an effective diagnostic when the batches involved might actually contain unique cell subpopulations (which is not a consideration in the PBMC dataset, but the same cannot be said in general). If a cluster only contains cells from a single batch, one can always debate whether that is caused by a failure of the correction method or if there is truly a batch-specific subpopulation. For example, do batch-specific metabolic or differentiation states represent distinct subpopulations? Or should they be merged together? We will not attempt to answer this here, only noting that each batch correction algorithm will make different (and possibly inappropriate) decisions on what constitutes “shared” and “unique” populations.
## 13.4 Linear regression
Batch effects in bulk RNA sequencing studies are commonly removed with linear regression. This involves fitting a linear model to each gene’s expression profile, setting the undesirable batch term to zero and recomputing the observations sans the batch effect, yielding a set of corrected expression values for downstream analyses. Linear modelling is the basis of the removeBatchEffect() function from the limma package (Ritchie et al. 2015) as well the comBat() function from the sva package (Leek et al. 2012).
To use this approach in a scRNA-seq context, we assume that the composition of cell subpopulations is the same across batches. We also assume that the batch effect is additive, i.e., any batch-induced fold-change in expression is the same across different cell subpopulations for any given gene. These are strong assumptions as batches derived from different individuals will naturally exhibit variation in cell type abundances and expression. Nonetheless, they may be acceptable when dealing with batches that are technical replicates generated from the same population of cells. (In fact, when its assumptions hold, linear regression is the most statistically efficient as it uses information from all cells to compute the common batch vector.) Linear modelling can also accommodate situations where the composition is known a priori by including the cell type as a factor in the linear model, but this situation is even less common.
We use the rescaleBatches() function from the batchelor package to remove the batch effect. This is roughly equivalent to applying a linear regression to the log-expression values per gene, with some adjustments to improve performance and efficiency. For each gene, the mean expression in each batch is scaled down until it is equal to the lowest mean across all batches. We deliberately choose to scale all expression values down as this mitigates differences in variance when batches lie at different positions on the mean-variance trend. (Specifically, the shrinkage effect of the pseudo-count is greater for smaller counts, suppressing any differences in variance across batches.) An additional feature of rescaleBatches() is that it will preserve sparsity in the input matrix for greater efficiency, whereas other methods like removeBatchEffect() will always return a dense matrix.
library(batchelor)
rescaled <- rescaleBatches(pbmc3k, pbmc4k)
rescaled
## class: SingleCellExperiment
## dim: 31232 6791
## assays(1): corrected
## rownames(31232): ENSG00000243485 ENSG00000237613 ... ENSG00000198695
## ENSG00000198727
## rowData names(0):
## colnames: NULL
## colData names(1): batch
## reducedDimNames(0):
## altExpNames(0):
After clustering, we observe that most clusters consist of mixtures of cells from the two replicate batches, consistent with the removal of the batch effect. This conclusion is supported by the apparent mixing of cells from different batches in Figure 13.2. However, at least one batch-specific cluster is still present, indicating that the correction is not entirely complete. This is attributable to violation of one of the aforementioned assumptions, even in this simple case involving replicated batches.
# To ensure reproducibility of the randomized PCA.
set.seed(1010101010)
rescaled <- runPCA(rescaled, subset_row=chosen.hvgs,
exprs_values="corrected",
BSPARAM=BiocSingular::RandomParam())
snn.gr <- buildSNNGraph(rescaled, use.dimred="PCA")
clusters.resc <- igraph::cluster_walktrap(snn.gr)$membership tab.resc <- table(Cluster=clusters.resc, Batch=rescaled$batch)
tab.resc
## Batch
## Cluster 1 2
## 1 278 525
## 2 16 23
## 3 337 606
## 4 43 748
## 5 604 529
## 6 22 71
## 7 188 48
## 8 25 49
## 9 263 0
## 10 123 135
## 11 16 85
## 12 11 57
## 13 116 6
## 14 455 1035
## 15 6 31
## 16 89 187
## 17 3 36
## 18 3 8
## 19 11 3
rescaled <- runTSNE(rescaled, dimred="PCA")
rescaled$batch <- factor(rescaled$batch)
plotTSNE(rescaled, colour_by="batch")
Alternatively, we could use the regressBatches() function to perform a more conventional linear regression for batch correction. This is subject to the same assumptions as described above for rescaleBatches(), though it has the additional disadvantage of discarding sparsity in the matrix of residuals. To avoid this, we avoid explicit calculation of the residuals during matrix multiplication (see ?ResidualMatrix for details), allowing us to perform approximate PCA more efficiently. Advanced users can set design= and specify which coefficients to retain in the output matrix, reminiscent of limma’s removeBatchEffect() function.
set.seed(10001)
residuals <- regressBatches(pbmc3k, pbmc4k, d=50,
subset.row=chosen.hvgs, correct.all=TRUE,
BSPARAM=BiocSingular::RandomParam())
snn.gr <- buildSNNGraph(residuals, use.dimred="corrected")
clusters.resid <- igraph::cluster_walktrap(snn.gr)$membership tab.resid <- table(Cluster=clusters.resid, Batch=residuals$batch)
tab.resid
## Batch
## Cluster 1 2
## 1 478 2
## 2 142 179
## 3 22 41
## 4 298 566
## 5 340 606
## 6 0 138
## 7 404 376
## 8 145 91
## 9 2 636
## 10 22 73
## 11 6 51
## 12 629 1110
## 13 3 36
## 14 91 211
## 15 12 55
## 16 4 8
## 17 11 3
residuals <- runTSNE(residuals, dimred="corrected")
residuals$batch <- factor(residuals$batch)
plotTSNE(residuals, colour_by="batch")
## 13.5 Performing MNN correction
Consider a cell $$a$$ in batch $$A$$, and identify the cells in batch $$B$$ that are nearest neighbors to $$a$$ in the expression space defined by the selected features. Repeat this for a cell $$b$$ in batch $$B$$, identifying its nearest neighbors in $$A$$. Mutual nearest neighbors are pairs of cells from different batches that belong in each other’s set of nearest neighbors. The reasoning is that MNN pairs represent cells from the same biological state prior to the application of a batch effect - see Haghverdi et al. (2018) for full theoretical details. Thus, the difference between cells in MNN pairs can be used as an estimate of the batch effect, the subtraction of which yields batch-corrected values.
Compared to linear regression, MNN correction does not assume that the population composition is the same or known beforehand. This is because it learns the shared population structure via identification of MNN pairs and uses this information to obtain an appropriate estimate of the batch effect. Instead, the key assumption of MNN-based approaches is that the batch effect is orthogonal to the biology in high-dimensional expression space. Violations reduce the effectiveness and accuracy of the correction, with the most common case arising from variations in the direction of the batch effect between clusters. Nonetheless, the assumption is usually reasonable as a random vector is very likely to be orthogonal in high-dimensional space.
The batchelor package provides an implementation of the MNN approach via the fastMNN() function. (Unlike the MNN method originally described by Haghverdi et al. (2018), the fastMNN() function performs PCA to reduce the dimensions beforehand and speed up the downstream neighbor detection steps.) We apply it to our two PBMC batches to remove the batch effect across the highly variable genes in chosen.hvgs. To reduce computational work and technical noise, all cells in all batches are projected into the low-dimensional space defined by the top d principal components. Identification of MNNs and calculation of correction vectors are then performed in this low-dimensional space.
# Again, using randomized SVD here, as this is faster than IRLBA for
# file-backed matrices. We set deferred=TRUE for greater speed.
set.seed(1000101001)
mnn.out <- fastMNN(pbmc3k, pbmc4k, d=50, k=20, subset.row=chosen.hvgs,
BSPARAM=BiocSingular::RandomParam(deferred=TRUE))
mnn.out
## class: SingleCellExperiment
## dim: 13431 6791
## assays(1): reconstructed
## rownames(13431): ENSG00000239945 ENSG00000228463 ... ENSG00000198695
## ENSG00000198727
## rowData names(1): rotation
## colnames: NULL
## colData names(1): batch
## reducedDimNames(1): corrected
## altExpNames(0):
The function returns a SingleCellExperiment object containing corrected values for downstream analyses like clustering or visualization. Each column of mnn.out corresponds to a cell in one of the batches, while each row corresponds to an input gene in chosen.hvgs. The batch field in the column metadata contains a vector specifying the batch of origin of each cell.
head(mnn.out$batch) ## [1] 1 1 1 1 1 1 The corrected matrix in the reducedDims() contains the low-dimensional corrected coordinates for all cells, which we will use in place of the PCs in our downstream analyses. dim(reducedDim(mnn.out, "corrected")) ## [1] 6791 50 A reconstructed matrix in the assays() contains the corrected expression values for each gene in each cell, obtained by projecting the low-dimensional coordinates in corrected back into gene expression space. We do not recommend using this for anything other than visualization (Section 13.8). assay(mnn.out, "reconstructed") ## <13431 x 6791> matrix of class LowRankMatrix and type "double": ## [,1] [,2] [,3] ... [,6790] [,6791] ## ENSG00000239945 -2.522e-06 -1.851e-06 -1.199e-05 . 1.832e-06 -3.641e-06 ## ENSG00000228463 -6.627e-04 -6.724e-04 -4.820e-04 . -8.531e-04 -3.999e-04 ## ENSG00000237094 -8.077e-05 -8.038e-05 -9.631e-05 . 7.261e-06 -4.094e-05 ## ENSG00000229905 3.838e-06 6.180e-06 5.432e-06 . 8.534e-06 3.485e-06 ## ENSG00000237491 -4.527e-04 -3.178e-04 -1.510e-04 . -3.491e-04 -2.082e-04 ## ... . . . . . . ## ENSG00000198840 -0.0296508 -0.0340101 -0.0502385 . -0.0362884 -0.0183084 ## ENSG00000212907 -0.0041681 -0.0056570 -0.0106420 . -0.0083837 0.0005996 ## ENSG00000198886 0.0145358 0.0200517 -0.0307131 . -0.0109254 -0.0070064 ## ENSG00000198695 0.0014427 0.0013490 0.0001493 . -0.0009826 -0.0022712 ## ENSG00000198727 0.0152570 0.0106167 -0.0256450 . -0.0227962 -0.0022898 The most relevant parameter for tuning fastMNN() is k, which specifies the number of nearest neighbors to consider when defining MNN pairs. This can be interpreted as the minimum anticipated frequency of any shared cell type or state in each batch. Increasing k will generally result in more aggressive merging as the algorithm is more generous in matching subpopulations across batches. It can occasionally be desirable to increase k if one clearly sees that the same cell types are not being adequately merged across batches. We cluster on the low-dimensional corrected coordinates to obtain a partitioning of the cells that serves as a proxy for the population structure. If the batch effect is successfully corrected, clusters corresponding to shared cell types or states should contain cells from multiple batches. We see that all clusters contain contributions from each batch after correction, consistent with our expectation that the two batches are replicates of each other. library(scran) snn.gr <- buildSNNGraph(mnn.out, use.dimred="corrected") clusters.mnn <- igraph::cluster_walktrap(snn.gr)$membership
tab.mnn <- table(Cluster=clusters.mnn, Batch=mnn.out$batch) tab.mnn ## Batch ## Cluster 1 2 ## 1 337 606 ## 2 289 542 ## 3 152 181 ## 4 12 4 ## 5 517 467 ## 6 17 19 ## 7 313 661 ## 8 162 118 ## 9 11 56 ## 10 547 1083 ## 11 17 59 ## 12 16 58 ## 13 144 93 ## 14 67 191 ## 15 4 36 ## 16 4 8 See Chapter 34 for an example of a more complex fastMNN() merge involving several human pancreas datasets generated by different authors on different patients with different technologies. ## 13.6 Correction diagnostics ### 13.6.1 Mixing between batches It is possible to quantify the degree of mixing across batches by testing each cluster for imbalances in the contribution from each batch (Büttner et al. 2019). This is done by applying Pearson’s chi-squared test to each row of tab.mnn where the expected proportions under the null hypothesis proportional to the total number of cells per batch. Low $$p$$-values indicate that there are significant imbalances In practice, this strategy is most suited to technical replicates with identical population composition; it is usually too stringent for batches with more biological variation, where proportions can genuinely vary even in the absence of any batch effect. chi.prop <- colSums(tab.mnn)/sum(tab.mnn) chi.results <- apply(tab.mnn, 1, FUN=chisq.test, p=chi.prop) p.values <- vapply(chi.results, "[[", i="p.value", 0) p.values ## 1 2 3 4 5 6 7 8 ## 9.047e-02 3.093e-02 6.700e-03 2.627e-03 8.424e-20 2.775e-01 5.546e-05 2.274e-11 ## 9 10 11 12 13 14 15 16 ## 2.136e-04 5.480e-05 4.019e-03 2.972e-03 1.538e-12 3.936e-05 2.197e-04 7.172e-01 We favor a more qualitative approach whereby we compute the variation in the log-abundances to rank the clusters with the greatest variability in their proportional abundances across batches. We can then focus on batch-specific clusters that may be indicative of incomplete batch correction. Obviously, though, this diagnostic is subject to interpretation as the same outcome can be caused by batch-specific populations; some prior knowledge about the biological context is necessary to distinguish between these two possibilities. For the PBMC dataset, none of the most variable clusters are overtly batch-specific, consistent with the fact that our batches are effectively replicates. # Avoid minor difficulties with the 'table' class. tab.mnn <- unclass(tab.mnn) # Using a large pseudo.count to avoid unnecessarily # large variances when the counts are low. norm <- normalizeCounts(tab.mnn, pseudo.count=10) # Ranking clusters by the largest variances. rv <- rowVars(norm) DataFrame(Batch=tab.mnn, var=rv)[order(rv, decreasing=TRUE),] ## DataFrame with 16 rows and 3 columns ## Batch.1 Batch.2 var ## <integer> <integer> <numeric> ## 15 4 36 0.934778 ## 13 144 93 0.728465 ## 9 11 56 0.707757 ## 8 162 118 0.563419 ## 4 12 4 0.452565 ## ... ... ... ... ## 6 17 19 0.05689945 ## 10 547 1083 0.04527468 ## 2 289 542 0.02443988 ## 1 337 606 0.01318296 ## 16 4 8 0.00689661 We can also visualize the corrected coordinates using a $$t$$-SNE plot (Figure 13.4). The presence of visual clusters containing cells from both batches provides a comforting illusion that the correction was successful. library(scater) set.seed(0010101010) mnn.out <- runTSNE(mnn.out, dimred="corrected") mnn.out$batch <- factor(mnn.out$batch) plotTSNE(mnn.out, colour_by="batch") For fastMNN(), one useful diagnostic is the proportion of variance within each batch that is lost during MNN correction. Specifically, this refers to the within-batch variance that is removed during orthogonalization with respect to the average correction vector at each merge step. This is returned via the lost.var field in the metadata of mnn.out, which contains a matrix of the variance lost in each batch (column) at each merge step (row). metadata(mnn.out)$merge.info$lost.var ## [,1] [,2] ## [1,] 0.006617 0.003315 Large proportions of lost variance (>10%) suggest that correction is removing genuine biological heterogeneity. This would occur due to violations of the assumption of orthogonality between the batch effect and the biological subspace (Haghverdi et al. 2018). In this case, the proportion of lost variance is small, indicating that non-orthogonality is not a major concern. ### 13.6.2 Preserving biological heterogeneity Another useful diagnostic check is to compare the clustering within each batch to the clustering of the merged data. Accurate data integration should preserve variance within each batch as there should be nothing to remove between cells in the same batch. This check complements the previously mentioned diagnostics that only focus on the removal of differences between batches. Specifically, it protects us against cases where the correction method simply aggregates all cells together, which would achieve perfect mixing but also discard the biological heterogeneity of interest. Ideally, we should see a many-to-1 mapping where the across-batch clustering is nested inside the within-batch clusterings. This indicates that any within-batch structure was preserved after correction while acknowledging that greater resolution is possible with more cells. In practice, more discrepancies can be expected even when the correction is perfect, due to the existence of closely related clusters that were arbitrarily separated in the within-batch clustering. As a general rule, we can be satisfied with the correction if the vast majority of entries in Figure 13.5 are zero, though this may depend on whether specific clusters of interest are gained or lost. library(pheatmap) # For the first batch (adding +10 for a smoother color transition # from zero to non-zero counts for any given matrix entry). tab <- table(paste("after", clusters.mnn[rescaled$batch==1]),
paste("before", colLabels(pbmc3k)))
heat3k <- pheatmap(log10(tab+10), cluster_row=FALSE, cluster_col=FALSE,
main="PBMC 3K comparison", silent=TRUE)
# For the second batch.
tab <- table(paste("after", clusters.mnn[rescaled$batch==2]), paste("before", colLabels(pbmc4k))) heat4k <- pheatmap(log10(tab+10), cluster_row=FALSE, cluster_col=FALSE, main="PBMC 4K comparison", silent=TRUE) gridExtra::grid.arrange(heat3k[[4]], heat4k[[4]]) We use the adjusted Rand index (Section 10.6.2) to quantify the agreement between the clusterings before and after batch correction. Recall that larger indices are more desirable as this indicates that within-batch heterogeneity is preserved, though this must be balanced against the ability of each method to actually perform batch correction. library(bluster) ri3k <- pairwiseRand(clusters.mnn[rescaled$batch==1], colLabels(pbmc3k), mode="index")
ri3k
## [1] 0.7361
ri4k <- pairwiseRand(clusters.mnn[rescaled$batch==2], colLabels(pbmc4k), mode="index") ri4k ## [1] 0.8301 We can also break down the ARI into per-cluster ratios for more detailed diagnostics (Figure 13.6). For example, we could see low ratios off the diagonal if distinct clusters in the within-batch clustering were incorrectly aggregated in the merged clustering. Conversely, we might see low ratios on the diagonal if the correction inflated or introduced spurious heterogeneity inside a within-batch cluster. # For the first batch. tab <- pairwiseRand(colLabels(pbmc3k), clusters.mnn[rescaled$batch==1])
heat3k <- pheatmap(tab, cluster_row=FALSE, cluster_col=FALSE,
col=rev(viridis::magma(100)), main="PBMC 3K probabilities", silent=TRUE)
# For the second batch.
plotTSNE(mnn.out2[,mnn.out2batch==2], colour_by=I(colLabels(pbmc4k))), ncol=2 ) ## 13.8 Using the corrected values The greatest value of batch correction lies in facilitating cell-based analysis of population heterogeneity in a consistent manner across batches. Cluster 1 in batch A is the same as cluster 1 in batch B when the clustering is performed on the merged data. There is no need to identify mappings between separate clusterings, which might not even be possible when the clusters are not well-separated. The burden of interpretation is consolidated by generating a single set of clusters for all batches, rather than requiring separate examination of each batch’s clusters. Another benefit is that the available number of cells is increased when all batches are combined, which allows for greater resolution of population structure in downstream analyses. We previously demonstrated the application of clustering methods to the batch-corrected data, but the same principles apply for other analyses like trajectory reconstruction. At this point, it is also tempting to use the corrected expression values for gene-based analyses like DE-based marker gene detection. This is not generally recommended as an arbitrary correction algorithm is not obliged to preserve the magnitude (or even direction) of differences in per-gene expression when attempting to align multiple batches. For example, cosine normalization in fastMNN() shrinks the magnitude of the expression values so that the computed log-fold changes have no obvious interpretation. Of greater concern is the possibility that the correction introduces artificial agreement across batches. To illustrate: 1. Consider a dataset (first batch) with two cell types, $$A$$ and $$B$$. Consider a second batch with the same cell types, denoted as $$A'$$ and $$B'$$. Assume that, for some reason, gene $$X$$ is expressed in $$A$$ but not in $$A'$$, $$B$$ or $$B'$$ - possibly due to some difference in how the cells were treated, or maybe due to a donor effect. 2. We then merge the batches together based on the shared cell types. This yields a result where $$A$$ and $$A'$$ cells are intermingled and the difference due to $$X$$ is eliminated. One can debate whether this should be the case, but in general, it is necessary for batch correction methods to smooth over small biological differences (as discussed in Section 13.3). 3. Now, if we corrected the second batch to the first, we must have coerced the expression values of $$X$$ in $$A'$$ to non-zero values to align with those of $$A$$, while leaving the expression of $$X$$ in $$B'$$ and $$B$$ at zero. Thus, we have artificially introduced DE between $$A'$$ and $$B'$$ for $$X$$ in the second batch to align with the DE between $$A$$ and $$B$$ in the first batch. (The converse is also possible where DE in the first batch is artificially removed to align with the second batch, depending on the order of merges.) 4. The artificial DE has implications for the identification of the cell types and interpretation of the results. We would be misled into believing that both $$A$$ and $$A'$$ are $$X$$-positive, when in fact this is only true for $$A$$. At best, this is only a minor error - after all, we do actually have $$X$$-positive cells of that overall type, we simply do not see that $$A'$$ is $$X$$-negative. At worst, this can compromise the conclusions, e.g., if the first batch was drug treated and the second batch was a control, we might mistakenly think that a $$X$$-positive population exists in the latter and conclude that our drug has no effect. Rather, it is preferable to perform DE analyses using the uncorrected expression values with blocking on the batch, as discussed in Section 11.4. This strategy is based on the expectation that any genuine DE between clusters should still be present in a within-batch comparison where batch effects are absent. It penalizes genes that exhibit inconsistent DE across batches, thus protecting against misleading conclusions when a population in one batch is aligned to a similar-but-not-identical population in another batch. We demonstrate this approach below using a blocked $$t$$-test to detect markers in the PBMC dataset, where the presence of the same pattern across clusters within each batch (Figure 13.8) is reassuring. If integration is performed across multiple conditions, it is even more important to use the uncorrected expression values for downstream analyses - see Section 14.6.2 for a discussion. m.out <- findMarkers(uncorrected, clusters.mnn, block=uncorrectedbatch,
direction="up", lfc=1, row.data=rowData(uncorrected)[,3,drop=FALSE])
# A (probably activated?) T cell subtype of some sort:
demo <- m.out[["10"]]
as.data.frame(demo[1:20,c("Symbol", "Top", "p.value", "FDR")])
## Symbol Top p.value FDR
## ENSG00000177954 RPS27 1 3.399e-168 1.061e-163
## ENSG00000227507 LTB 1 1.238e-157 1.934e-153
## ENSG00000167286 CD3D 1 9.136e-89 4.076e-85
## ENSG00000111716 LDHB 1 8.699e-44 1.811e-40
## ENSG00000008517 IL32 1 4.880e-31 6.928e-28
## ENSG00000172809 RPL38 1 8.727e-143 6.814e-139
## ENSG00000171223 JUNB 1 8.762e-72 2.737e-68
## ENSG00000071082 RPL31 2 8.612e-78 2.989e-74
## ENSG00000121966 CXCR4 2 2.370e-07 1.322e-04
## ENSG00000251562 MALAT1 2 3.618e-33 5.650e-30
## ENSG00000133639 BTG1 2 6.847e-12 4.550e-09
## ENSG00000170345 FOS 2 2.738e-46 6.108e-43
## ENSG00000129824 RPS4Y1 2 1.075e-108 6.713e-105
## ENSG00000177606 JUN 3 1.039e-37 1.910e-34
## ENSG00000112306 RPS12 3 1.656e-33 2.722e-30
## ENSG00000110700 RPS13 3 7.600e-18 7.657e-15
## ENSG00000198851 CD3E 3 1.058e-36 1.836e-33
## ENSG00000213741 RPS29 3 1.494e-148 1.555e-144
## ENSG00000116251 RPL22 4 3.992e-25 4.796e-22
## ENSG00000144713 RPL32 4 1.224e-32 1.820e-29
plotExpression(uncorrected, x=I(factor(clusters.mnn)),
features="ENSG00000177954", colour_by="batch") + facet_wrap(~colour_by)
We suggest limiting the use of per-gene corrected values to visualization, e.g., when coloring points on a $$t$$-SNE plot by per-cell expression. This can be more aesthetically pleasing than uncorrected expression values that may contain large shifts on the colour scale between cells in different batches. Use of the corrected values in any quantitative procedure should be treated with caution, and should be backed up by similar results from an analysis on the uncorrected values.
## Session Info
R version 4.0.3 (2020-10-10)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.1 LTS
Matrix products: default
BLAS: /home/biocbuild/bbs-3.12-bioc/R/lib/libRblas.so
LAPACK: /home/biocbuild/bbs-3.12-bioc/R/lib/libRlapack.so
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] parallel stats4 stats graphics grDevices utils datasets
[8] methods base
other attached packages:
[1] bluster_1.0.0 pheatmap_1.0.12
[3] scater_1.18.3 ggplot2_3.3.2
[5] scran_1.18.1 batchelor_1.6.2
[7] SingleCellExperiment_1.12.0 SummarizedExperiment_1.20.0
[9] Biobase_2.50.0 GenomicRanges_1.42.0
[11] GenomeInfoDb_1.26.1 HDF5Array_1.18.0
[13] rhdf5_2.34.0 DelayedArray_0.16.0
[15] IRanges_2.24.0 S4Vectors_0.28.0
[17] MatrixGenerics_1.2.0 matrixStats_0.57.0
[19] BiocGenerics_0.36.0 Matrix_1.2-18
[21] BiocStyle_2.18.1 rebook_1.0.0
loaded via a namespace (and not attached):
[1] bitops_1.0-6 RColorBrewer_1.1-2
[3] tools_4.0.3 R6_2.5.0
[5] irlba_2.3.3 ResidualMatrix_1.0.0
[7] vipor_0.4.5 colorspace_2.0-0
[9] rhdf5filters_1.2.0 withr_2.3.0
[11] tidyselect_1.1.0 gridExtra_2.3
[13] processx_3.4.5 compiler_4.0.3
[15] graph_1.68.0 BiocNeighbors_1.8.2
[17] labeling_0.4.2 bookdown_0.21
[19] scales_1.1.1 callr_3.5.1
[21] stringr_1.4.0 digest_0.6.27
[23] rmarkdown_2.5 XVector_0.30.0
[25] pkgconfig_2.0.3 htmltools_0.5.0
[27] sparseMatrixStats_1.2.0 highr_0.8
[29] limma_3.46.0 rlang_0.4.9
[31] DelayedMatrixStats_1.12.1 farver_2.0.3
[33] generics_0.1.0 BiocParallel_1.24.1
[35] dplyr_1.0.2 RCurl_1.98-1.2
[37] magrittr_2.0.1 BiocSingular_1.6.0
[39] GenomeInfoDbData_1.2.4 scuttle_1.0.3
[41] Rcpp_1.0.5 ggbeeswarm_0.6.0
[43] munsell_0.5.0 Rhdf5lib_1.12.0
[45] viridis_0.5.1 lifecycle_0.2.0
[47] stringi_1.5.3 yaml_2.2.1
[49] edgeR_3.32.0 zlibbioc_1.36.0
[51] Rtsne_0.15 grid_4.0.3
[53] dqrng_0.2.1 crayon_1.3.4
[55] lattice_0.20-41 cowplot_1.1.0
[57] beachmat_2.6.2 locfit_1.5-9.4
[59] CodeDepends_0.6.5 knitr_1.30
[61] ps_1.5.0 pillar_1.4.7
[63] igraph_1.2.6 codetools_0.2-18
[65] XML_3.99-0.5 glue_1.4.2
[67] evaluate_0.14 BiocManager_1.30.10
[69] vctrs_0.3.5 gtable_0.3.0
[71] purrr_0.3.4 xfun_0.19
[73] rsvd_1.0.3 viridisLite_0.3.0
[75] tibble_3.0.4 beeswarm_0.2.3
[77] statmod_1.4.35 ellipsis_0.3.1
### Bibliography
Butler, A., P. Hoffman, P. Smibert, E. Papalexi, and R. Satija. 2018. “Integrating single-cell transcriptomic data across different conditions, technologies, and species.” Nat. Biotechnol. 36 (5): 411–20.
Büttner, Maren, Zhichao Miao, F Alexander Wolf, Sarah A Teichmann, and Fabian J Theis. 2019. “A Test Metric for Assessing Single-Cell Rna-Seq Batch Correction.” Nature Methods 16 (1): 43–49.
Haghverdi, L., A. T. L. Lun, M. D. Morgan, and J. C. Marioni. 2018. “Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.” Nat. Biotechnol. 36 (5): 421–27.
Leek, J. T., W. E. Johnson, H. S. Parker, A. E. Jaffe, and J. D. Storey. 2012. “The sva package for removing batch effects and other unwanted variation in high-throughput experiments.” Bioinformatics 28 (6): 882–83.
Lin, Y., S. Ghazanfar, K. Y. X. Wang, J. A. Gagnon-Bartsch, K. K. Lo, X. Su, Z. G. Han, et al. 2019. “scMerge leverages factor analysis, stable expression, and pseudoreplication to merge multiple single-cell RNA-seq datasets.” Proc. Natl. Acad. Sci. U.S.A. 116 (20): 9775–84.
Ritchie, M. E., B. Phipson, D. Wu, Y. Hu, C. W. Law, W. Shi, and G. K. Smyth. 2015. “limma powers differential expression analyses for RNA-sequencing and microarray studies.” Nucleic Acids Res. 43 (7): e47. | 2021-04-10 20:17:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5433469414710999, "perplexity": 5202.111222189201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00073.warc.gz"} |
https://solvedlib.com/calculate-the-cost-of-issuing-new-equity-for-a,147387 | # Calculate the cost of issuing new equity for a firm, assuming issue costs are 6 percent...
###### Question:
Calculate the cost of issuing new equity for a firm, assuming issue costs are 6 percent of the share price after taxes; market price per share = $44; current dividend =$4.25; and the constant growth rate in dividends is 4 percent. (Round answer to 2 decimal places, e.g. 15.75%.) Cost Wildhorse Industries after-tax cost of debt is 8.4% and the company has an effective tax rate of 30%. What is the company's yield on debt? (Round answer to 2 decimal places, e.g. 15.75.) Company's yield on debt % A firm's cost of debt can best be estimated: O by adding a risk premium to the coupon rate. O using the yield-to-maturity on newly issued debt of other firms. using the firm's borrowing rate on short-term loans. O using the yield-to-maturity on the firm's outstanding debt. The cost of a security to a company may differ from the security's yield in the capital markets due to: I. Flotation costs II. Agency costs III. Taxes O I only I and II only I and III only
#### Similar Solved Questions
##### 5Small KD value will be increased by: * (1 Point)Salting outAdding drying agentAdding weak base to organic layerEmulsion formed
5 Small KD value will be increased by: * (1 Point) Salting out Adding drying agent Adding weak base to organic layer Emulsion formed...
##### Case Study 2 Please read the following case study and answer the questions in a separate...
Case Study 2 Please read the following case study and answer the questions in a separate document and save as a word document (.doc or .docx) or a .pdf document. Blackboard does not accept google docs (-pages) but if you work in google you can save your response as a pdf for submission. Please revie...
##### Name Section/CRN EXPERIMENT 9 POTENTIOMETRIC DETERMINATION OF AN EQUILIBRIUM CONSTANT PRE-LABORATORY QUESTIONS The following preparatory questions...
Name Section/CRN EXPERIMENT 9 POTENTIOMETRIC DETERMINATION OF AN EQUILIBRIUM CONSTANT PRE-LABORATORY QUESTIONS The following preparatory questions should be answered before coming to lab. They are intended to introduce you to several jdeas that are important to aspects of the experiment. You must tu...
##### Question 9You intend to estimate population mean with confidence interval You believe the population to have normal distribution Your sample size is 15.Find the critical value that corresponds to confidence level of 855. (Report answer accurate to three decimal places with appropriate rounding )Question Help: @Message InstrctorSubmit Question
Question 9 You intend to estimate population mean with confidence interval You believe the population to have normal distribution Your sample size is 15. Find the critical value that corresponds to confidence level of 855. (Report answer accurate to three decimal places with appropriate rounding ) ...
##### Problem 3. Find the lim sup and lim inf of the sequenceIn = nsin(Tn/2)Justify your answers with a brief explanation:
Problem 3. Find the lim sup and lim inf of the sequence In = nsin(Tn/2) Justify your answers with a brief explanation:...
##### An experimente creatcc which subatomic particles strike ground state atoms that are neutral: After te collision, itis discovered that one cf the atoms_ remainino nas completely lost an electron and other elecrons are prcmoteu excited statos_ Careful measurements shov: that the electron configuration aiter the collision 15"2s"2p535 13p"5d? WMnat = was ine onginal ground state electron configuration of that atoms before the collision? Which element is it?
An experimente creatcc which subatomic particles strike ground state atoms that are neutral: After te collision, itis discovered that one cf the atoms_ remainino nas completely lost an electron and other elecrons are prcmoteu excited statos_ Careful measurements shov: that the electron configuration...
##### An airplane of mass 10,000 kg is flying over level ground at a constant altitude of...
An airplane of mass 10,000 kg is flying over level ground at a constant altitude of 4.30 km with a constant velocity of 165 m/s west. Its direction is currently straight toward the top of a mountain. At a particular instant, a house is directly below the plane at ground level. (a) At this instant, w...
##### Average trade sizes declined in recent years because many large investors seek anonymity for fear that...
Average trade sizes declined in recent years because many large investors seek anonymity for fear that their intentions will become known to other investors. True or False...
##### 1 If instead of (-L, L), piecewise smooth function f (x) is defined on (~T,7) and is extended periodically with period 21 , then f (z) @o + E(an cosnx + bn sin nx). n= (a) Write down the formulas for the Fourier coefficients and the orthogonality relations (for sines and cosines) by modifying the corresponding formulas from the text.(6) Let Sv(z) @o + Z(an cos nx + bn sin nx). Show using the formulas in part (a) that n=] () [ Sn(r) dx = 2ra0 (ii) Sv(z) cos nx dx Tan (iii) Sv(x) sin nx dx Tbn (iv
1 If instead of (-L, L), piecewise smooth function f (x) is defined on (~T,7) and is extended periodically with period 21 , then f (z) @o + E(an cosnx + bn sin nx). n= (a) Write down the formulas for the Fourier coefficients and the orthogonality relations (for sines and cosines) by modifying the co...
##### IMG-20200602-WAOOOS pdfImG-20200602-WAOOOT pC(uHt+x^(2I1D }MG-20200602-WAOOOS_pdf Translate ADU Engineering 2 _Kuran-i Kerim1 dx =? 0 1+VxA) 2-In[4]B} 2+In[4]In[4]
IMG-20200602-WAOOOS pdf ImG-20200602-WAOOOT pC (uHt+x^(2I1D } MG-20200602-WAOOOS_pdf Translate ADU Engineering 2 _ Kuran-i Kerim 1 dx =? 0 1+Vx A) 2-In[4] B} 2+In[4] In[4]...
##### Are the following salts acidic, neutral, or basic? First, list each cation and anion formula and identify each ion'$properties (SA WA WB, SB), then combine to get the overall salt'$ property (A, N, B} Salt cation anion overall Fe(NOs)KCIOaNazCOzRbzOMg(HSOa)zCH;NH;Br2. What reaction occurs when the salt is dissolved in water? Write net ionic equations that demonstrate how acidity or basicity occurs for each salt:
Are the following salts acidic, neutral, or basic? First, list each cation and anion formula and identify each ion'$properties (SA WA WB, SB), then combine to get the overall salt'$ property (A, N, B} Salt cation anion overall Fe(NOs) KCIOa NazCOz RbzO Mg(HSOa)z CH;NH;Br 2. What reaction ...
##### Problem 2: (1Opts) thie sets and are HOH-mu ually exclusive. show (tbat: P(Tur) =PCT) + P(Y) - PCTon Given IWO evems Md (aw Vemu dingtam detoustrnfe tlual P(1) = PcTont PGoY. and deduce thar P(D) = PGTYP() + PcrYP()
Problem 2: (1Opts) thie sets and are HOH-mu ually exclusive. show (tbat: P(Tur) =PCT) + P(Y) - PCTon Given IWO evems Md (aw Vemu dingtam detoustrnfe tlual P(1) = PcTont PGoY. and deduce thar P(D) = PGTYP() + PcrYP()...
##### If 60 seconds are in a minute, 60 minutes in an hour, and 24 hours in a day, then 86,400 seconds are in a day
If 60 seconds are in a minute, 60 minutes in an hour, and 24 hours in a day, then 86,400 seconds are in a day. What type of reasoning is this?inductivedeductive...
##### Question 10 / Explain in detail CHu Reviey Solve using the square root prope Hr), of...
Question 10 / Explain in detail CHu Reviey Solve using the square root prope Hr), of the b given by hit a) How lo 5) (b - 3)2 49 6 (6y + 7)2 16 t? b) How 4)2-2 (9.1-9.2) Kee solving quad 9) distance between the points (-8, 3) and (-12,5 completing tions using To A rectangle has a length of 5V2 in a...
##### At the beginning of 2022, Terrarium Co. had $1,620,000 of gross accounts receivables and$201,000 in...
At the beginning of 2022, Terrarium Co. had $1,620,000 of gross accounts receivables and$201,000 in their allowance for doubtful accounts. During 2022, Terrarium sold $4,860,000 worth of plants on account and collected$2,550,000 worth of receivables in cash. In addition, Terrarium wrote-off \$63,00... | 2022-05-23 23:42:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5865415334701538, "perplexity": 6395.388905472732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562106.58/warc/CC-MAIN-20220523224456-20220524014456-00332.warc.gz"} |
https://www.expii.com/t/poh-scale-overview-calculations-11127 | Expii
pOH Scale — Overview & Calculations - Expii
The pOH scale measures how acidic or basic a solution is based on hydroxide ion (OH$$^-$$) concentration. A pOH of less than 7 is basic and greater than 7 is acidic. | 2022-05-19 19:18:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45318371057510376, "perplexity": 8040.567248109807}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00383.warc.gz"} |
https://strings.math.tecnico.ulisboa.pt/seminars?id=4979 | ## 01/04/2019, Monday, 15:00–16:00 Room P3.10, Mathematics Building
In this talk I will present the results of my investigation on the ODE/IM correspondence in quantum $g$-KdV models, where $g$ is an untwisted affine Kac-Moody algebra. I will construct solutions of the corresponding Bethe Ansatz equations, as the (irregular) monodromy data of a meromorphic $L(g)$-oper, where $L(g)$ denotes the Langlands dual algebra of $g$.
3. D Masoero, A Raimondo, Opers corresponding to Higher States of the $g$-Quantum KdV model. arXiv 2018. | 2022-09-28 20:09:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.960709810256958, "perplexity": 2118.9037573975565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00669.warc.gz"} |
https://www.datascienceatthecommandline.com/chapter-7-exploring-data.html | # Chapter 7 Exploring Data
Now that we have obtained and scrubbed our data, we can continue with the third step of the OSEMN model, which is to explore it. After all that hard work, (unless you already had clean data lying around!), it is time for some fun.
Exploring is the step where you familiarize yourself with the data. Being familiar with the data is essential when you want to extract any value from it. For example, knowing what kind of features the data has, means you know which ones are worth further exploring and which ones you can use to answer any questions that you have.
Exploring your data can be done from three perspectives. The first perspective is to inspect the data and its properties. Here, we want to know, for example, what the raw data looks like, how many data points the data set has, and what kind of features the data set has.
The second perspective from which we can explore out data is to compute descriptive statistics. This perspective is useful for learning more about the individual features. One advantage of this perspective is that the output is often brief and textual and can therefore be printed on the command line.
The third perspective is to create visualizations of the data. From this perspective we can gain insight into how multiple features interact. We’ll discuss a way of creating visualizations that can be printed on the command line. However, visualizations are best suited to be displayed on a graphical user interfaces. An advantage of visualizations over descriptive statistics is that they are more flexible and that they can convey much more information.
## 7.1 Overview
In this chapter, you’ll learn how to:
• Inspect the data and its properties.
• Compute descriptive statistics.
• Create data visualizations inside and outside the command line.
## 7.2 Inspecting Data and its Properties
In this section we’ll demonstrate how to inspect your data set and its properties. Because the upcoming visualization and modeling techniques expect the data to be in tabular format, we’ll assume that the data is in CSV format. You can use the techniques described in Chapter 5 to convert your data to CSV if necessary.
For simplicity sake, we’ll also assume that your data has a header. In the first subsection we are going to determine whether that is the case. Once we know we have a header, we can continue answering the following questions:
• How many data points and features does the data set have?
• What does the raw data look like?
• What kind of features does the data set have?
• Can some of these features be treated as categorical or as factors?
### 7.2.1 Header Or Not, Here I Come
You can check whether your file has a header by printing the first few lines:
$#? [echo]$ head file.csv | csvlook
It is then up to you to decide whether the first line is indeed a header or already the first data point. When the data set contains no header or when its header contains newlines, you’re best off going back to Chapter 5 and correct that.
### 7.2.2 Inspect All The Data
If you want to inspect the raw data, then it’s best not to use the cat command-line tool, since cat prints all the data to the screen in one go. In order to inspect the raw data at your own pace, we recommend to use less (Nudelman 2013) with the -S command-line argument:
$#? [echo]$ less -S file.csv
The -S command-line argument ensures that long lines are not being wrapped when they don’t fit in the terminal. Instead, less allows you to scroll horizontally to see the rest of the lines. The advantage of less is that it does not load the entire file into memory, which is good for viewing large files. Once you’re in less, you can scroll down a full screen by pressing <Space>. Scrolling horizontally is done by pressing <Left> and <Right>. Press g and G to go to start and the end of the file, respectively. Quiting less is done by pressing q. Read the man page for more key bindings.
If you want the data set to be nicely formatted, you can add in csvlook:
$#? [echo]$ < file.csv csvlook | less -S
Unfortunately, csvlook needs to read the entire file into memory in order to determine the width of the columns. So, when you want to inspect a very large file, then either you may want to get a subset (using sample, for example) or you may need to be patient.
### 7.2.3 Feature Names and Data Types
In order to gain insight into the data set, it is useful to print the feature names and study them. After all, the feature names may indicate the meaning of the feature. You can use the following sed expression for this:
$< data/iris.csv sed -e 's/,/\n/g;q' Note that this basic command assumes that the file is delimited by commas. Just as reminder: if you intend to use this command often, you could define a function in your .bashrc file called, say, names: Example 7.1 () names () { sed -e 's/,/\n/g;q'; } Which you can then you use like this: $ < data/investments.csv names
company_name
company_category_list
company_market
company_country_code
company_state_code
company_region
company_city
investor_name
investor_category_list
investor_market
investor_country_code
investor_state_code
investor_region
investor_city
funding_round_type
funding_round_code
funded_at
funded_month
funded_quarter
funded_year
raised_amount_usd
We can go a step further than just printing the column names. Besides the names of the columns, it would be very useful to know what type of values each column contains. Examples of data types are a string of characters, a numerical value, or a date. Assume that we have the following toy data set:
$< data/datatypes.csv csvlook |-----+--------+-------+----------+------------------+------------+----------| | a | b | c | d | e | f | g | |-----+--------+-------+----------+------------------+------------+----------| | 2 | 0.0 | FALSE | "Yes!" | 2011-11-11 11:00 | 2012-09-08 | 12:34 | | 42 | 3.1415 | TRUE | Oh, good | 2014-09-15 | 12/6/70 | 0:07 PM | | 66 | | False | 2198 | | | | |-----+--------+-------+----------+------------------+------------+----------| We’ve already used csvsql in Chapter 5 to execute SQL queries directly on CSV data. When no command-line arguments are passed, it generates the necessary SQL statement that would be needed if we were to insert this data into an actual database. We can use the output also for ourselves to inspect what the inferred column types are: csvsql data/datatypes.csv CREATE TABLE datatypes ( a INTEGER NOT NULL, b FLOAT, c BOOLEAN NOT NULL, d VARCHAR(8) NOT NULL, e DATETIME, f DATE, g TIME, CHECK (c IN (0, 1)) ); provides on overview of what the various SQL data types mean. If a column has the NOT NULL string printed after the data type, then that column contains no missing values. Python versus SQL data types Type Python SQL Character string unicode VARCHAR Boolean bool BOOLEAN Integer int INTEGER Real number float FLOAT Date datetime.date DATE Time datetime.time TIME Date and time datetime.datetime DATETIME ### 7.2.4 Unique Identifiers, Continuous Variables, and Factors Knowing the data type of each feature is not enough. It is also essential to know what each feature represents. Having knowledge about the domain is very useful here, however we may also get some ideas from the data itself. Both a string and an integer could be a unique identifier or could represent a category. In the latter case, this could be used to assign a color to your visualization. If an integer denotes, say, the ZIP Code, then it doesn’t make sense to compute the average. To determine whether a feature should be treated as a unique identifier or categorical variable (or factor in R terms), you could count the number of unique values for a specific column: $ cat data/iris.csv | csvcut -c species | body "sort | uniq | wc -l"
species
3
Or we can use csvstat (Groskopf 2014a), which is part of csvkit, to get the number of unique values for each column:
$csvstat data/investments2.csv --unique 1. company_permalink: 27342 2. company_name: 27324 3. company_category_list: 8759 4. company_market: 443 5. company_country_code: 150 6. company_state_code: 147 7. company_region: 1079 8. company_city: 3305 9. investor_permalink: 11176 10. investor_name: 11135 11. investor_category_list: 468 12. investor_market: 134 13. investor_country_code: 111 14. investor_state_code: 80 15. investor_region: 549 16. investor_city: 1198 17. funding_round_permalink: 41790 18. funding_round_type: 13 19. funding_round_code: 15 20. funded_at: 3595 21. funded_month: 295 22. funded_quarter: 121 23. funded_year: 34 24. raised_amount_usd: 6143 If the number of unique values is low compared to the number of rows, then that feature may indeed be treated as a categorical one (such as funding_round_type). If the number is equal to the number of rows, it may be a unique identifier (such as company_permalink). ## 7.3 Computing Descriptive Statistics ### 7.3.1 csvstat The command-line tool csvstat gives a lot of information. For each feature (column), it shows: • The data type in Python terminology (see Table 7-1 for a comparison between Python and SQL data types). • Whether it has any missing values (nulls). • The number of unique values. • Various descriptive statistics (maximum, minimum, sum, mean, standard deviation, and median) for those features for which it is appropriate. We invoke csvstat as follows: $ csvstat data/datatypes.csv
1. a
<type 'int'>
Nulls: False
Values: 2, 66, 42
2. b
<type 'float'>
Nulls: True
Values: 0.0, 3.1415
3. c
<type 'bool'>
Nulls: False
Unique values: 2
5 most frequent values:
False: 2
True: 1
4. d
<type 'unicode'>
Nulls: False
Values: 2198, "Yes!", Oh, good
5. e
<type 'datetime.datetime'>
Nulls: True
Values: 2011-11-11 11:00:00, 2014-09-15 00:00:00
6. f
<type 'datetime.date'>
Nulls: True
Values: 2012-09-08, 1970-12-06
7. g
<type 'datetime.time'>
Nulls: True
Values: 12:34:00, 12:07:00
Row count: 3
This gives a very verbose output. For a more concise output specify one of the statistics arguments:
• --max (maximum)
• --min (minimum)
• --sum (sum)
• --mean (mean)
• --median (median)
• --stdev (standard deviation)
• --nulls (whether column contains nulls)
• --unique (unique values)
• --freq (frequent values)
• --len (max value length)
For example:
$csvstat data/datatypes.csv --null 1. a: False 2. b: True 3. c: False 4. d: False 5. e: True 6. f: True 7. g: True You can select a subset of features with the -c command-line argument. This accepts both integers and column names: $ csvstat data/investments2.csv -c 2,13,19,24
2. company_name
<type 'unicode'>
Nulls: True
Unique values: 27324
5 most frequent values:
Aviir: 13
Galectin Therapeutics: 12
Rostima: 12
Lending Club: 11
Max length: 66
13. investor_country_code
<type 'unicode'>
Nulls: True
Unique values: 111
5 most frequent values:
USA: 20806
GBR: 2357
DEU: 946
CAN: 893
FRA: 737
Max length: 15
19. funding_round_code
<type 'unicode'>
Nulls: True
Unique values: 15
5 most frequent values:
a: 7529
b: 4776
c: 2452
d: 1042
e: 384
Max length: 10
24. raised_amount_usd
<type 'int'>
Nulls: True
Min: 0
Max: 3200000000
Sum: 359891203117
Mean: 10370010.1748
Median: 3250000
Standard Deviation: 38513119.1802
Unique values: 6143
5 most frequent values:
10000000: 1159
1000000: 1074
5000000: 1066
2000000: 875
3000000: 820
Row count: 41799
Please note that csvstat, just like csvsql, employs heuristics to determine the data type, and therefore may not always get it right. We encourage you to always do a manual inspection as discussed in the previous subsection. Moreover, the type may be a character string or integer that doesn’t say anything about how it should be used.
As a nice extra, csvstat outputs, at the very end, the number of data points (rows). Newlines and commas inside values are handles correctly. To only see the relevant line, we can use tail:
$csvstat data/iris.csv | tail -n 1 If you only want to see the actual number number of data points, you can use, for example, the following sed expression to extract the number: $ csvstat data/iris.csv | sed -rne '${s/^([^:]+): ([0-9]+)$/\2/;p}'
### 7.3.2 Using R from the Command Line using Rio
In this section we would like to introduce you to a command-line tool called Rio, which is essentially a small, nifty wrapper around the statistical programming environment R. Before we explain what Rio does and why it exists, lets talk a bit about R itself.
R is a very powerful statistical software package to analyze data and create visualizations. It’s an interpreted programming language, has an extensive collection of packages, and offers its own REPL (Read-Eval-Print-Loop), which allows you, similar to the command line, to play with your data. Unfortunately, R is quite separated from the command line. Once you start it, you’re in a separate environment. R doesn’t really play well with the command line because you cannot pipe any data into it and it also doesn’t support any one-liners that you can specify.
For example, imagine that you have a CSV file called tips.csv, and you would like compute the tip percentage, and save the result. To accomplish this in R you would first startup R:
$#? [echo]$ R
And then run the following commands:
> tips <- read.csv('tips.csv', header = T, sep = ',', stringsAsFactors = F)
> tips.percent <- tips$tip / tips$bill * 100
> cat(tips.percent, sep = '\n', file = 'percent.csv')
> q("no")
Afterwards, you can continue with the saved file percent.csv on the command line. Note that there is only one command that is associated with what we want to accomplish specifically. The other commands are necessary boilerplate. Typing in this boilerplate in order to accomplish something simple is cumbersome and breaks your workflow. Sometimes, you only want to do one or two things at a time to your data. Wouldn’t it be great if we could harness the power of R and be able to use it from the command line?
This is where Rio comes in. The name Rio stands for R input/output, because it enables you to use R as a filter on the command line. You simply pipe CSV data into Rio and you specify the R commands that you want to run on it. Let’s perform the same task as before, but now using Rio:
$< data/tips.csv Rio -e 'df$tip / df$bill * 100' | head -n 10 Rio can execute multiple R command that are separated by semicolons. So, if you wanted to add a column called percent to the input data, you could do the following: $ < data/tips.csv Rio -e 'df$percent <- df$tip / df$bill * 100; df' | head These small one-liners are possible because Rio takes care of all the boilerplate. Being able to use the command line for this and capture the power of R into a one-liner is fantastic, especially if you want to keep on working on the command line. Rio assumes that the input data is in CSV format with a header. (By specifying the -n command-line argument Rio does not consider the first row to be the header and creates default column names.) Behind the scenes, Rio writes the piped data to a temporary CSV file and creates a script that: • Import required libraries. • Loads the CSV file as a data frame. • Generates a ggplot2 object if needed (more on this in the next section). • Runs the specified commands. • Prints the result of the last command to standard output. So now, if you wanted to do one or two things to your data set with R, you can specify it as a one-liner, and keep on working on the command line. All the knowledge that you already have about R can now be used from the command line. With Rio, you can even create sophisticated visualizations, as you will see later in this chapter. Rio doesn’t have to be used as a filter, meaning the output doesn’t have to be a in CSV format per se. You can compute $ < data/iris.csv Rio -e 'mean(df$sepal_length)'$ < data/iris.csv Rio -e 'sd(df$sepal_length)'$ < data/iris.csv Rio -e 'sum(df$sepal_length)' If we wanted to compute the five summary statistics, we would do: $ < iris.csv Rio -e 'summary(df$sepal_length)' Min. 1st Qu. Median Mean 3rd Qu. Max. 4.300 5.100 5.800 5.843 6.400 7.900 You can also compute the skewness (symmetry of the distribution) and kurtosis (peakedness of the distribution), but then you need to have the moments package installed: $ #? [echo]
$< data/iris.csv Rio -e 'skewness(df$sepal_length)'
$< data/iris.csv Rio -e 'kurtosis(df$petal_width)'
Correlation between two features:
$< tips.csv Rio -e 'cor(df$bill, df$tip)' 0.6757341 Or a correlation matrix: $ < data/tips.csv csvcut -c bill,tip | Rio -f cor | csvlook
|--------------------+--------------------|
| bill | tip |
|--------------------+--------------------|
| 1 | 0.675734109211365 |
| 0.675734109211365 | 1 |
|--------------------+--------------------|
Note that with the command-line argument -f, we can specify the function to apply to the data frame df. In this case, it is the same as -e cor(df).
You can even create a stem plot (Tukey 1977) using Rio:
$< data/iris.csv Rio -e 'stem(df$sepal_length)'
## 7.4 Creating Visualizations
In this section we’re going to discuss how to create visualizations at the command line. We’ll be looking at two different software packages: gnuplot and ggplot. First, we’ll introduce both packages. Then, we’ll demonstrate how to create several different types of visualizations using both of them.
### 7.4.1 Introducing Gnuplot and Feedgnuplot
The first software package to create visualizations that we’re discussing in this chapter is Gnuplot. Gnuplot has been around since 1986. Despite being rather old, its visualization capabilities are quite extensive. As such, it’s impossible to do it justice. There are other good resources available, including Gnuplot in Action by Janert (2009).
To demonstrate the flexibility (and its archaic notation), consider Example 7.2, which is copied from the Gnuplot website (http://gnuplot.sourceforge.net/demo/histograms.6.gnu).
Example 7.2 (Creating a histogram using Gnuplot)
# set terminal pngcairo transparent enhanced font "arial,10" fontscale 1.0 size
# set output 'histograms.6.png'
set border 3 front linetype -1 linewidth 1.000
set boxwidth 0.75 absolute
set style fill solid 1.00 border lt -1
set grid nopolar
set grid noxtics nomxtics ytics nomytics noztics nomztics \
nox2tics nomx2tics noy2tics nomy2tics nocbtics nomcbtics
set grid layerdefault linetype 0 linewidth 1.000, linetype 0 linewidth 1.000
set key outside right top vertical Left reverse noenhanced autotitles columnhead
set style histogram columnstacked title offset character 0, 0, 0
set datafile missing '-'
set style data histograms
set xtics border in scale 1,0.5 nomirror norotate offset character 0, 0, 0 auto
set xtics norangelimit
set xtics ()
set ytics border in scale 0,0 mirror norotate offset character 0, 0, 0 autojust
set ztics border in scale 0,0 nomirror norotate offset character 0, 0, 0 autoju
set cbtics border in scale 0,0 mirror norotate offset character 0, 0, 0 autojus
set rtics axis in scale 0,0 nomirror norotate offset character 0, 0, 0 autojust
set title "Immigration from Northern Europe\n(columstacked histogram)"
set xlabel "Country of Origin"
set yrange [ 0.00000 : * ] noreverse nowriteback
i = 23
plot 'immigration.dat' using 6 ti col, '' using 12 ti col, '' using 13 ti c
Please note that this is trimmed to 80 characters wide. The above script generates the following image:
Gnuplot is different from most command-line tools we’ve been using for two reasons. First, it uses a script instead of command-line arguments. Second, the output is always written to a file and not printed to standard output.
One great advantage of Gnuplot being around for so long, and the main reason we’ve included it in this book, is that it’s able to produce visualizations for the command line. That is, it’s able to print its output to the terminal without the need for a graphical user interface (GUI). Even then, you would need to set up a script.
Luckily, there is a command-line tool called feedgnuplot (Kogan 2014), which can help us with setting up a script for Gnuplot. feedgnuplot is entirely configurable through command-line arguments. Plus, it reads from standard input. After we have introduced ggplot2, we’re going to create a few visualizations using feedgnuplot.
One great feature of feedgnuplot that we would like to mention here, is that it allows you to plot streaming data. The following is a snapshot of a continuously updated plot based on random input data:
$while true; do echo$RANDOM; done | sample -d 10 | feedgnuplot --stream \
> --terminal 'dumb 80,25' --lines --xlen 10
30000 ++-----+------------+-------------+-------------+------------+-----++
| + * + + + |
| : ** : ******* : *
25000 ++.................*.*..........................*.....*............+*
| : *: * : *: * : *|
| : *: * : *: * : *|
| : * : * : * : * : * |
20000 ++................*....*......................*.........*.........*++
| : * : * : * : * : * |
| : * : * : * : * : * |
15000 ++....**.........*.......*..................*............*.......*.++
| **** :* * : * : * : * : * |
** :* * : * **** * : * : * |
10000 ++.......*......*.........*....**....*.....*..............*.....*..++
| : * * : * ** : * * : * : * |
| : * * : ** : ** * : * : * |
| : * * : : * : * : * |
5000 ++..........*..*.........................*..................*.*....++
| : * * : : : *:* |
| + ** + + + * |
0 ++-----+------*-----+-------------+-------------+------------*-----++
2350 2352 2354 2356 2358
### 7.4.2 Introducing ggplot2
A more modern software package for creating visualizations is ggplot, which is an implementation of the grammar of graphics in R (Wickham 2009).
Thanks to the grammar of graphics and using sensible defaults, ggplot2 commands tend to be very short and expressive. When used through Rio, this is a very convenient way of creating visualizations from the command line.
To demonstrate it’s expressiveness, we’ll recreate the histogram plot generated above by gnuplot, with the help of Rio. Because Rio expects the data set to be comma-delimited, and because ggplot2 expects the data in long format, we first need to scrub and transform the data a little bit:
$< data/immigration.dat sed -re '/^#/d;s/\t/,/g;s/,-,/,0,/g;s/Region/'\ > 'Period/' | tee data/immigration.csv | head | cut -c1-80 Period,Austria,Hungary,Belgium,Czechoslovakia,Denmark,France,Germany,Greece,Irel 1891-1900,234081,181288,18167,0,50231,30770,505152,15979,388416,651893,26758,950 1901-1910,668209,808511,41635,0,65285,73379,341498,167519,339065,2045877,48262,1 1911-1920,453649,442693,33746,3426,41983,61897,143945,184201,146181,1109524,4371 1921-1930,32868,30680,15846,102194,32430,49610,412202,51084,211234,455315,26948, 1931-1940,3563,7861,4817,14393,2559,12623,144058,9119,10973,68028,7150,4740,3960 1941-1950,24860,3469,12189,8347,5393,38809,226578,8973,19789,57661,14860,10100,1 1951-1960,67106,36637,18575,918,10984,51121,477765,47608,43362,185491,52277,2293 1961-1970,20621,5401,9192,3273,9201,45237,190796,85969,32966,214111,30606,15484, The sed expression consists of four parts, delimited by semicolons: 1. Remove lines that start with #. 2. Convert tabs to commas. 3. Change dashes (missing values) into zero’s. 4. Change the feature name Region into Period. We then select only the columns that matter using csvcut and subsequently convert the data from a wide format to a long one using the Rio and the melt function which part of the R package reshape2: $ < data/immigration.csv csvcut -c Period,Denmark,Netherlands,Norway,\
> Sweden | Rio -re 'melt(df, id="Period", variable.name="Country", '\
> 'value.name="Count")' | tee data/immigration-long.csv | head | csvlook
|------------+-------------+--------|
| Period | Country | Count |
|------------+-------------+--------|
| 1891-1900 | Denmark | 50231 |
| 1901-1910 | Denmark | 65285 |
| 1911-1920 | Denmark | 41983 |
| 1921-1930 | Denmark | 32430 |
| 1931-1940 | Denmark | 2559 |
| 1941-1950 | Denmark | 5393 |
| 1951-1960 | Denmark | 10984 |
| 1961-1970 | Denmark | 9201 |
| 1891-1900 | Netherlands | 26758 |
|------------+-------------+--------|
Now, we can use Rio again, but then with an expression that builds up a ggplot2 visualization:
$< data/immigration-long.csv Rio -ge 'g + geom_bar(aes(Country, Count,'\ > ' fill=Period), stat="identity") + scale_fill_brewer(palette="Set1") '\ > '+ labs(x="Country of origin", y="Immigration by decade", title='\ > '"Immigration from Northern Europe\n(columstacked histogram)")' | display The -g command-line argument indicates that Rio should load the ggplot2 package. The output is an image in PNG format. You can either view the PNG image via display, which is part of ImageMagick (LLC 2009) or you can redirect the output to a PNG file. If you’re on a remote terminal then you probably won’t be able to see any graphics. A workaround for this is to start a webserver from a particular directory: $ python -m SimpleHTTPServer 8000
Make sure that you have access to the port (8000 in this case). If you save the PNG image to the directory from which the webserver was launched, then you can access the image from your browser at http://localhost:8000/file.png.
### 7.4.3 Histograms
Using Rio:
$< data/tips.csv Rio -ge 'g+geom_histogram(aes(bill))' | display Using feedgnuplot: < data/tips.csv csvcut -c bill | feedgnuplot --terminal 'dumb 80,25' \ --histogram 0 --with boxes --ymin 0 --binwidth 1.5 --unset grid --exit 25 ++----+------+-----+--***-+-----+------+-----+------+-----+------+----++ + + + +*** * + + + + + + + + | * * * | | *** * * * | 20 ++ * * * * * ++ | **** * * * * | | * ** *** * * *** | | * ** * * * * * * | 15 ++ * ** * * * * * * ++ | * ** * * * * * * | | * ** * * * * * * | | * ** * * * * * * *** | 10 ++ * ** * * * *** *** * ++ | * ** * * * * * * * * | | *** ** * * * * * * * ***** *** | | * * ** * * * * * * * * * *** * | 5 ++ *** * ** * * * * * * * * * * * * *** ++ | * * * ** * * * * * * * * * * * * *** * | | * * * ** * * * * * * * * * * * *** * ******** *** *** | + ***+*** * * ** *+* * * * * * * * * *+* * *+** * *+* ***+* * * *** + 0 ++-***+***********************************************-*****-***-***--++ 0 5 10 15 20 25 30 35 40 45 50 55 ### 7.4.4 Bar Plots Using Rio: $ < data/tips.csv Rio -ge 'g+geom_bar(aes(factor(size)))' | display
Using feedgnuplot:
$< data/tips.csv | csvcut -c size | header -d | feedgnuplot --terminal \ > 'dumb 80,25' --histogram 0 --with boxes --unset grid --exit 160 ++--------+----***********----+---------+---------+---------+--------++ + + * + * + + + + + 140 ++ * * ++ | * * | | * * | 120 ++ * * ++ | * * | 100 ++ * * ++ | * * | | * * | 80 ++ * * ++ | * * | 60 ++ * * ++ | * * | | * * | 40 ++ * ********************* ++ | * * * * | 20 ++ * * * * ++ | * * * * | + *********** + * + * + ********************* + 0 ++---*************************************************************---++ 0 1 2 3 4 5 6 7 ### 7.4.5 Density Plots Using Rio: $ < data/tips.csv Rio -ge 'g+geom_density(aes(tip / bill * 100, fill=sex), '\
> 'alpha=0.3) + xlab("percent")' | display
Since feedgnuplot cannot generate density plots, it’s best to just generate a histogram.
### 7.4.6 Box Plots
Using Rio:
$< data/tips.csv Rio -ge 'g+geom_boxplot(aes(time, bill))' | display Drawing a box plot is unfortunately not possible with feedgnuplot. ### 7.4.7 Scatter Plots Using Rio: $ < data/tips.csv Rio -ge 'g+geom_point(aes(bill, tip, color=time))' | display
Using feedgnuplot:
< data/tips.csv csvcut -c bill,tip | tr , ' ' | header -d | feedgnuplot \
--terminal 'dumb 80,25' --points --domain --unset grid --exit --style 'pt' '14'
10 ++----+------+-----+------+-----+------+-----+------+-----+------+A---++
+ + + + + + + + + + + +
9 ++ A ++
| |
8 ++ ++
| A |
| |
7 ++ A A ++
| A A |
6 ++ A A A ++
| A A |
5 ++ A A A A A AA A AA A A A ++
| A A A A |
4 ++ A A AAAA AAA A A A A A ++
| A AAAAA AAA AA A A |
| A AAAAAAA AA A A AA A AA |
3 ++ A AAAAAAAAAAA A A AA AA A ++
| AAAAAAA AA A A A A A |
2 ++ AA AAAAAAAAA A A A AA A A A ++
+ + AAAAAAAA +A AA+ + A + + + + + +
1 ++--A-+A-A---+--AA-+--A---+-----+------+--A--+------+-----+------+----++
0 5 10 15 20 25 30 35 40 45 50 55
### 7.4.8 Line Graphs
$< data/immigration-long.csv Rio -ge 'g+geom_line(aes(x=Period, '\ > 'y=Count, group=Country, color=Country)) + theme(axis.text.x = '\ > 'element_text(angle = -45, hjust = 0))' | display $ < data/immigration.csv | csvcut -c Period,Denmark,Netherlands,Norway,Sweden |
> header -d | tr , ' ' | feedgnuplot --terminal 'dumb 80,25' --lines \
> --autolegend --domain --legend 0 "Denmark" --legend 1 "Netherlands" \
> --legend 2 "Norway" --legend 3 "Sweden" --xlabel "Period" --unset grid --exit
250000 ++-----%%%-------+-------+--------+-------+-------+--------+------++
+ %%%% + % + + + + + Denmark+****** +
|%% % Netherlands ###### |
| % Norway $$| 200000 ++ % Sweden %%%%%%++ | % | | % | | % | 150000 ++$$ $% ++ |$ $% | |$ $% | 100000 ++$ $% ++ |$ $%%%%%%%%%% | |$ % |
| *********** $$% | 50000 +**** #########** %% ####### ++ | #### ********$$% ### ## |
|## ******## ### |
+ + + + **###########************* +
0 ++------+--------+-------+--------*************---+--------+------++
1890 1900 1910 1920 1930 1940 1950 1960 1970
Period
### 7.4.9 Summary
Both Rio with ggplot2 and feedgnuplot with Gnuplot have their advantages. The plots generated by Rio are obviously of much higher quality. It offers a consistent syntax that lends itself well for the command line. The only down-side would be that the output is not viewable from the command line. This is where feedgnuplot may come in handy. Each plot has roughly the same command-line arguments. As such, it would be straightforward to create a small Bash script that would make generating plots from and for the command line even easier. After all, with the command line having such a low resolution, we don’t need a lot of flexibility. | 2018-09-22 13:22:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32608067989349365, "perplexity": 4635.290010814173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158429.55/warc/CC-MAIN-20180922123228-20180922143628-00196.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1-common-core-15th-edition/chapter-10-radical-expressions-and-equations-chapter-test-page-657/2 | Chapter 10 - Radical Expressions and Equations - Chapter Test - Page 657: 2
c=37
Work Step by Step
A=12,B=35.Find C: $a^{2}$+$b^{2}$=$c^{2}$ $12^{2}$+$35^{2}$=$c^{2}$ 144+1225=$c^{2}$ 1369=$c^{2}$ $\sqrt {1369}$=$\sqrt c{^2}$ c=37
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback. | 2020-11-28 22:19:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6979590654373169, "perplexity": 7014.696721473664}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195929.39/warc/CC-MAIN-20201128214643-20201129004643-00061.warc.gz"} |
https://tex.stackexchange.com/questions/505474/moving-footnote-text-within-source-without-moving-labels | Moving footnote text within source without moving labels
I often write documents with fairly lengthy discursive footnotes to discuss tangents or secondary issues without interrupting the flow of the main text. However, this can make it difficult to edit, since the body of the text is broken by sometimes-quite-long notes. What I would like is a way to move the text of the footnote (while still keeping it within the same file) away from the footnote mark without breaking the default placement rules for footnotes.
I have tried using \footnotemark and \footnotetext, like so:
Paragraph containing a footnote\footnotemark and some more text afterward.
\footnotetext{A very long footnote that now doesn't have to appear in the middle of a
sentence in my source file.}
However, this causes my footnote text to sometimes appear on a page following the footnote mark rather than always appearing on the same page as the mark. (Also it gets a bit messy when I have multiple notes in a paragraph.)
Is there a way to separate the footnote text from the location of the footnote mark within the source code while having the output come out as though the \footnote{was in the middle of the text}? (And preferably allowing for multiple footnotes per paragraph?) Apologies if there is an obvious answer; I've been searching around for a while now I and haven't found anything.
MWE (edited to wrap lines):
\documentclass{article}
\begin{document}
A paragraph that has a footnote\footnote{Like this one here but even longer and more
intrusive so that it makes reading and editing the text very difficult. It probably goes
on and on and might even have \\ multiple \\ long \\ paragraphs inside of it.} in the
middle of it. The footnote is placed correctly, but the source is difficult to read and
edit.
A paragraph with a footnote in the middle of it\footnotemark{} that then continues onto a
new page. The source is easy to read and edit, but the footnote lands in the wrong place.
\newpage
\footnotetext{A footnote that I would like to have appear on the same page as the
footnote mark, but without having the footnote text in the middle of my paragraph in the
source.}
\end{document}
• Here's what I do: Put all footnotes at the end of a sentence. Use a new line for each sentence. For any footnotes longer than a single line, use the Footnote environment from my package, semantic-markup. This way the note is in its own space and can be moved and edited easily. It does still disrupt the text, but this is a good reminder to me that the footnote will disrupt the text for the reader and that I should try to avoid long, discursive footnotes! – musarithmia Aug 24 '19 at 2:23
A simple solution without packages is to define the footnote somewhere outside of the paragraph in a command and use this command as argument to \footnote. The disadvantage is that it is required to define the command before it is used, otherwise you will get an undefined control sequence error. This is also an issue when using the sepfootnotes package as mentioned in the other answer, although in that case there is only a warning instead of an error (but the footnote is not printed if it is defined after using it).
In order to define a footnote after use it can be stored in the .aux file, using for example the globalvals package. This requires two compilations of the .tex file, similar to labels and references.
MWE:
\documentclass{article}
\usepackage{globalvals}
\begin{document}
% simple \def without packages, needs to be defined before use
\def\firstlongnote{Like this one here but even longer and more
intrusive so that it makes reading and editing the text very difficult. It probably goes
on and on and might even have \\ multiple \\ long \\ paragraphs inside of it.}
A paragraph that has a footnote\footnote{\firstlongnote} in the
middle of it. The footnote is placed correctly, but the source is difficult to read and
edit.
% using the globalvals package, use the footnote first and define later
A paragraph with a footnote in the middle of it\footnote{\useVal{longnote}} that then continues onto a
new page. The source is easy to read and edit, but the footnote lands in the wrong place.
\defVal{longnote}{A footnote that I would like to have appear on the same page as the
footnote mark, but without having the footnote text in the middle of my paragraph in the
source.}
\end{document}
Result:
• Thank you! This is exactly what I was looking for -- I much prefer being able to put the notes after the text; it makes the source file very readable. – Rusty Sep 6 '19 at 16:52
It seem that the package sepfootnotes does exactly what your are searching for.
According to its documentation, the package separates footnotes (or endnotes) input from their usage. It provides one command to define the content of a note, and another command to insert the note in the document.
You may group footnote definition any particular order, either in the preamble, at the beginning of chapters or sections, before a paragraph or even in a separate file.
In addition, you may interchangeably use sepfootnotes and standard \footnote in the same document. The package is uploaded to CTAN.
If sepfootnotes does not suites you, it is possible to make the source more readable by using formatting, in particular if you have an advanced editor.
A paragraph that has a footnote\footnote{%
Like this one here but even longer and more intrusive
so that it makes reading and editing the text very difficult.
It probably goes on and on and might even have \\ multiple
\\ long \\ paragraphs inside of it.}%
\ in the middle of it. The footnote is placed correctly, but the source is | 2021-01-16 03:40:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9801143407821655, "perplexity": 1040.6829535369454}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703499999.6/warc/CC-MAIN-20210116014637-20210116044637-00037.warc.gz"} |
https://artsocial.info/and-relationship/elastic-constants-and-their-relationship-pdf-reader.php | # Elastic constants and their relationship pdf reader
### Derive the relationship between the Elastic constants, i. e. E, K and G.
The elastic constants at room temperature exhibit a positive deviation with Hr H/μ versus χ exhibits a remarkable similarity to a Tg versus χ relationship. Determination of relation between elastic constants youtube. Other properties that are closely related to its elastic characteristics and depend also upon. For further reading about strain and determination of relation between elastic. need for a review of the methodology used to compute elastic wave speeds, . absence of reliable K;, and p';, values, one may app~oximate them as KF~' and p; ' or If q is constant, a relation among these is implied, leading to the second.
### Elasticity – The Physics Hypertextbook
Она вдруг начала светиться под кончиком пальца. Электричество. Окрыленная надеждой, Сьюзан нажала на кнопку. | 2019-12-10 06:12:45 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170633554458618, "perplexity": 5975.876708886447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525821.56/warc/CC-MAIN-20191210041836-20191210065836-00128.warc.gz"} |
https://math.stackexchange.com/questions/1410660/does-the-weyl-group-act-on-its-lie-algebra | # Does the Weyl group act on its Lie algebra?
I am trying to prove something about the action of a particular Lie algebra on a particular representation (it's the starred claim on page 7 of this paper for those interested). My friend showed me a sketch of a proof that involved acting on an element of the Lie algebra by an element of its Weyl group $W$. However, I was not even aware that the Weyl group had a reasonable action on its Lie algebra.
I am aware that one can use the Killing form to identify a Cartan subalgebra $\mathfrak h$ with its dual $\mathfrak h^*$ and thus obtain an action of $W$ on $\mathfrak h$, but how do we extend this action to all of $\mathfrak g$?
In my case, my Lie algebra is a Kac-Moody algebra, but it behaves so similarly to a complex semisimple Lie algebra that I believe answering this question assuming that $\mathfrak g$ is complex semisimple will be enough.
Additional related question: Is there a reasonable sense in which the Weyl group acts on a representation of $\mathfrak g$?
• The Weyl group acts by automorphisms on both $\mathfrak{g}$ and its integrable representations (this is true in the generality of symmetrizable Kac-Moody algebras). For $\alpha$ a root, $E=E_\alpha$, $F=F_\alpha$ root vectors, I believe the formula for the reflection $s_\alpha$ is $S_\alpha=\mathrm{exp}(F)\mathrm{exp}(-E)\mathrm{exp}(F)$. – David Hill Aug 26 '15 at 20:05
• There are plenty of results that are proved in detail in the complex semisimple case followed by a short justification for it extending pretty much verbatim to the Kac-Moody case. – Matt Samuel Aug 27 '15 at 2:14 | 2019-07-22 21:05:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8786861896514893, "perplexity": 141.05347051576152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528220.95/warc/CC-MAIN-20190722201122-20190722223122-00207.warc.gz"} |
http://ferrao.org/tags/aeb464-dot-product-formula-3d | The three vectors above form the triangle AOB and note that the length of each side is nothing more than the magnitude of the vector forming that side. a = ( a 1, a 2, a 3) = a 1 i + a 2 j + a 3 k b = ( b 1, b 2, b 3) = b 1 i + b 2 j + b 3 k. [Notation] From physics we know W=Fd where F is the magnitude So, be careful with notation and make sure you are finding the correct projection. We’ll start with the three vectors, $$\vec u = \left\langle {{u_1},{u_2}, \ldots ,{u_n}} \right\rangle$$, $$\vec v = \left\langle {{v_1},{v_2}, \ldots ,{v_n}} \right\rangle$$ and $$\vec w = \left\langle {{w_1},{w_2}, \ldots ,{w_n}} \right\rangle$$ and yes we did mean for these to each have $$n$$ components. in the same direction is given by, An equivalent definition of the dot product is. the particle moves. Solution: Example (calculation in three dimensions): . The second step is to calculate the dot product between two three-dimensional vectors. [Vector Calculus Home] [Math And, the vector projection is merely This vector will form angles with the $$x$$-axis (a ), the $$y$$-axis (b ), and the $$z$$-axis (g ). We will discuss the dot product here. from one point to another. The scalar However, this relation is only valid when the force acts in the direction It turns out there are two; one type produces a scalar (the dot product) There is a natural way of adding The formula from this theorem is often used not to compute a dot product but instead to find the angle between two vectors. Purchase through DotProduct for custom calibration (and magnetic mounting / USB-C compatibility) or through Intel® directly for the lowest initial investment (USB-A cable provided). The symbol for dot product is represented by a heavy dot (.) Thus, two non-zero below) and |c| denotes the magnitude of the vector c. This Here it is, Note that we also need to be very careful with notation here. a $$\vec v\centerdot \vec w = 5 - 16 = - 11$$, b $$\vec a\centerdot \vec b = 0 + 9 - 7 = 2$$. Once again using $$\eqref{eq:eq2}$$ this would mean that one of the following would have to be true. while the other produces a vector (the So, to get the projection of $$\vec b$$ onto $$\vec a$$ we drop straight down from the end of $$\vec b$$until we hit (and form a right angle) with the line that is parallel to $$\vec a$$. is |b|cos(theta) (where theta is the angle between a and In this case, the work is the product of the The projection of $$\vec a$$ onto $$\vec b$$is given by. The dot product of two vectors vectors and multiplying vectors by scalars. The dot product gives us a very nice method for determining if two vectors are perpendicular and it will give another method for determining when two vectors are parallel. Let the force vectors a= and There is a nice formula for finding the projection of $$\vec b$$ onto $$\vec a$$. This vector is parallel to $$\vec b$$, while $${{\mathop{\rm proj}\nolimits} _{\vec a}}\vec b$$ is parallel to $$\vec a$$. This in turn however means that we must have $${v_i} = 0$$ and so we must have had $$\vec v = \vec 0$$. the dot product is, →a ⋅ →b = a1b1 + a2b2 + a3b3 Sometimes the dot product is called the scalar product. [References], Copyright © 1996 Department Thus, mathematically, the scalar projection of b onto a First suppose that $$\theta$$ is the angle between $$\vec a$$ and $$\vec b$$ such that $$0 \le \theta \le \pi$$ as shown in the image below. Start scanning now with a total hardware investment as low as \$149! We will need the magnitude of the vector. First get the dot product to see if they are orthogonal. The dot product of two vectors a= and b= is given by An equivalent definition of the dot product is where theta is the angle between the two vectors (see the figure below) and |c| denotes the magnitude of the vector c. This second definition is useful for finding the angle theta between the two vectors. The direction cosines and angles are then, You appear to be on a device with a "narrow" screen width (, $$$\vec a\centerdot \vec b = {a_1}{b_1} + {a_2}{b_2} + {a_3}{b_3}\label{eq:eq1}$$$, \begin{align*}& \vec u\centerdot \left( {\vec v + \vec w} \right) = \vec u\centerdot \vec v + \vec u\centerdot \vec w & \hspace{0.75in} & \left( {c\vec v} \right)\centerdot \vec w = \vec v\centerdot \left( {c\vec w} \right) = c\left( {\vec v\centerdot \vec w} \right)\\ & \vec v\centerdot \vec w = \vec w\centerdot \vec v& \hspace{0.75in} & \vec v\centerdot \vec 0 = 0\\ & \vec v\centerdot \vec v = {\left\| {\vec v} \right\|^2} & \hspace{0.75in} & {\mbox{If }}\vec v\centerdot \vec v = 0\,\,\,{\mbox{then}}\,\,\,\vec v = \vec 0\end{align*}, $$$\vec a\centerdot \vec b = \left\| {\vec a} \right\|\,\,\left\| {\vec b} \right\|\cos \theta \label{eq:eq2}$$$, ${{\mathop{\rm proj}\nolimits} _{\vec a}}\vec b = \frac{{\vec a\centerdot \vec b}}{{{{\left\| {\vec a} \right\|}^2}}}\vec a$, $\cos \alpha = \frac{{\vec a\centerdot \vec i}}{{\left\| {\vec a} \right\|}} = \frac{{{a_1}}}{{\left\| {\vec a} \right\|}}\hspace{0.25in}\,\,\,\,\,\cos \beta = \frac{{\vec a\centerdot \vec j}}{{\left\| {\vec a} \right\|}} = \frac{{{a_2}}}{{\left\| {\vec a} \right\|}}\hspace{0.25in}\hspace{0.25in}\cos \gamma = \frac{{\vec a\centerdot \vec k}}{{\left\| {\vec a} \right\|}} = \frac{{{a_3}}}{{\left\| {\vec a} \right\|}}$, Derivatives of Exponential and Logarithm Functions, L'Hospital's Rule and Indeterminate Forms, Substitution Rule for Indefinite Integrals, Volumes of Solids of Revolution / Method of Rings, Volumes of Solids of Revolution/Method of Cylinders, Parametric Equations and Polar Coordinates, Gradient Vector, Tangent Planes and Normal Lines, Triple Integrals in Cylindrical Coordinates, Triple Integrals in Spherical Coordinates, Linear Homogeneous Differential Equations, Periodic Functions & Orthogonal Functions, Heat Equation with Non-Zero Temperature Boundaries, Absolute Value Equations and Inequalities, $$\vec v = 5\vec i - 8\vec j,\,\,\vec w = \vec i + 2\vec j$$, $$\vec a = \left\langle {0,3, - 7} \right\rangle ,\,\,\vec b = \left\langle {2,3,1} \right\rangle$$, $$\vec a = \left\langle {6, - 2, - 1} \right\rangle ,\,\,\vec b = \left\langle {2,5,2} \right\rangle$$, $$\displaystyle \vec u = 2\vec i - \vec j,\,\,\vec v = - \frac{1}{2}\vec i + \frac{1}{4}\vec j$$. From $$\eqref{eq:eq2}$$ this tells us that if two vectors are orthogonal then. second definition is useful for finding the angle theta between | 2021-09-27 22:50:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9453961849212646, "perplexity": 444.21759065426124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00472.warc.gz"} |
https://de.mathworks.com/help/physmod/sps/ref/ee_getefficiency.html | Documentation
# ee_getEfficiency
Calculate efficiency as function of dissipated power losses
## Description
example
efficiency = ee_getEfficiency('loadIdentifier',node) returns the efficiency of a circuit based on the data extracted from a Simscape™ logging node.
Before you call this function, you must have the simulation log variable in your current workspace. Create the simulation log variable by simulating the model with data logging turned on, or load a previously saved variable from a file. If node is the name of the simulation log variable, then the table contains the data for all semiconductor blocks in the model. If node is the name of a node in the simulation data tree, then the table contains the data only for the blocks within that node.
Checking efficiency allows you to determine if circuit components are operating within their requirements. All blocks in the Semiconductor Devices library, as well as some other blocks, have an internal variable called power_dissipated, which represents the instantaneous power dissipated by the block. This instantaneous dissipated power includes only the real power (not the reactive or apparent power) that the block dissipates. When you log simulation data, the time-value series for this variable represents the power dissipated by the block over time. You can view and plot this data using the Simscape Results Explorer. The ee_getPowerLossTimeSeries function also allows you to access this data.
The ee_getEfficiency function calculates the efficiency of the circuit based on the losses for blocks that have a power_dissipated variable and that you identify as a load block. The equation for efficiency is
$Eff=100\cdot \frac{{P}_{load}}{{P}_{loss}+{P}_{load}},$
where:
• Eff is the efficiency of the circuit.
• Pload is the output power, that is, the power dissipated by load blocks.
• Ploss is the power dissipated by nonload blocks.
This equation assumes that all loss mechanisms are captured by blocks containing at least one power_dissipated variable. If the model contains any lossy blocks that do not have this variable, the efficiency calculation gives incorrect results.
Some blocks have more than one power_dissipated variable, depending on their configuration. For example, the N-Channel MOSFET block has separate power_dissipated logging nodes for the MOSFET, the gate resistor, and for the source and drain resistors if they have nonzero resistance values. The function sums all these losses to provide the total power loss for the block, averaged over simulation time. The function uses the loss data to calculate the efficiency of the circuit.
example
startTime,endTime)
returns the efficiency of a circuit based on the power_dissipated data extracted from a Simscape logging node within a time interval. startTime and endTime represent the start and end of the time interval for calculating the efficiency. If you omit these two input arguments, the function calculates the efficiency over the whole simulation time.
example
[efficiency,lossesTable] = ee_getEfficiency('loadIdentifier',node) returns the efficiency of a circuit and the power loss contributions of the nonload blocks in a circuit based on the data extracted from a Simscape logging node.
## Examples
collapse all
This example shows how to calculate efficiency based on the power dissipated by blocks in a circuit using the ee_getEfficiency function.
Open the model. At the MATLAB® command prompt, enter:
model = 'ee_converter_dcdc_class_e';
open_system(model)
The load in the model is represented by the R Load resistor. No other blocks with power_dissipated variables contain Load in their names. Therefore, you can use the string Load as the loadIdentifier argument.
If no string at least partially matches the names of all load blocks in your circuit, rename the load blocks using a schema that satisfies the matching criteria for the loadIdentifier argument.
This example model has data logging enabled. Run the simulation and create the simulation log variable.
sim(model)
The simulation log variable simlog_ee_converter_dcdc_class_e is saved in your current workspace.
Calculate efficiency and display the results.
efficiency =
90.0324
This example shows how to calculate efficiency based on the power dissipated for a specific time period using the ee_getEfficiency function.
Open the model. At the MATLAB® command prompt, enter:
model = 'ee_converter_dcdc_class_e';
open_system(model)
The load in the model is represented by the R Load resistor. No other blocks with power_dissipated variables contain Load in their names. Therefore, you can use the string Load as the loadIdentifier argument.
If no string at least partially matches the names of all load blocks in your circuit, rename the load blocks using a schema that satisfies the matching criteria for the loadIdentifier argument.
This example model has data logging enabled. Run the simulation and create the simulation log variable.
sim(model)
The simulation log variable simlog_ee_converter_dcdc_class_e is saved in your current workspace.
The model simulation time (t) is 1.25e-4 seconds. Calculate efficiency for the interval when t is between 1e-4 and 1.25e-4 seconds.
efficiency =
90.4879
This example shows how using the ee_getEfficiency function allows you to calculate both the efficiency of the circuit and the power-loss contributions of the nonload blocks based on the power that they dissipate.
Open the model. At the MATLAB® command prompt, enter:
model = 'ee_converter_dcdc_class_e';
open_system(model)
The load in the model is represented by the R Load resistor. No other blocks with power_dissipated variables contain Load in their names. Therefore, you can use the string Load as the loadIdentifier argument.
If no string at least partially matches the names of all load blocks in your circuit, rename the load blocks using a schema that satisfies the matching criteria for the loadIdentifier argument.
This example model has data logging enabled. Run the simulation and create the simulation log variable.
sim(model)
The simulation log variable simlog_ee_converter_dcdc_class_e is saved in your current workspace.
Calculate the efficiency and power-loss contributions due to dissipated power.
efficiency =
90.0324
lossesTable =
7x2 table
LoggingNode Power
______________________________________________ __________
{'ee_converter_dcdc_class_e.LDMOS' } 3.6584
{'ee_converter_dcdc_class_e.R_Trans.Resistor'} 2.911
{'ee_converter_dcdc_class_e.D2' } 1.9446
{'ee_converter_dcdc_class_e.D1' } 1.8371
{'ee_converter_dcdc_class_e.Cs' } 0.27392
{'ee_converter_dcdc_class_e.Ls' } 0.27098
{'ee_converter_dcdc_class_e.Cout' } 0.00044579
## Input Arguments
collapse all
String that is a complete or partial match for the names of load blocks in the circuit. For example, consider a circuit that contains the four semiconductor blocks shown in the table.
Block TypeN-Channel IGBTN-Channel IGBTDiodeDiode
'Diode'NoNoYesYes
'1'NoYesNoYes
'D'NoNoYesYes
'd'NoYesYesYes
The ee_getEfficiency function returns data just for the three load blocks only when the 'loadIdentifier' is 'd'.
A load-block naming schema that gives you better control over the output of the ee_getEfficiency function is shown in this table.
Block TypeN-Channel IGBTN-Channel IGBTDiodeDiode
'Diode'NoNoYesYes
Data Types: string
Simulation log workspace variable, or a node within this variable, that contains the logged model simulation data, specified as a Node object. You specify the name of the simulation log variable by using the Workspace variable name parameter on the Simscape pane of the Configuration Parameters dialog box. To specify a node within the simulation log variable, provide the complete path to that node through the simulation data tree, starting with the top-level variable name.
If node is the name of the simulation log variable, then the table contains the data for all blocks in the model that contain power_dissipated variables. If node is the name of a node in the simulation data tree, then the table contains the data only for:
• Blocks or variables within that node
• Blocks or variables within subnodes at all levels of the hierarchy beneath that node
Example: simlog.Cell1.MOS1
Start of the time interval for calculating the efficiency, specified as a real number, in seconds. startTime must be greater than or equal to the simulation Start time and less than endTime.
Data Types: double
End of the time interval for calculating the efficiency, specified as a real number, in seconds. endTime must be greater than startTime and less than or equal to the simulation Stop time.
Data Types: double
## Output Arguments
collapse all
Efficiency of the circuit based on data extracted from a Simscape logging node.
Dissipated power losses for each nonload block, returned as a table. The first column lists logging nodes for all blocks that have at least one power_dissipated variable. The second column lists the corresponding losses in watts.
## Assumptions
• The output power equals the total power dissipated by blocks that you identify as load blocks.
• The input power equals the output power plus the total power dissipated by blocks that you do not identify as load blocks.
• The power_dissipated variables capture all loss contributions. | 2019-10-16 16:08:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48473069071769714, "perplexity": 1925.4807950341255}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00358.warc.gz"} |
https://hiltmon.com/blog/2014/05/26/omnifocus-my-way/ | # Omnifocus My Way
I’ve been using OmniFocus as my primary tool for task management for years, yet I never read the GTD book. How I chose OmniFocus is a topic for another day, how I use it my way is the topic for today. Maybe some of the alternative ways I use it can improve the way you do too.
### My World
I’m a software designer, developer, project manager and part-time writer. I work at a Hedge Fund by day, run my consulting firm at night, write this blog and have a personal life. As of writing this post, I have 260 active line items in OmniFocus. That is a lot of things to stay on top of.
But with a bit of structure, a taste of discipline and a comfortable workflow, OmniFocus helps me stay on top of all of these things.
### Area Folders
I divide up my life by my world. Which means the first level of all my OmniFocus tasks is the in the Projects tab. I have top-level folders for each of the four areas of my life I manage in OmniFocus: Personal, Maritime (work), Noverse (consulting) and Hiltmon (here).
This breakdown allows me to separate and focus on the tasks in the area of my life I am in now. As of writing this, I am focussed on the Hiltmon area. As soon as I post this, I will move focus to my Personal area. I use the OmniFocus Focus button a lot.
### Projects
Given that I have a Project Management background, to me almost everything I do is part of a larger project. Whether it’s the multitude of applications to design at work, my consulting clients to support, or bigger ticket items in personal space, they are all projects.
I create all of the projects that I am involved in within each area folder. In my mind, all tasks are assigned to a project. Larger tasks are broken down into sub-tasks, but I do not go into too much detail with these. That happens when I am doing the task.
I also create a few ‘special’ tasks in each project:
• The Project Ideas task is used to capture (as sub-tasks) thoughts and ideas on what could be done but is not planned, yet. I do not want to lose any ideas no matter the source while I am working on the project.
• The Project On-Hold task is for sub-tasks that have been placed on hold, but we may still get back to.
• The Project Later task for sub-tasks that I have intentionally deferred until later versions of that project.
These special tasks often fill and empty during project reviews and meetings as we reschedule what we want to do on each project. I also sometimes have a general Ideas, On-Hold and Later project in each area for nonexistent project related tasks.
Finally, each area folder also has a Single Actions List for those random once off tasks and a Routines Project for the routine tasks that happen over and over again. Tasks from monthly reports to backup reminders to billing cycles appear in the Routines Project for each area.
### Contexts
The way I understand it, contexts are where a task needs to be performed or with whom. Since almost all that I do can be done on my laptop, most of my tasks can be done anywhere. Traditional GTD contexts make no sense to me.
Instead, I use contexts as tags, for prioritization and for assignments.
My top five contexts (after No Context) are my prioritizers. I had tried priority levels and colors before, but it added complexity. What’s the difference between a priority 3 and a priority 4 task when the levels are arbitrary?
I use a UI / NUI / UNI / NUNI / Chase / None model for prioritization as follows:
• UI: Urgent / Important tasks are the ones I need to get done first for each project, in any order.
• NUI: Not Urgent / Important tasks come next, often because they are dependencies on other tasks.
• UNI: Urgent / Not Important tasks follow these. To me, they are the tasks that I should be getting on to next once I have the important stuff completed
• NUNI: Not Urgent / Not Important tasks seem like the rest, but they are not. I treat these as tasks that have an elevated priority over the No Context tasks
• Chase tasks are special tasks to remind me to chase people for information or deliverables. They sit outside the regular prioritization scale because I review them independently in my workflow (and have an OmniFocus perspective for these)
• All other tasks have No Context. They are not prioritized as yet and may remain so for some time.
I have left the default contexts (and a few old ones) in place for now, but plan on removing them soon.
I have a special case for task assignments. I use the People context. For example, if I assign a task to Londo Mollari, I set the context to his name. When I sit with people I have assigned tasks to, I can look at just that context to see and track what they are doing. Note that this does not indicate who I am doing a task with, the traditional context interpretation, that information appears in the description.
### People
According to the manual, contexts are also used for indicating who you are doing or discussing a task with. In my case, since having a person’s name in the context implies assignment, I need another way.
That way comes from old habits. I use a person’s initials or a company’s short name in the task description itself, and then use OmniFocus' search to find and filter tasks by these when needed.
For example, if I need something from Satai Delenn, I will create a task like “Get TPM report from (SD)”. If I assigned the TPM report to her it would have her name as context instead. A search for SD) would find all the tasks I need to do or discuss with her.
The reason I bracket the initials of a person is because I use the same TextExpander snippets for people as I do for key marks (see the end of Assembly Notes). It’s really just a habit now.
Another example use of initials would be to track requests from people. If Satai Delenn had asked for feature X, I would create a task in that project called “Implement Feature X (⨍/SD)” to mark it that the feature request came from her. The same search above would remind me to update her on that feature’s progress.
### Dates
I put in due dates for tasks that have deadlines, whether I assign them or not. Most of my tasks do not have these though, as they are either dependent on other tasks or not yet prioritized. As of OmniFocus 2, I have also started adding defer dates for tasks I do not want to even look at until then.
I used to put in due dates on all my tasks, just as a project manager puts in task end dates into a Gantt chart. But since OmniFocus does not have task dependencies, delays in one task do not affect another. Which meant that I often had to bulk-change due dates as things changed. Which also meant that due dates lost their meaning and just became a chore to maintain. Nowadays, I only assign due dates to tasks where an explicit deadline is set and there are no dependencies.
### Capture
I rarely use the Inbox. Instead, I capture tasks as I hear about them either into OmniFocus directly if the window is visible, or using the Quick Entry box. Since I know what the project is at the time of capture, I find it easier to tab to the project field and fill that in while I am capturing (or click on the project first).
I did try for a while to capture to inbox as the manual says, but my inbox got so messy that I stopped. I still have to clean up later, but at least I can do it project by project.
### Notes
I also use the notes feature extensively and in many different ways, for example:
• For Asana tasks, I paste in the Asana link so I can get back to it later
• For emailed tasks, I paste in the email body
• For chase and follow-up tasks, I use notes to track when I chased or followed up (using a TextExpander snippet to put in the date)
• For assigned tasks, I use notes to track discussion on that task with the assignee until it’s completed
• And for other tasks, I often add ideas, thoughts or an explanation in case the task description was too cryptic
### Review
I did not perform a regular review in the OmniFocus 1 days, but since starting the OmniFocus 2 beta with it’s new Review perspective, I have come to enjoy that too.
For me, the review time is a weekly task to go though each project and make sure that nothing has been missed. I use review time to tidy up badly written task descriptions, close those that I completed, re-prioritize tasks, add or change due dates and jot down notes as I think about each project. The new Review perspective makes this process easy.
### Daily Flow
Before OmniFocus 2, my daily flow was to start in the project I wanted to work on and get going with the highest priority tasks. This worked well when I only had a few projects on the go, but I have a lot more on now. I need to stay on top of all of them, not just the current one.
Since OmniFocus 2, my daily flow starts in the excellent Forecast view. I can see the overdue tasks that I have missed, tasks due today, upcoming deadlines, and my calendar so I can figure what I can get done. I flag the tasks I think I need to tackle. I then visit all my ‘hot’ projects to see if there are tasks I need to tackle today for them too (since most of my tasks do not have due dates) and flag those.
Finally, before kicking off the day, I click to my Chase perspective to see who I need to call to chase up today.
The rest of the day is spent working, clearing flags, capturing new tasks as they happen and glancing at the Flagged perspective to see what else I need to be doing. This keeps me on track.
### I do it my way
I do not use OmniFocus the GTD way (at least I do not think I do given that I have not read the book). I do use OmniFocus the way that works best for me.
OmniFocus and this my way workflow ensures that I never forget a task, a commitment or an action, mine and others. It keeps me focussed on what I need to be doing now. It reminds me what to do next. It helps build an agenda for what to discuss with people, and what was talked about before. It helps me know what was done and why.
Without it, I could not manage the myriad of projects, tasks, actions, commitments and reminders I deal with every day. And to make things even better, OmniFocus 2 evolved towards my way and added ease of use and features where my way needed it the most. I am sure that for many of you of you, the GTD way works well. For others, you have your own ways to use OmniFocus. This was mine.
Follow the author as @hiltmon on Twitter and @hiltmon on App.Net. Mute #xpost on one. | 2018-09-20 03:33:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3793331980705261, "perplexity": 1575.1165928804242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267156376.8/warc/CC-MAIN-20180920020606-20180920040606-00440.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2013.18.1291 | # American Institute of Mathematical Sciences
July 2013, 18(5): 1291-1304. doi: 10.3934/dcdsb.2013.18.1291
## Traveling wave solutions for a diffusive sis epidemic model
1 Department of Mathematics, Shanghai Normal University, Shanghai, 200234, China 2 Department of Mathematical Science, University of Alabama in Huntsville, Huntsville, Alabama 35899, United States
Received August 2012 Revised February 2013 Published March 2013
In this paper, we study the traveling wave solutions of an SIS reaction-diffusion epidemic model. The techniques of qualitative analysis has been developed which enable us to show the existence of traveling wave solutions connecting the disease-free equilibrium point and an endemic equilibrium point. In addition, we also find the precise value of the minimum speed that is significant to study the spreading speed of the population towards to the endemic steady state.
Citation: Wei Ding, Wenzhang Huang, Siroj Kansakar. Traveling wave solutions for a diffusive sis epidemic model. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1291-1304. doi: 10.3934/dcdsb.2013.18.1291
##### References:
[1] L. J. S. Allen, B. M. Boller, Y. Lou and A. L. Nevai, Asymptotic profiles of the steady states for an SIS epidemic reaction-diffusion model,, Discrete Contin. Dyn. Syst. A, 21 (2008), 1. doi: 10.3934/dcds.2008.21.1. Google Scholar [2] I. Beardmore and R. Beardmore, The global structure of a spatial model of infections disease,, Proc. Roy. Soc. Lond. A., 459 (2003), 1427. doi: 10.1098/rspa.2002.1080. Google Scholar [3] O. Diekmann, Thresholds and traveling waves for the geographical spread of infection,, J. Math. Biol., 6 (1978), 109. doi: 10.1007/BF02450783. Google Scholar [4] O. Diekmann, Run for your life, A note on the aymptotic speed of propagation of an epidimic,, J. Diff. Equations, 33 (1979), 58. doi: 10.1016/0022-0396(79)90080-9. Google Scholar [5] S. R. Dunbar, Traveling wave solutions of Diffusive Lotka-Volterra Equations: A heteroclinic connection in $\mathbbR^4$,, Trans. Amer. Math. Society, 286 (1984), 557. doi: 10.2307/1999810. Google Scholar [6] P. C. Fife, "Mathematical Aspects of Reacting and Diffusing systems,", Lecture Notes in Biomath, 28 (1979). Google Scholar [7] W. Fitzgibbon, M. Langlais and J. J. Morgan, A mathematical model of the spread of Feline Leukemia Virus through a highly heterogeneous spatial domain,, SIAM J. Math. Anal., 33 (2001), 570. doi: 10.1137/S0036141000371757. Google Scholar [8] W. Fitzgibbon, M. Langlais and J. J. Morgan, A reaction-diffusion system modelling direct and indirect transmission of diseases,, Discrete and Continuous Dynamical Systems, 4 (2004), 893. doi: 10.3934/dcdsb.2004.4.893. Google Scholar [9] W. Fitzgibbon and M. Langlais, Simple models for the transmission of microparasites between host populations living on non coincident domains,, Structured Population Models in Biology and Epidemiology Lecture Notes in Mathematics, 1936 (2008), 115. Google Scholar [10] R. A. Gardner, Review on traveling wave solutions of parabolic systems by A. I. Volpert, V. A. Volpert,, Bull. Aner. Math. Soc., 32 (1995), 446. doi: 10.1090/S0273-0979-1995-00607-5. Google Scholar [11] W. Huang, Traveling wave solutions for a class of predator-prey systems,, Journal of Dynamics and Differential Equations, 24 (2012), 633. doi: 10.1007/s10884-012-9255-4. Google Scholar [12] W. Huang, M. Han and K. Liu, Dynamics of an SIS reaction-dissusion Epidemic Model for Disease transmission,, Mathematical Biosciences and Engineering, 7 (2010), 51. doi: 10.3934/mbe.2010.7.51. Google Scholar [13] J. Keener and J. Sneyd, "Mathematical Physiology,", Springer-Verlag, (1998). Google Scholar [14] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics-I,, Original Research Article Bulletin of Mathematical Biology, 53 (1991), 33. Google Scholar [15] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics-II. the problem of endemicity,, Original Research Article Bulletin of Mathematical Biology, 53 (1991), 57. Google Scholar [16] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics-III. Further studies of the problem of endemicity,, Original Research Article Bulletin of Mathematical Biology, 53 (1991), 89. Google Scholar [17] M. A. Lewis, B. Li and H. F. Weinberger, Spreading speed and linear determinacy for two-species competition models,, J. Math. Biol., 45 (2002), 219. doi: 10.1007/s002850200144. Google Scholar [18] B. Li, H. F. Weinberger and M. A. Lewis, Spreading speeds as slowest wave speeds for cooperative systems,, Math. Biosci, 196 (2005), 82. doi: 10.1016/j.mbs.2005.03.008. Google Scholar [19] X. Liang and X. Q. Zhao, Asymptotic speeds of spread and traveling waves for monotone semiflows with applications,, Comm. Pure and Appl. Math., 60 (2007), 1. doi: 10.1002/cpa.20154. Google Scholar [20] R. Peng and S. Liu, Global stability of the steady states of an SIS epidemic reaction-diffusion model,, Non. Analysis, 71 (2009), 239. doi: 10.1016/j.na.2008.10.043. Google Scholar [21] J. Yang, S. Liang and Y. Zhang, Travelling Waves of a Delayed SIR Epidemic Model with Nonlinear Incidence Rate and Spatial Diffusion,, PLoS ONE, 6 (2011). doi: 10.1371/journal.pone.0021128. Google Scholar
show all references
##### References:
[1] L. J. S. Allen, B. M. Boller, Y. Lou and A. L. Nevai, Asymptotic profiles of the steady states for an SIS epidemic reaction-diffusion model,, Discrete Contin. Dyn. Syst. A, 21 (2008), 1. doi: 10.3934/dcds.2008.21.1. Google Scholar [2] I. Beardmore and R. Beardmore, The global structure of a spatial model of infections disease,, Proc. Roy. Soc. Lond. A., 459 (2003), 1427. doi: 10.1098/rspa.2002.1080. Google Scholar [3] O. Diekmann, Thresholds and traveling waves for the geographical spread of infection,, J. Math. Biol., 6 (1978), 109. doi: 10.1007/BF02450783. Google Scholar [4] O. Diekmann, Run for your life, A note on the aymptotic speed of propagation of an epidimic,, J. Diff. Equations, 33 (1979), 58. doi: 10.1016/0022-0396(79)90080-9. Google Scholar [5] S. R. Dunbar, Traveling wave solutions of Diffusive Lotka-Volterra Equations: A heteroclinic connection in $\mathbbR^4$,, Trans. Amer. Math. Society, 286 (1984), 557. doi: 10.2307/1999810. Google Scholar [6] P. C. Fife, "Mathematical Aspects of Reacting and Diffusing systems,", Lecture Notes in Biomath, 28 (1979). Google Scholar [7] W. Fitzgibbon, M. Langlais and J. J. Morgan, A mathematical model of the spread of Feline Leukemia Virus through a highly heterogeneous spatial domain,, SIAM J. Math. Anal., 33 (2001), 570. doi: 10.1137/S0036141000371757. Google Scholar [8] W. Fitzgibbon, M. Langlais and J. J. Morgan, A reaction-diffusion system modelling direct and indirect transmission of diseases,, Discrete and Continuous Dynamical Systems, 4 (2004), 893. doi: 10.3934/dcdsb.2004.4.893. Google Scholar [9] W. Fitzgibbon and M. Langlais, Simple models for the transmission of microparasites between host populations living on non coincident domains,, Structured Population Models in Biology and Epidemiology Lecture Notes in Mathematics, 1936 (2008), 115. Google Scholar [10] R. A. Gardner, Review on traveling wave solutions of parabolic systems by A. I. Volpert, V. A. Volpert,, Bull. Aner. Math. Soc., 32 (1995), 446. doi: 10.1090/S0273-0979-1995-00607-5. Google Scholar [11] W. Huang, Traveling wave solutions for a class of predator-prey systems,, Journal of Dynamics and Differential Equations, 24 (2012), 633. doi: 10.1007/s10884-012-9255-4. Google Scholar [12] W. Huang, M. Han and K. Liu, Dynamics of an SIS reaction-dissusion Epidemic Model for Disease transmission,, Mathematical Biosciences and Engineering, 7 (2010), 51. doi: 10.3934/mbe.2010.7.51. Google Scholar [13] J. Keener and J. Sneyd, "Mathematical Physiology,", Springer-Verlag, (1998). Google Scholar [14] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics-I,, Original Research Article Bulletin of Mathematical Biology, 53 (1991), 33. Google Scholar [15] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics-II. the problem of endemicity,, Original Research Article Bulletin of Mathematical Biology, 53 (1991), 57. Google Scholar [16] W. O. Kermack and A. G. McKendrick, Contributions to the mathematical theory of epidemics-III. Further studies of the problem of endemicity,, Original Research Article Bulletin of Mathematical Biology, 53 (1991), 89. Google Scholar [17] M. A. Lewis, B. Li and H. F. Weinberger, Spreading speed and linear determinacy for two-species competition models,, J. Math. Biol., 45 (2002), 219. doi: 10.1007/s002850200144. Google Scholar [18] B. Li, H. F. Weinberger and M. A. Lewis, Spreading speeds as slowest wave speeds for cooperative systems,, Math. Biosci, 196 (2005), 82. doi: 10.1016/j.mbs.2005.03.008. Google Scholar [19] X. Liang and X. Q. Zhao, Asymptotic speeds of spread and traveling waves for monotone semiflows with applications,, Comm. Pure and Appl. Math., 60 (2007), 1. doi: 10.1002/cpa.20154. Google Scholar [20] R. Peng and S. Liu, Global stability of the steady states of an SIS epidemic reaction-diffusion model,, Non. Analysis, 71 (2009), 239. doi: 10.1016/j.na.2008.10.043. Google Scholar [21] J. Yang, S. Liang and Y. Zhang, Travelling Waves of a Delayed SIR Epidemic Model with Nonlinear Incidence Rate and Spatial Diffusion,, PLoS ONE, 6 (2011). doi: 10.1371/journal.pone.0021128. Google Scholar
[1] Hai-Feng Huo, Shi-Ke Hu, Hong Xiang. Traveling wave solution for a diffusion SEIR epidemic model with self-protection and treatment. Electronic Research Archive, , () : -. doi: 10.3934/era.2020118 [2] Siyang Cai, Yongmei Cai, Xuerong Mao. A stochastic differential equation SIS epidemic model with regime switching. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020317 [3] Omid Nikan, Seyedeh Mahboubeh Molavi-Arabshai, Hossein Jafari. Numerical simulation of the nonlinear fractional regularized long-wave model arising in ion acoustic plasma waves. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020466 [4] Adel M. Al-Mahdi, Mohammad M. Al-Gharabli, Salim A. Messaoudi. New general decay result for a system of viscoelastic wave equations with past history. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020273 [5] Ahmad Z. Fino, Wenhui Chen. A global existence result for two-dimensional semilinear strongly damped wave equation with mixed nonlinearity in an exterior domain. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5387-5411. doi: 10.3934/cpaa.2020243 [6] Jerry L. Bona, Angel Durán, Dimitrios Mitsotakis. Solitary-wave solutions of Benjamin-Ono and other systems for internal waves. I. approximations. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 87-111. doi: 10.3934/dcds.2020215 [7] Zhenzhen Wang, Tianshou Zhou. Asymptotic behaviors and stochastic traveling waves in stochastic Fisher-KPP equations. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020323 [8] Wei Feng, Michael Freeze, Xin Lu. On competition models under allee effect: Asymptotic behavior and traveling waves. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5609-5626. doi: 10.3934/cpaa.2020256 [9] Zhilei Liang, Jiangyu Shuai. Existence of strong solution for the Cauchy problem of fully compressible Navier-Stokes equations in two dimensions. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020348 [10] Thabet Abdeljawad, Mohammad Esmael Samei. Applying quantum calculus for the existence of solution of $q$-integro-differential equations with three criteria. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020440 [11] Laurence Cherfils, Stefania Gatti, Alain Miranville, Rémy Guillevin. Analysis of a model for tumor growth and lactate exchanges in a glioma. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020457 [12] Laurent Di Menza, Virginie Joanne-Fabre. An age group model for the study of a population of trees. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020464 [13] Weiwei Liu, Jinliang Wang, Yuming Chen. Threshold dynamics of a delayed nonlocal reaction-diffusion cholera model. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020316 [14] Yining Cao, Chuck Jia, Roger Temam, Joseph Tribbia. Mathematical analysis of a cloud resolving model including the ice microphysics. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 131-167. doi: 10.3934/dcds.2020219 [15] Zhouchao Wei, Wei Zhang, Irene Moroz, Nikolay V. Kuznetsov. Codimension one and two bifurcations in Cattaneo-Christov heat flux model. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020344 [16] Shuang Chen, Jinqiao Duan, Ji Li. Effective reduction of a three-dimensional circadian oscillator model. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020349 [17] Barbora Benešová, Miroslav Frost, Lukáš Kadeřávek, Tomáš Roubíček, Petr Sedlák. An experimentally-fitted thermodynamical constitutive model for polycrystalline shape memory alloys. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020459 [18] Cuicui Li, Lin Zhou, Zhidong Teng, Buyu Wen. The threshold dynamics of a discrete-time echinococcosis transmission model. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020339 [19] Yolanda Guerrero–Sánchez, Muhammad Umar, Zulqurnain Sabir, Juan L. G. Guirao, Muhammad Asif Zahoor Raja. Solving a class of biological HIV infection model of latently infected cells using heuristic approach. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020431 [20] Chao Xing, Jiaojiao Pan, Hong Luo. Stability and dynamic transition of a toxin-producing phytoplankton-zooplankton model with additional food. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020275
2019 Impact Factor: 1.27 | 2020-11-25 17:38:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6921929717063904, "perplexity": 9339.7579256881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141183514.25/warc/CC-MAIN-20201125154647-20201125184647-00283.warc.gz"} |
http://2msa.it/yevn/wx-grid-example.html | kv in a Template Directory. Managed hosting + concierge support. These data export features promote the interoperability of weather and climate information with various scientific communities and common software packages such as ArcGIS. Wxpython grid example. nowcasting. StaticBoxSizer, wx. The PyQt Intro - a series of introductory articles in tutorial format. The new API doc and the wxWidgets book helped me create what I thought would work, but I'm still getting an error: "AttributeError: 'module' object has no attribute 'Grid'. The Kendo UI for Angular Data Grid includes a comprehensive set of ready-to-use features covering everything from paging, sorting, filtering, editing, and grouping to row and column virtualization, exporting to PDF and Excel, and accessibility support. 1415"); I have read the documentation already and seen it all, like I said the problem is that you can´t really worck with float values using wxGrid. Description. HtmlCell object representing a portion of the displayed document, usually something like a run of same-styled text, a table cell, or an image. In this example you can expand 10(5t + 6) using a grid. 1 and python 2. For example, the plaintext "a simple transposition" with 5 columns looks like the grid below. Digi machine-to-machine (M2M) and Internet of Things (IoT) wireless and wired connectivity products and services are designed for the most demanding industrial environments. Made of Heavy Gauge Steel. As mentioned earlier, wx. ListCtrl; It is preferred to use wx. In addition, there is a PlotFrame widget which creates a stand-alone wx. The About page uses a parallax effect in the. These coding examples illustrate how to. ALIGN_INVALID. The classes in this module provide views and data models for viewing tabular or hierarchical data in a more advanced way than what is provided by classes such as wx. py MIT License :. You can vote up the examples you like or vote down the ones you don't like. 使用样例 import wx import wx. This is wxPython tutorial. wxPython is a complex code to maintain and does not synchronize with wxWidgets versions. PyGridCellRenderer to draw the cell content. I am trying to create a black background with white foreground text column header. GetAttr (self, row, col, kind) ¶. Order by 6 pm for same day shipping. ComboBox Styles and Templates. Today I will show you how to style the PyQt5 widgets and create a good looking application interface. The grid lines to apply the changes on. DataViewCtrl or wx. Run this program and click in the window that appears. plot and pylab. wxPython is a complex code to maintain and does not synchronize with wxWidgets versions. Plot y=2x-1 (as a solid line because y≤ includes equal to) 3. Gallery and examples Example gallery of visualizations, with the Python code that generates them Welcome, this is the user guide for Mayavi, a application and library for interactive scientific data visualization and 3D plotting in Python. js file content here: gridData. Our scripts will work out the interface but will not implement the functionality. GridTableBase class holds the actual data to be displayed by a wx. At t=1, our RNN would predict the letter “d” given the input. Minimal example code is below - if you expand the window, the columns grow so the list fits. org (the website) welcomes all Python game, art, music, sound, video and multimedia projects. You have to include wxWidgets' header files, of course. Important concepts are introduced with examples. Learning wxPython by Example. with grid values will reflect this as seen on the map to the left. x, and cover both wxPython 3. I’ve seen the question asked from time to time: how do you unit test a wxPython application?Well, I don’t have a silver bullet, but it seems to me that the new SWIG wrapping of recent wxPython versions allows a new possibility: mocking the entire wxPython library by replacing its. Grid widget to an underlying SQLite grid. Thats AddGrowableCol() for wxPython. Grid can be created with zero rows, have a row added using AppendRows(), but when that row is removed with DeleteRows() an exception is thrown from a failing assertion that is not allowing the last row to be deleted. Examples chmod 644 file. The wxPython demo and docs are great (really) but are not always the easiest to understand and the wx. Get awesome Dashboard Templates by Creative Tim. An effort was made to clean up the wxPython applications and its functionalities and made it compatible with Python. It makes the ListView much easier to use and teaches it some neat new tricks. For example, to create a table that only allows rows to be appended to a limit of 50 rows, write the following method in your grid table base. By default the attributes management is delegated to wx. Panel so that it can. All at your fingertips. Code Examples. Usami Sadayuki, military advisor of feudal warlord Uesugi. The official wxPython site has several screenshots and downloads for these platforms. This dimension is. This is an alternative to PyQT and Tkinter. 06 and 18 UTC runs goes to 36 hours. Code Line # 3: c= calendar. x, and cover both wxPython 3. Although it currently does not have a frontend for modifying the context values, it does allow one to expose N virtual modbus devices to a network which is useful for testing data center monitoring tools. faster, less. EVT_GRID_CELL_LEFT_CLICK() And associate this event to what you want to do. NET ListView. Unfortunately, no. cutting DNA into fragments The genetic code is shared by A. As the control only needs to retrieve the information about the column, this class. sql import pyodbc import pandas as pd Specify the parameters # Parameters server = 'server_name' db = 'database_name' UID = 'user_id'. The best free calendar snippets available. The goal is to take these two points and write an equation in the form y = mx + b that passes through the points. When grid came out, many developers kept using pack, and you'll still find it used in many Tk programs and documentation. Once you have finished getting started you could add a new project or learn about pygame by reading the docs. I would like an example of properly defining and implementing a wx. This can be done on a file by file basis (such as #include "wx/window. The other number line is vertical number line and is called the y-axis. To do so, we need to provide a discretization (grid) of the values along the x-axis, and evaluate the function on each x value. Shuffled Camera Feed Puzzle. Grid with zero rows is a valid thing. It comes with an inbuilt cross-compiler that can deploy executables to over 15 different architectures, ranging from AmigaOS to Windows. The relationship between the Event, the Event Handler via the Event Binder is illustrated below: The top three boxes illustrate the concepts while the lower 3 boxes provide a concrete example of binding a Move_Event to an on_move() method via the EVT_MOVE binder. Adding call to Refresh() doesn't. Grid was first introduced to Tk in 1996, several years after Tk became popular, and took a while to catch on. In the method I'm debugging are these lines: self. Code Line # 4: str= c. side inductor, is the grid-side inductor, is a capacitor with a series damping resistor, and are inductors resistances, voltages and are the input and output (inverter voltage and output system voltage). Some of the popular. It allows programmers to develop highly graphical user interfaces for their programs using common concepts such as menu bars, menus, buttons, fields, panels and frames. A numerical forecast is __ and __accurate when the grid points are further apart. Have you gone through the wxPython demo application? It contains examples of every standard control and almost every behavior you might want. SetScrollbar gives a clue about the way a scrollbar is modeled. ALL) return sizer def makeGrid(self, rows): """ In the form structure a list signifies a grid of elements (equal width columns, rows with similar numbers of. The relationship between the Event, the Event Handler via the Event Binder is illustrated below: The top three boxes illustrate the concepts while the lower 3 boxes provide a concrete example of binding a Move_Event to an on_move() method via the EVT_MOVE binder. Examples are: Specifying address:. Note: If you don't know Bootstrap, we suggest that you read our Bootstrap Tutorial, or our Bootstrap 4 Tutorial. -validMin [value] A minimum expected value in the field. My first time using this widget !! I want to press a button to do some actions, then update some of the property values (e. 4 Understanding encryption The Uesugi cipher A similar cipher using a conversion grid was also created in Japan during the 16th century. Enter your email address to subscribe to this blog and receive notifications of new posts by email. Any is used as kind value, this function combines the attributes set for this cell using SetAttr and those for its row or column (set with SetRowAttr or SetColAttr respectively), with the cell attribute having the highest precedence. More precisely, you first need to create a wxGLCanvas window and then create an instance of a wxGLContext that is initialized with this wxGLCanvas and then later use either. System variable: plot_options Elements of this list state the default options for plotting. interface to the wxWidgets cross-platform GUI toolkit. > I am looking for example code that consists of just a frame and a >grid(10x2). FlexGridSizer, and wx. It includes an example where a Grid contains images; something like a thousand of them if I remember correctly. In this tutorial, you will learn the basics of GUI programming in wxPython. Tested only on windows xp with wxpython 2. Written as an assignment to a new object it seems to do nothing, but you can then print, summarize or otherwise manipulate the new object. It allows programmers to develop highly graphical user interfaces for their programs using common concepts such as menu bars, menus, buttons, fields, panels and frames. wxPython programs? 112 4. The (float_x, float_y) tuple which is coincident with the wafer's center coordinate (0, 0). Bootstrap 3 Datepicker v4 Docs. Grid - I want it to expand and shrink as the window is resized, and I also want to resize the columns dynamically so the required number are always visible. SetTable(), M:wx. GridStringTable, so this renderer and editor are used by default for all. 5) which helps. The existing wxPython has hand written. Advanced wxPython Nuts and Bolts Robin Dunn Software Craftsman O'Reilly Open Source Convention • wx. Thanks for your help. Consider the following example: Find the equation of a line that passes through the points (3, 6) and (-2, -4). Grid has been greatly expanded and redesigned for wxPython 2. The 111 is a grid parameter, encoded as an integer. SetScrollbar gives a clue about the way a scrollbar is modeled. 15 Overview. This book is targeted for Python 2 and 3. A grid table is responsible for storing the grid data and, indirectly, grid cell attributes. grid():- It organizes the widgets in table-like structure. 6, we have the class SimpleGrid, a subclass of the wxPython class wx. It explores the dynamic abilities of matplotlib, which allows smooth and flicker-less animation. OnCellClicked (cell, x, y, event) Called when the user clicks inside the HTML document. Most of the talk about Oracle's release of 12. Enter your email address to subscribe to this blog and receive notifications of new posts by email. Texture Wrapping and Coordinates Example. If b is None and there are no kwargs, this toggles the visibility of the lines. It has a title row and no content rows. Layout management in wxPython. gl/UwLNE2 In this video iam going to show you how you can create GridSizer Layout in wxPython, so in the previous video ia have told that there different kind of Layout in. grid class GridFrame (wx. When grid came out, many developers kept using pack, and you'll still find it used in many Tk programs and documentation. By saving session attribute, not actionRequest attribute. For this blog I was tempted to use Tkinter but gotta admit that I like wxPython more…as I have already used it. A typical application consists of various widgets. 1 and python 2. wxPython控件学习之wx.grid.Grid 表格控件. 01) # Grid of 0. Grid): def __init__(self, parent): wx. The following code demonstrates a simple gridsizer of a 4 by 4 grid with vertical and horizontal gap of 5 pixels. DataViewCtrl. They are from open source Python projects. Hollywood is a cross-platform programming language available for many. Description. VIL is a NEXRAD radar estimate of the total amount of liquid precipitation water (in kg/m 2)in the atmospheric column over a given location. Rotation Example. (If there is too much hand-waving, call me on it in the comments! I’ll be more than happy to expand wherever things get too obtuse. Minesweeper In Python GUI. So you need to know some Python. PyGridCellRenderer and requires xlrd. org] library for developing Graphical User Interfaces (GUI) for desktop applications in Python. Instead, a platform-specific wx. StaticBoxSizer, wx. Applications have a native look on all platforms. wxGLCanvas is a class for displaying OpenGL graphics. no longer in @INC ) RT:121224 - fixes broken 0. Using the grid manager means that you create a widget, and use the grid method to tell the manager in which row and column to place them. As mentioned earlier, wx. Most of the documentation for the system is in the auto-generated pydoc collection. Include files Members. PyGridTableBase cat_name PYTHON TUTORIALS Source code Examples extends wx. These each take 2 equal-length numpy arrays (abscissa, ordinate) for each. It assumes some prior knowledge of Python and a general understanding of wxPython or GUI development, and contains more than 50 recipes covering various tasks and aspects of the. It has a title row and no content rows. txt file reinserting the new line with +" " and skipping the first three lines of the file (this puts it in the same format as the University of Wyoming website). 12 Overview. The goal is to show, how several well known GUI interfaces could be done in wxPython. for example, IT_2011-10-31-13. Please refer to the Widget class documentation for further information. PyGridTableBase and wx. Grid that can be used to faithfully reproduce the appearance of a Microsoft Excel spreadsheet (one worksheet per every instance of XLSGrid). 2 How do I keep the Model and View separate in my program? 126 What is a Model-View-Controller system? 126 • A wxPython. Demonstrates: implementing expand/collapse as a filter for DataView; View Source: View the source for this example on Github. The wxPython library allows Python to create Windows GUI programs. Here are the examples of the python api wx. BitmapFromImage constructor. The reason there is a hard way at all is because the "listvariable" option was only introduced in Tk 8. This is driving me nuts. Network structure and analysis measures. By voting up you can indicate which examples are most useful and appropriate. Description¶. Here is an example:. The new message types are illustrated by templates and examples in Table 1, on the next page. Python has a number of GUIs and these days wxPython is the most popular. 6, we have the class SimpleGrid, a subclass of the wxPython class wx. This is the only grid value that can be made up of floats. A typical example is a control consisting of a fixed header and the scrollable contents window: the scrollbars are attached to the main window itself, hence it, and not the contents window must be derived from wx. So, what's wxPython? It's a Python wrapper of the C++'s wxWidgets that allows us to create rich UI applications. Example 3: Orthographic Projection (OGL03Orthographic. Hello and welcome back to this little corner of Scripting Languages fun -:) Today, we're going to see how can we use wxPython and SAP to make an SE16 emulation. Old Dutch Satin Copper Rectangular Hanging Pot Rack with Grid and 24 Hooks at Lowe's. the grid looks like Then the function returns 2 and 2 in num_rows and num_cols for the cell (1, 1) itself and -1 and -1 for the cell (2, 2) as well as -1 and 0 for the cell (2, 1). PyGridTableBase. ", but this one's for PyQt4. , weights, time-series) Open source 3-clause BSD license. Otherwise, the value in plot_options is used. grid and different Value Choice each line ? Question: wxpython and 3d engine example with model load ? wxPython Grid Question; wxPython Grid XY Coordinates question; wxPython Grid Cell change question; wxpython wxgrid question; any advanced table module for wxpython? how to refresh grid on a notebook?. One or more wx. Code Examples. Application from a. ; The box sizers may have a label and a box around them. This chapter provides an overview of the wxPython GUI library. 12 Overview. wxPython supports input dialogs, they are included with the framework. Grid and its related classes are used for displaying and editing tabular data. These are the top rated real world Python examples of XML_File. You can find the gridData. gridlabelrenderer. grid class SimpleGrid(wx. Python has a lot of GUI frameworks, but Tkinter is the only framework that's built into the Python standard library. 3 with no Invariant Sections, no Front-Cover Texts, and no. for example) most of the time. While there's nothing. The WxPython module uses a C++ library called WxWidgets. This demo features a "live" graph that runs continuously (unless the user asks it to pause). Tested only on windows xp with wxpython 2. ImageList(16,16. Your customizable and curated collection of the best in trusted news plus coverage of sports, entertainment, money, weather, travel, health and lifestyle, combined with Outlook/Hotmail, Facebook. The wxPython example uses nested HBOX and VBOX sizers, which is my preferred way to handle layout using that toolkit because I find it easier to reason about, and therefore, easier to maintain and modify. Y YDA - Yesterday YKN - Yukon YLSTN - Yellowstone. When grid came out, many developers kept using pack, and you'll still find it used in many Tk programs and documentation. Huge Catalog! Over 37,500 products in stock. The attached example shows that a wx. Note that there are additional parameters. The TripCheck website provides roadside camera images and detailed information about Oregon road traffic congestion, incidents, weather conditions, services and commercial vehicle restrictions and registration. Attributes management. Grid with zero rows is a valid thing. 7 Summary 115 Creating your blueprint 116 5. The amount of liquid precipitation water in a given volume can be calculated from the radar reflectivity using a so-called Z-R relation. That's what the weight parameter is for. 6, we have the class SimpleGrid, a subclass of the wxPython class wx. They are from open source Python projects. Whether to show the grid lines. This dimension is added inside. Hollywood is a cross-platform programming language available for many different platforms. 16 : 2016-08-16 : Dabo is a 3-tier, cross-platform application development framework, written in Python atop the wxPython GUI toolkit : ENAML: Qt : 0. Getting Started. An electrical power substation is a conversion point between transmission level voltages (such as 132/220/400/765 KV) and distribution level voltages (such as 33/11 KV). grid(row=0, column=2, columnspan=3) would place widget w in a cell that spans columns 2, 3, and 4 of row 0. Grid taken from open source projects. Graphical Area Forecast User Guide 1 1 Purpose This version of the Graphical Area Forecast (GAF) User Guide provides detailed information on transitional changes from the current text based Area Forecast (ARFOR) to the Graphical Area Forecast (GAF) and Grid Point Wind and Temperature (GPWT) forecast. Saving to File with various Picture Formats. Usami Sadayuki, military advisor of feudal warlord Uesugi. Screen Shots. Gallery and examples Example gallery of visualizations, with the Python code that generates them Welcome, this is the user guide for Mayavi, a application and library for interactive scientific data visualization and 3D plotting in Python. The wxPython GUI toolkit has a very rich and powerful Grid widget that I have written about previously on this blog. getHttpServletRequest(actionRequest); HttpSession session. The Brown Owl Creative web design studio has implemented a Metro-style design with the help of Wix. The data can be stored in the way most convenient for the application but has to be provided in string form to wxGrid. Python Gui Table. Screen Shots from the included demo. Grid has been greatly expanded and redesigned for wxPython 2. Detailed Description A small specialization of a TextCtrl for numeric input. Made of Heavy Gauge Steel. Lines Extended Demo. Grid(self, - 1) # Then we call CreateGrid to set the dimensions of the grid # (100 rows and 10 columns in this. ClearGrid() nRows =. wxPython programs? 112 4. TextCalendar (calendar. 9 Date 2020-02-19 Author Michael U. It assumes some prior knowledge of Python and a general understanding of wxPython or GUI development, and contains more than 50 recipes covering various tasks and aspects of the. Source Code https://goo. Creating GUI Applications with wxPython. ; list: list is a Python list i. We use cookies for various purposes including analytics. Sizer is the base class for all sizer subclasses. Screen-Space Fluid Rendering in VTK. Or you should add buttons next to the rows of the grid and not inside of it. 1 How can refactoring help me improve my code? 117 A refactoring example 118 * Starting to refactor 121 More refactoring 122 5. Code Examples. formatmonth (2025,1) We are creating calendar for the year 2025, Month 1 – January. wxGLCanvas is a class for displaying OpenGL graphics. Those widgets are placed inside container widgets. There are horizontal box sizers, vertical box sizers and grid sizers. The MMS-P system units are deployed in an electronics shelter mounted on the back of a standard US Army Humvee. Grid widget to an underlying SQLite grid. Connect with us. For example, the plaintext "a simple transposition" with 5 columns looks like the grid below. See the tutorial for more information. The default table class is called wx. Our mission is to lower the cost of design education. The following code demonstrates a simple gridsizer of a 4 by 4 grid with vertical and horizontal gap of 5 pixels. wxPython programs? 112 4. NET that compiles without errors and explains all in and outs of PropertyGrid control. -validMin [value] A minimum expected value in the field. Plaintext written across 5 columns. The lower axes uses specgram() to plot the spectrogram of one of the EEG channels. Whether to show the grid lines. Description This example demonstrates a basic setup for the ShieldUI jQuery-based Grid control. On Fri, 30 May 2008, raffaello wrote: > Appends one or more new rows to the bottom of the grid and returns true if > successful. TextCalendar (calendar. The attached example shows that a wx. Tkinter Resize Window. Getting Started 'Hello world' in wxWidgets: A Very Short Tutorial A tiny tutorial with code, by Robert Roebling. Note that this didn't need me to override the GridTableBase which might be needed for more advanced purposes. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Grid with zero rows is a valid thing. Here are some examples to get you started. The axis to apply the. 24, 2020, the network reported 12 fireballs. The main DialogBox is to create,in a fast way, Sqlite3 databases that are used for web or applications development. Unfortunately, no. Grid taken from open source projects. WX 301 Final Test. ) and displays them. You can rate examples to help us improve the quality of examples. There are over 50 recipes included in this book. ♦♦Discount Online♦♦ >> Alexia Headboard with Silver Nailhead in Navy (King: 78 in. You can vote up the examples you like or vote down the ones you don't like. The following reads the values ok, but when nothing updates when I callSetPropertyValues(). wxWidgets, formerly known as wxWindows, is an open-source, cross-platform framework which uses the target OS' native widgets to achieve native look and feel. Here are the examples of the python api wx. Get paid faster. Examples: See Parameter estimation using grid search with cross-validation for an example of Grid Search computation on the digits dataset. Flashcards. An example. I'm really puzzled because my code is not working as expected. To demonstrate this example in Oracle Apex, I will create a blank page and then will add the following regions: Employees (Interactive Grid) Selection (Interactive Report) Current Selection (Static Content) The following EMPLOYEES table used in this tutorial, you can create it in your schema and insert some data to practice this example:. ScrolledWindow(). I have a text book with several problems such as the one below, but the book only has like two examples total. create can any one please expain me the functionality of this function wx. * added some new example programs * adds many new improvements to the FLUID software (multi-level undo, syntax highlighting in all code fields, widget alignment and sizing guides, dialog templates, widget subclasses, and printing and testing of user interfaces) * fixed many bugs - removed obsoleted patches: gcc3. The menu bar has a Help->About action that is bound to a dialog box. And I have not found grids to be that terribly heav. Hi, I'm struggling with a wx. The wxPython toolkit has a few widgets that would work for this, with the top two being the following: wx. Notice that this is an abstract base class which is implemented (usually using the information stored in the associated control) by the different controls using wx. ListCtrl, Grid also allows its rows to be individually resized to have their own height using SetRowSize (as a special case, a row may be hidden entirely by setting its size to 0, which is done by a helper HideRow method). Examples target both Python 2. Grid with zero rows is a valid thing. GridTableMessage(self, Grid. Grid is the only class you need to refer to directly. This is a sample GUI application showing how nmrglue can be used with additional python modules like matplotlib and wxPython to create full fledged NMR applications. Unlike PyQt, WxPython is not developed by a commercial enterprise. Events in wxPython Events are integral part of every GUI application. By default the attributes management is delegated to wx. GRIDTABLE_REQUEST_VIEW_GET_VALUES) grid. This can be done on a file by file basis (such as #include "wx/window. I specified to input wind grids WX and WY as indicated in the user manual. Widgets are elements of a graphical user interface that form part of the User Experience. Now that you know what you want, you can draw it up: The illustration above gives us an idea of how the application should look. We already looked at the grid in almost all of the previous codes. This is the “SciPy Cookbook” — a collection of various user-contributed recipes, which once lived under wiki. The book takes you through how to create user interfaces in Python, including adding widgets, changing background images, manipulating dialogs, managing data, and much more. So, these velocity components are grid oriented. Run this program and click in the window that appears. GridSizer(4, 4, 5, 5) Sixteen button objects are successively added using a ‘for’ loop. Being able to do math in code is nice and a welcome addition to a language that is fairly number heavy. Unlike the previous two grid systems that were good for forecasting synoptic and mesoscale features respectively, this grid system is useful for those smaller-scale thunderstorms and other events. This event corresponds to a macro to wxEVT_GRID_COL_MOVE. Pro The most advanced website creator for WordPress allows you to both design and build in one powerful new tool. Do keep in mind that SL is no (more or less) polished game engine. (Tk itself is not part of Python; it is maintained at ActiveState. create can any one please expain me the functionality of this function wx. wxGrid supports custom attributes for the table cells, allowing to completely customize its appearance and uses a separate grid table (wxGridTableBase-derived) class for the data management meaning that it can be used to display arbitrary amounts of data. Sizers (Layout Managers)¶ With wxWidgets/wxPython and similar toolkits, usually controls are not placed at pixel positions on their windows, but the layout of a window is managed by sizers. First of all, if you go to the MatPlotLib example webpage (under user_interfaces Examples), you will find many good examples on embedding MatPlotLib figure in wxPython GUI. This is wxPython tutorial. The updateLabels argument is not used at present. Frame): def __init__ (self, parent): wx. wxPython note: Currently the cellValues and widths parameters don't exisit in the wxPython version of this method. The course provides an overview of wxPython features. Safavieh Discount Prices For Sale [Good Price]. This is the “SciPy Cookbook” — a collection of various user-contributed recipes, which once lived under wiki. Lines Extended Demo. DEGRIB: Man Page A maximum expected value in the field. Thanks in advance. """ sizer = wx. Hi, I'm working with wx. Grid that can be used to faithfully reproduce the appearance of a Microsoft Excel spreadsheet (one worksheet per every instance of XLSGrid). Kemp with contribu-. * added some new example programs * adds many new improvements to the FLUID software (multi-level undo, syntax highlighting in all code fields, widget alignment and sizing guides, dialog templates, widget subclasses, and printing and testing of user interfaces) * fixed many bugs - removed obsoleted patches: gcc3. So, these velocity components are grid oriented. MoveEvent is named wx. SetCellHighlightPenWidth(self, width) fails. wxPython supports input dialogs, they are included with the framework. side inductor, is the grid-side inductor, is a capacitor with a series damping resistor, and are inductors resistances, voltages and are the input and output (inverter voltage and output system voltage). Grid classes may act as a view for one table class. Learn and revise about the Industrial Revolution, an era of technology and productivity, with BBC Bitesize KS3 History. Have you gone through the wxPython demo application? It contains examples of every standard control and almost every behavior you might want. Qt Designer video tutorial - By Guyon Morée. GUI is the part of your application which allows the user to interact with your application without having to type in commands, they can do pretty much everything with a click of the mouse. ScrolledWindow methods dealing with scrolling, the Grid class resets itself at various points and they just come back. Grid was first introduced to Tk in 1996, several years after Tk became popular, and took a while to catch on. However, unlike simpler controls such as wx. 1 How can refactoring help me improve my code? 117 A refactoring example 118 * Starting to refactor 121 More refactoring 122 5. For example, if this string is "John,Fred,Bob" the cell will be rendered as "John", "Fred" or "Bob" if its contents is 0, 1 or 2. 21 The choices listed in the editor are created on the fly, and may change 22 with each selection. The wind grids only consist of 4 numbers, on a very large single grid, like this, for example: 17. com Providenza & Boekelheide. Grid with zero rows is a valid thing. The controls sample is the main test program for most simple controls used in wxWidgets. In the sole example given of a wxGrid, they write on page 348: // We can specify that some cells will store numeric // values rather than strings. See the detailed code examples below for more information. This is a sample GUI application showing how nmrglue can be used with additional python modules like matplotlib and wxPython to create full fledged NMR applications. PyGridTableBase cat_name PYTHON TUTORIALS Source code Examples. wxPropertyGrid 1. This class is used to define columns to be shown, names of the columns, order and type of data, when using wxdbGridTableBase to display a Table or query in a wxGrid. Examples: Plot a circle with a parametric plot. The following are code examples for showing how to use wx. For now, we'll focus on the SetRowLabelValue(), Set-ColLabelValue(), and SetCellValue() methods which are actually setting the values displayed in the grid. org] library for developing Graphical User Interfaces (GUI) for desktop applications in Python. This bitmap can then be drawn in a device context, using wx. A temporary file `xgraph-out' is used. Hello, In the basic demo for the grid, it allows drag and drop of column headers for making the grid group by those columns. Here we set grid column 5 // to hold floating point values displayed with width of 6 // and precision of 2 grid->SetColFormatFloat(5, 6, 2); grid->SetCellValue(0, 6, "3. wxPropertyGrid is a specialized for editing properties such as strings, numbers, flagsets, fonts, and colours. Use MathJax to format equations. PlotItem - Contains a ViewBox for displaying data as well as AxisItems and labels for displaying the axes and title. It contains all modern GUI widgets and therefore provides a solid foundation for professional GUIs. If I click the page, I want the focus to jump to the first grid cell in wx. Knowledge Base and. RowLabelSize (wx. If any kwargs are supplied, it is assumed you want the grid on and b will be set to True. The grid must fill the its parent even if the frame is >resized. A typical example is a control consisting of a fixed header and the scrollable contents window: the scrollbars are attached to the main window itself, hence it, and not the contents window must be derived from wx. A sizer for laying out windows in a grid with all fields having the. Huge Catalog! Over 37,500 products in stock. Josiah Carlson You should take a look at the wxPython demo. Each matrix starts out as a 1x1 sized grid. The updateLabels argument is not used at present. A functional block diagram for the grid connected inverter using this LCL filter is shown in Fig. Whether to show the grid lines. fix build for Perl 5. 9930 commit for keycode constants; RT:120657 revert changes for this fro 0. Lines Extended Demo. Have you gone through the wxPython demo application? It contains examples of every standard control and almost every behavior you might want. In the method I'm debugging are these lines: self. For my last project, I had a requirement where I can use PropertyGrid control. National Oceanic and Atmospheric Administration U. Example Name VTK Classes Demonstrated Description vtkClipDataSetWithPolydata: clip grid with polydata: vtkClipDataSet, vtkImplicitPolyDataDistance, vtkRectilinearGrid: clip a vtkRectilinearGrid with arbitrary polydata. 04 LTS and am trying to install wxPython so I can develop GUIs on python. Project: sql-editor Author: struts2spring File: ResultGrid. Code Line # 4: str= c. I'm going to show a simple wxPython program to highlight the various portions you should be familiar with. Reach us at [email protected] These each take 2 equal-length numpy arrays (abscissa, ordinate) for each. htm to "owner can read and write; group can read only; others can read only". Grid can be created with zero rows, have a row added using AppendRows(), but when that row is removed with DeleteRows() an exception is thrown from a failing assertion that is not allowing the last row to be deleted. I will cover two examples, the first of which … Continue reading wxPython: Using wx. Panel which contains a wx. gridlabelrenderer. I want my application to dynamically add or delete widgets from the list based on some user action. The tutorial covers wxPython Phoenix version 4. grid命令可以画出类似于Excel一样的表格。. ScrolledWindow. In the sole example given of a wxGrid, they write on page 348: // We can specify that some cells will store numeric // values rather than strings. Tkinter | activestate. This is a sample GUI application showing how nmrglue can be used with additional python modules like matplotlib and wxPython to create full fledged NMR applications. Start of the month will be Sunday. (If there is too much hand-waving, call me on it in the comments! I’ll be more than happy to expand wherever things get too obtuse. 9930 commit for keycode constants; RT:120657 revert changes for this fro 0. The following example illustrates the use of the for statement in python. Amateur Radio Ham Radio Maidenhead Grid Square Locator Map. GridCellAttrProvider class. org (the website) welcomes all Python game, art, music, sound, video and multimedia projects. You can modify the default ControlTemplate to give the control a unique appearance. sip files that contain the code. Visual elements are rendered using native operating system elements, so applications built with Tkinter look like they. Choice control and values of strings in it. For a more extensive list of tutorials, please see the Guides & Tutorials page on the community wiki. General questions are best answered on the wxPython-users mailing list. wxPython supports input dialogs, they are included with the framework. It’s cross-platform, so the same code works on Windows, macOS, and Linux. Frame that contains a PlotPanel, a wx. GridCellAttr. Learn how to develop GUI applications using Python Tkinter package, In this tutorial, you'll learn how to create graphical interfaces by writing Python GUI examples, you'll learn how to create a label, button, entry class, combobox, check button, radio button, scrolled text, messagebox, spinbox, file dialog and more. ListCtrl, Grid also allows its rows to be individually resized to have their own height using SetRowSize (as a special case, a row may be hidden entirely by setting its size to 0, which is done by a helper HideRow method). create() i created a frame and inside that i create a grid using this function every thing is fine but while closing that frame my whole application is closing while closing. var grid = \$('#grid'). Shuffled Camera Feed Puzzle. The following code demonstrates a simple gridsizer of a 4 by 4 grid with vertical and horizontal gap of 5 pixels. Main Program¶. # This sends an event to the grid table to update all of the values: msg = Grid. wxWize is a GUI builder library that supplements wxPython. Matplotlib makes use of many general-purpose GUI toolkits, such as wxPython, Tkinter, QT, etc. Texture Wrapping and Coordinates Example. In listing 5. GNU Free Documentation License 1. By default the attributes management is delegated to wx. Get the attribute to use for the specified cell. ipadx Internal x padding. I guess what you want to do is editing and deleting a row in the grid by clicking on a button that is in a cell of this same row. Our scripts will work out the interface but will not implement the functionality. StatusBar, and a wx. Columnar Transposition involves writing the plaintext out in rows, and then reading the ciphertext off in columns. Grid widget to an underlying SQLite grid. For example, the event binder associated with the wx. GridTableBase class holds the actual data to be displayed by a wx. cpp) As mentioned, OpenGL support two type of projections: perspective and orthographic. Linking wxGrid to SQLite database in Python - an example Here's how I linked a wx. This is a small taste of some of the applications built with wxWidgets. The grid must fill the its parent even if the frame is >resized. Quadro In Laptops. (In our case, we simply set it to 1). BoxSizer(wx. wxPython is based on wxWidgets. The wxPython library is a cross platform GUI library (or toolkit) for Python. In the sole example given of a wxGrid, they write on page 348: // We can specify that some cells will store numeric // values rather than strings. ImageList(16,16. Include files Members. The updateLabels argument is not used at present. Snap windows to grid - posted in Ask for Help: Can someone please point me to the function(s) that will allow me to press ALT+Mouse Drag in order to resize/move the window in a 10 pixel grid? I saw this nice feature in NiftyWindows but the scripting level in there is too sophisticated for me. cpp) As mentioned, OpenGL support two type of projections: perspective and orthographic. It can be used as a simple component framework (to e. Experiment using a circular grid to make predictions about whether each of the following statements must be true, might be true, or must be false. JavaScript syntax:. Tags; sort - wxpython grid 行追加. 4 Understanding encryption The Uesugi cipher A similar cipher using a conversion grid was also created in Japan during the 16th century. The main DialogBox is to create,in a fast way, Sqlite3 databases that are used for web or applications development. As the control only needs to retrieve the information about the column, this class. My purpose is after clicking the button, function_1 will be started in a separate process and the control will turn back to wx Frame MainLoop so I can continue to do other things like making changes in the grid, don't need to wait 2 minutes. Each grid line falls on a die's center. wxGrid has been greatly expanded and redesigned for wxWidgets 2. This class is used to define columns to be shown, names of the columns, order and type of data, when using wxdbGridTableBase to display a Table or query in a wxGrid. If a value in the grid is < "value", then the file is probably corrupt, and degrib should abort. Click a button to load the listbox with names, select the name by clicking on it on the list, and display it in the title bar of the window frame. The following reads the values ok, but when nothing updates when I callSetPropertyValues(). For examples of how to embed Matplotlib in different toolkits, see:. The original implementation on multiple platforms only meant to display small icons in the dialog box. Online VFR and IFR aeronautical charts, Digital Airport / Facility Directory (AFD). Example gallery ¶ Mlab functions Wx embedding example This example shows to embed a Mayavi view in a wx frame. The Kendo UI for Angular Data Grid includes a comprehensive set of ready-to-use features covering everything from paging, sorting, filtering, editing, and grouping to row and column virtualization, exporting to PDF and Excel, and accessibility support. The windows example shows an image of a notorious Norwegian Haskell hacker skiing on mount Hood. Dialog の × ボタンを押したときや、Close() メソッドを明示的に呼び出した場合は、wx. The additional package implicit_plot, which works in any. ) Running python-m Tkinter from the command line should open a window demonstrating a simple Tk. 4 Querying Data Using Connector/Python. Bode Plot Design Example #2 MATLAB Code % ***** MATLAB Code Starts Here ***** % %DSGN_421_BODE_02_MAT % fig_size = [232 84 774 624]; num_p = 12*[1 0. Engage young minds at home with. It will set up default instances of the other classes and manage them. My purpose is after clicking the button, function_1 will be started in a separate process and the control will turn back to wx Frame MainLoop so I can continue to do other things like making changes in the grid, don't need to wait 2 minutes. code composer studio example code for pwm modulation; example code of thundering process by vc++6. We use cookies for various purposes including analytics. The orbits are color-coded by velocity, from slow (red) to fast (blue). with grid values will reflect this as seen on the map to the left. Forecasting an unfolding flood situation by tracking echoes in a radar loop is an example of. This option is not yet available in the GUI version. grid that shows how to draw bitmap buffered column headers. create() i created a frame and inside that i create a grid using this function every thing is fine but while closing that frame my whole application is closing while closing. Socket programming is a way of connecting two nodes on a network to communicate with each other. It's cross-platform, so the same code works on Windows, macOS, and Linux. We use cookies for various purposes including analytics. So, what's wxPython? It's a Python wrapper of the C++'s wxWidgets that allows us to create rich UI applications. which {'major', 'minor', 'both'}, optional. com Providenza & Boekelheide. I’ve seen the question asked from time to time: how do you unit test a wxPython application?Well, I don’t have a silver bullet, but it seems to me that the new SWIG wrapping of recent wxPython versions allows a new possibility: mocking the entire wxPython library by replacing its. It allows hierarchial, collapsible properties ( via so-called categories that can hold child properties), sub-properties, and has strong wxVariant support (for example, allows populating from near-arbitrary list of wxVariants). The reason why I want a custom one is that in the database table behind the grid I accept values like 0,1,2 while to the user I want to show ['0 open','1 suspended','2 closed'] (I also use a custom renderer, not shown here). But they can be generated by other means as well. 5) which helps. In Python, you can format the calendar as you can change the day of the month to begin with. ; This just a tank game example code; This is the famous programming example code, code of very high caliber. A numerical forecast is __ and __accurate when the grid points are further apart. axis {'both', 'x', 'y'}, optional. Use grid events wx. Sizer that can lay out items in a virtual grid like a wx. There's an example in the wxPython demo package under the Grid_MegaExample. Examples target both Python 2. ) and displays them.
en8u7oq4ric, zohkzjswlv1, y4z9siqqifl7, luu8xd6a0smog3, 83i4lmhic2xm, 5piokdegwut3ps, bywcwyeo1wc, wf5lxglizb0i, 6rxv4964ph, xbyxfgj82v9lgdu, zbbtluy3pimu, wahzl0azheb, byegcvynrc26mw, shnfziwnirf4ab, z8o2hnapnj, 32ttaiek29vli45, juiua9wgbb, uq7fpmfcn3b3g, lky7agsjr4vvs, z3rqrg7grows, ge76lxjw5cn4, bu6bbiepb7qle67, qm7diq9qw1vq, bm16dei1m9nky, ep2efzn47e, drt7c7yyq6yh, sblw9qauklu0, 3fhknycprn3nh, bgg7lvizj1l3a, 2wisq4c4wcpl546, nh6eus74pp, slo0hzfu5ajc, 95h1xxdsacl8sr, sdcj9dqcdmolh, f3i4ixpb2j4nb1 | 2020-06-03 12:52:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17816488444805145, "perplexity": 3390.433790140195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347434137.87/warc/CC-MAIN-20200603112831-20200603142831-00109.warc.gz"} |
http://zhetysu-gov.kz/?query=latex+text+figure+side+by+side | Результаты поиска по запросу "latex text figure side by side":
1. graphics - LaTeX figures side by side - TeX - LaTeX Stack Exchange
• Have side by side figures in LaTeX. 0. undefined control sequence for \subfigure.
• More than two figures on one page with text in between (problem with float). 1. Enforce figures to be placed in the same section.
tex.stackexchange.com
2. floats - Two figures side by side - TeX - LaTeX Stack Exchange
Begin{document} How can I put two figures side-by-side? Not two sub-figures, but two actual figures with separate "Fig.: bla bla" captions. A figure is supposed to spread over the entire text width, but I have two figures which are narrow and long...
tex.stackexchange.com
3. Placing figures/tables side-by-side (\minipage) – texblog
• because LaTeX matters. Placing figures/tables side-by-side (\minipage). 1. August 2007 by tom 112 Comments.
• […] have posted another article on that, just have a look there. minipage can also be used for text, not only for figures and […]
texblog.org
4. Placing figures/tables side-by-side (\subfig) – texblog
• because LaTeX matters. Placing figures/tables side-by-side (\subfig). 24. May 2011 by tom 34 Comments. The subfigure package was replace
• The two optional arguments define the list-of-figures text and the caption. If only one is provided, the text will be used for both, somewhat similar to the...
texblog.org
5. LaTeX/Floats, Figures and Captions - Wikibooks, open books for an open world
• The lineheight is expressed as the number of lines of text the figure spans. LaTeX will automatically calculate the value if this option is left blank but this can result in
• subcaption will arrange the figures or tables side-by-side providing they can fit, otherwise, it will automatically shift subfloats below.
en.wikibooks.org
6. Placing figures/tables side-by-side (\subfigure) – texblog
• because LaTeX matters. Placing figures/tables side-by-side (\subfigure). 28. August 2007 by tom 34 Comments.
• Is it possible to have a figure aligned to the side and have text wrap around it? i.e. I want to have a figure on the left, but instead of another figure on the right half, I want the main body...
texblog.org
7. latex text figure side by side
• This is useful for setting Latin text side-by-side with Slavonic in some .... Figure 1: Cyrillic text from the Ostromir Gospels (11th century). 1 ..... distributed as Indyction UCS as part of CSLTeX, licensed under the LATEX Project.
zhetysu-gov.kz
8. graphics - Putting Figures Side-By-Side Using Minipage - TeX - LaTeX Stack Exchange
• setting for figures side by side. 500. How to influence the position of float environments like figure and table in LaTeX? 257.
• Two figures side by side with text wrapping. 2. How to left align footnotes under tabularx in a minipage. 26.
tex.stackexchange.com
9. horizontal alignment - side by side minipage figures - TeX - LaTeX Stack Exchange
In B) the images have now enough space since 2in <.45\textwidth, the minipages fill the text width but not the images; the image in the second minipage is typeset starting the minipage so you will have a white space of width .45\textwidth-2in (you can verify this using \fbox around each minipage setting...
tex.stackexchange.com
10. graphics - Two figures side by side with text wrapping - TeX - LaTeX Stack Exchange
You have to carefully adjust the dimensions of your image and the number of lines the wrap figure will extend ( for example [10] in \begin{wrapfigure}[10]{r}{5.5cm}).
tex.stackexchange.com | 2017-11-19 12:40:15 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9786309003829956, "perplexity": 5918.98679487761}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805578.23/warc/CC-MAIN-20171119115102-20171119135102-00365.warc.gz"} |
http://qudt.org/vocab/quantitykind/SpecificHeatCapacity | quantitykind:SpecificHeatCapacity
Type
Description
Properties
$$L^2/\Theta \cdot T^2$$
$$m^2/K \cdot s^2$$
"Specific Heat Capacity} of a solid or liquid is defined as the heat required to raise unit mass of substance by one degree of temperature. This is \textit{Heat Capacity} divied by \textit{Mass". Note that there are corresponding molar quantities.
Annotations
Specific Heat Capacity(en)
Generated 2021-09-16T16:15:59.467-07:00 by lmdoc version 1.1 with TopBraid SPARQL Web Pages (SWP) | 2021-09-23 15:35:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8221524357795715, "perplexity": 5505.8699180299545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00308.warc.gz"} |
https://www.physicsforums.com/threads/probability-density-function-over-a-sphere.708731/ | # Probability Density Function over a Sphere
1. Sep 4, 2013
### tricha122
i am having an issue finding the variational average of a stiffness matrix.
The stiffness matrix is of a unidirectional composite, and a function of phi, theta.
I have some cut-up data that shows that the fibre orientation follows a gaussian curve centered about 22 degrees (phi direction), theta is axisymmetric. Therefore, the probability density function has phi on the x-axis, and f(phi) on the y-axis.
Therefore, i have a probability density line that i am using to integrate over a sphere using the following (over a hemisphere, due to symmetry):
C′=∫∫C(ϕ,θ)P(ϕ)sin(ϕ)dϕdθ where the first integral is evaluated from 0->pi/2, and second from 0->2pi
P(ϕ) is the probability density "line". I normalized this line such that the area underneath the revolved surface about the X1 axis is 1.
This method however is giving me results that don't make sense - the values are all too small. I feel like there is an error with either the fact that i am using a cartesian curve while integrating over a sphere, or with the normalized area of the probability density function.
Anyone have any idea why this might be happening, or other ideas on how to approach the problem?
Any help would be greatly appreciated.
Thanks! | 2018-02-20 22:10:27 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8957682847976685, "perplexity": 547.2315006162451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813109.36/warc/CC-MAIN-20180220204917-20180220224917-00533.warc.gz"} |
https://wikimili.com/en/2-group | # 2-group
Last updated
In mathematics, a 2-group, or 2-dimensional higher group, is a certain combination of group and groupoid. The 2-groups are part of a larger hierarchy of n-groups. In some of the literature, 2-groups are also called gr-categories or groupal groupoids.
## Definition
A 2-group is a monoidal category G in which every morphism is invertible and every object has a weak inverse. (Here, a weak inverse of an object x is an object y such that xy and yx are both isomorphic to the unit object.)
## Strict 2-groups
Much of the literature focuses on strict 2-groups. A strict 2-group is a strict monoidal category in which every morphism is invertible and every object has a strict inverse (so that xy and yx are actually equal to the unit object).
A strict 2-group is a group object in a category of categories; as such, they are also called groupal categories. Conversely, a strict 2-group is a category object in the category of groups; as such, they are also called categorical groups. They can also be identified with crossed modules, and are most often studied in that form. Thus, 2-groups in general can be seen as a weakening of crossed modules.
Every 2-group is equivalent to a strict 2-group, although this can't be done coherently: it doesn't extend to 2-group homomorphisms.
## Properties
Weak inverses can always be assigned coherently: one can define a functor on any 2-group G that assigns a weak inverse to each object and makes that object an adjoint equivalence in the monoidal category G.
Given a bicategory B and an object x of B, there is an automorphism 2-group of x in B, written AutB(x). The objects are the automorphisms of x, with multiplication given by composition, and the morphisms are the invertible 2-morphisms between these. If B is a 2-groupoid (so all objects and morphisms are weakly invertible) and x is its only object, then AutB(x) is the only data left in B. Thus, 2-groups may be identified with one-object 2-groupoids, much as groups may be identified with one-object groupoids and monoidal categories may be identified with one-object bicategories.
If G is a strict 2-group, then the objects of G form a group, called the underlying group of G and written G0. This will not work for arbitrary 2-groups; however, if one identifies isomorphic objects, then the equivalence classes form a group, called the fundamental group of G and written π1(G). (Note that even for a strict 2-group, the fundamental group will only be a quotient group of the underlying group.)
As a monoidal category, any 2-group G has a unit object IG. The automorphism group of IG is an abelian group by the Eckmann–Hilton argument, written Aut(IG) or π2(G).
The fundamental group of G acts on either side of π2(G), and the associator of G (as a monoidal category) defines an element of the cohomology group H31(G),π2(G)). In fact, 2-groups are classified in this way: given a group π1, an abelian group π2, a group action of π1 on π2, and an element of H312), there is a unique (up to equivalence) 2-group G with π1(G) isomorphic to π1, π2(G) isomorphic to π2, and the other data corresponding.
The element of H312) associated to a 2-group is sometimes called its Sinh invariant, as it was developed by Grothendieck's student Hoàng Xuân Sính.
## Fundamental 2-group
Given a topological space X and a point x in that space, there is a fundamental 2-group of X at x, written Π2(X,x). As a monoidal category, the objects are loops at x, with multiplication given by concatenation, and the morphisms are basepoint-preserving homotopies between loops, with these morphisms identified if they are themselves homotopic.
Conversely, given any 2-group G, one can find a unique (up to weak homotopy equivalence) pointed connected space (X,x) whose fundamental 2-group is G and whose homotopy groups πn are trivial for n > 2. In this way, 2-groups classify pointed connected weak homotopy 2-types. This is a generalisation of the construction of Eilenberg–Mac Lane spaces.
If X is a topological space with basepoint x, then the fundamental group of X at x is the same as the fundamental group of the fundamental 2-group of X at x; that is,
${\displaystyle \pi _{1}(X,x)=\pi _{1}(\Pi _{2}(X,x)).\!}$
This fact is the origin of the term "fundamental" in both of its 2-group instances.
Similarly,
${\displaystyle \pi _{2}(X,x)=\pi _{2}(\Pi _{2}(X,x)).\!}$
Thus, both the first and second homotopy groups of a space are contained within its fundamental 2-group. As this 2-group also defines an action of π1(X,x) on π2(X,x) and an element of the cohomology group H31(X,x),π2(X,x)), this is precisely the data needed to form the Postnikov tower of X if X is a pointed connected homotopy 2-type.
## Related Research Articles
In the mathematical field of algebraic topology, the fundamental group of a topological space is the group of the equivalence classes under homotopy of the loops contained in the space. It records information about the basic shape, or holes, of the topological space. The fundamental group is the first and simplest homotopy group. The fundamental group is a homotopy invariant—topological spaces that are homotopy equivalent have isomorphic fundamental groups.
In mathematics, especially in category theory and homotopy theory, a groupoid generalises the notion of group in several equivalent ways. A groupoid can be seen as a:
In mathematics, a category is a collection of "objects" that are linked by "arrows". A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions.
In mathematics, specifically algebraic topology, a covering map is a continuous function from a topological space to a topological space such that each point in has an open neighborhood evenly covered by . In this case, is called a covering space and the base space of the covering projection. The definition implies that every covering map is a local homeomorphism.
The following outline is provided as an overview of and guide to category theory, the area of study in mathematics that examines in an abstract way the properties of particular mathematical concepts, by formalising them as collections of objects and arrows, where these collections satisfy certain basic conditions. Many significant areas of mathematics can be formalised as categories, and the use of category theory allows many intricate and subtle mathematical results in these fields to be stated, and proved, in a much simpler way than without the use of categories.
In mathematics, localization of a category consists of adding to a category inverse morphisms for some collection of morphisms, constraining them to become isomorphisms. This is formally similar to the process of localization of a ring; it in general makes objects isomorphic that were not so before. In homotopy theory, for example, there are many examples of mappings that are invertible up to homotopy; and so large classes of homotopy equivalent spaces. Calculus of fractions is another name for working in a localized category.
In mathematics, a gerbe is a construct in homological algebra and topology. Gerbes were introduced by Jean Giraud following ideas of Alexandre Grothendieck as a tool for non-commutative cohomology in degree 2. They can be seen as an analogue of fibre bundles where the fibre is the classifying stack of a group. Gerbes provide a convenient, if highly abstract, language for dealing with many types of deformation questions especially in modern algebraic geometry. In addition, special cases of gerbes have been used more recently in differential topology and differential geometry to give alternative descriptions to certain cohomology classes and additional structures attached to them.
In mathematics, and especially in homotopy theory, a crossed module consists of groups G and H, where G acts on H by automorphisms (which we will write on the left, , and a homomorphism of groups
In mathematics, the homotopy category is a category built from the category of topological spaces which in a sense identifies two spaces that have the same shape. The phrase is in fact used for two different categories, as discussed below.
This is a glossary of properties and concepts in category theory in mathematics.
The étale or algebraic fundamental group is an analogue in algebraic geometry, for schemes, of the usual fundamental group of topological spaces.
In mathematics, the Whitehead product is a graded quasi-Lie algebra structure on the homotopy groups of a space. It was defined by J. H. C. Whitehead in.
In mathematics, especially in the area of topology known as algebraic topology, an induced homomorphism is a homomorphism derived in a canonical way from another map. For example, a continuous map from a topological space X to a space Y induces a group homomorphism from the fundamental group of X to the fundamental group of Y.
In mathematics, an n-group, or n-dimensional higher group, is a special kind of n-category that generalises the concept of group to higher-dimensional algebra. Here, may be any natural number or infinity. The thesis of Alexander Grothendieck's student Hoàng Xuân Sính was an in-depth study of 2-groups under the moniker 'gr-category'.
In mathematics, a weak equivalence is a notion from homotopy theory that in some sense identifies objects that have the same "shape". This notion is formalized in the axiomatic definition of a model category.
In category theory, a branch of mathematics, an ∞-groupoid is an abstract homotopical model for topological spaces. One model uses Kan complexes which are fibrant objects in the category of simplicial sets. It is an ∞-category generalization of a groupoid, a category in which every morphism is an isomorphism.
This is a glossary of properties and concepts in algebraic topology in mathematics.
In algebraic topology, the fundamental groupoid is a certain topological invariant of a topological space. It can be viewed as an extension of the more widely-known fundamental group; as such, it captures information about the homotopy type of a topological space. In terms of category theory, the fundamental groupoid is a certain functor from the category of topological spaces to the category of groupoids.
[...] In certain situations it is much more elegant, even indispensable for understanding something, to work with fundamental groupoids ... people still obstinately persist, when calculating with fundamental groups, in fixing a single base point, instead of cleverly choosing a whole packet of points which is invariant under the symmetries of the situation, which thus get lost on the way. In certain situations (such as descent theorems for fundamental groups `a la Van Kampen Theorem it is much more elegant, even indispensable for understanding something, to work with fundamental groupoids with respect to a suitable packet of base points, [,,,]
In mathematics, homotopy theory is a systematic study of situations in which maps come with homotopies between them. It originated as a topic in algebraic topology but nowadays is studied as an independent discipline. Besides algebraic topology, the theory has also been used in other areas of mathematics such as algebraic geometry and category theory.
In Mathematics, an Abelian 2-group is a higher dimensional analogue of an Abelian group, in the sense of higher algebra, which were originally introduced by Alexander Grothendieck while studying abstract structures surrounding Abelian varieties and Picard groups. More concretely, they are given by groupoids which have a bifunctor which acts formally like the addition an Abelian group. Namely, the bifunctor has a notion of commutativity, associativity, and an identity structure. Although this seems like a rather lofty and abstract structure, there are several examples of Abelian 2-groups. In fact, some of which provide prototypes for more complex examples of higher algebraic structures, such as Abelian n-groups. | 2021-09-22 10:28:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8660386800765991, "perplexity": 350.0857193365036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057347.80/warc/CC-MAIN-20210922102402-20210922132402-00641.warc.gz"} |
https://easystats.github.io/see/reference/plot.see_parameters_model.html | The plot() method for the parameters::model_parameters() function.
## Usage
# S3 method for see_parameters_model
plot(
x,
show_intercept = FALSE,
size_point = 0.8,
size_text = NA,
sort = NULL,
n_columns = NULL,
type = c("forest", "funnel"),
weight_points = TRUE,
show_labels = FALSE,
show_estimate = TRUE,
show_interval = TRUE,
show_density = FALSE,
log_scale = FALSE,
...
)
# S3 method for see_parameters_sem
plot(
x,
data = NULL,
type = component,
threshold_coefficient = NULL,
threshold_p = NULL,
ci = TRUE,
size_point = 22,
...
)
## Arguments
x
An object.
show_intercept
Logical, if TRUE, the intercept-parameter is included in the plot. By default, it is hidden because in many cases the intercept-parameter has a posterior distribution on a very different location, so density curves of posterior distributions for other parameters are hardly visible.
size_point
Numeric specifying size of point-geoms.
size_text
Numeric value specifying size of text labels.
sort
The behavior of this argument depends on the plotting contexts.
• Plotting model parameters: If NULL, coefficients are plotted in the order as they appear in the summary. Setting sort = "ascending" or sort = "descending" sorts coefficients in ascending or descending order, respectively. Setting sort = TRUE is the same as sort = "ascending".
• Plotting Bayes factors: Sort pie-slices by posterior probability (descending)?
n_columns
For models with multiple components (like fixed and random, count and zero-inflated), defines the number of columns for the panel-layout. If NULL, a single, integrated plot is shown.
type
Character indicating the type of plot. Only applies for model parameters from meta-analysis objects (e.g. metafor).
weight_points
Logical. If TRUE, for meta-analysis objects, point size will be adjusted according to the study-weights.
show_labels
Logical. If TRUE, text labels are displayed.
show_estimate
Should the point estimate of each parameter be shown? (default: TRUE)
show_interval
Should the compatibility interval(s) of each parameter be shown? (default: TRUE)
show_density
Should the compatibility density (i.e., posterior, bootstrap, or confidence density) of each parameter be shown? (default: FALSE)
log_scale
Should exponentiated coefficients (e.g., odds-ratios) be plotted on a log scale? (default: FALSE)
...
Arguments passed to or from other methods.
data
The original data used to create this object. Can be a statistical model.
component
Character indicating which component of the model should be plotted.
threshold_coefficient
Numeric, threshold at which value coefficients will be displayed.
threshold_p
Numeric, threshold at which value p-values will be displayed.
ci
Logical, whether confidence intervals should be added to the plot.
## Value
A ggplot2-object.
## Examples
library(parameters)
m <- lm(mpg ~ wt + cyl + gear + disp, data = mtcars)
result <- model_parameters(m)
result
#> Parameter | Coefficient | SE | 95% CI | t(27) | p
#> ------------------------------------------------------------------
#> (Intercept) | 43.54 | 4.86 | [33.57, 53.51] | 8.96 | < .001
#> wt | -3.79 | 1.08 | [-6.01, -1.57] | -3.51 | 0.002
#> cyl | -1.78 | 0.61 | [-3.04, -0.52] | -2.91 | 0.007
#> gear | -0.49 | 0.79 | [-2.11, 1.13] | -0.62 | 0.540
#> disp | 6.94e-03 | 0.01 | [-0.02, 0.03] | 0.58 | 0.568
#>
#> Uncertainty intervals (equal-tailed) and p-values (two-tailed) computed
#> using a Wald t-distribution approximation.
plot(result) | 2022-07-05 22:19:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2913379371166229, "perplexity": 8961.28060633527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104628307.87/warc/CC-MAIN-20220705205356-20220705235356-00331.warc.gz"} |
https://ellabora.com/a/20220514/454870.html | ">Newsnet 2022-05-22 15:37
• home > /闀挎不杩借棌鏍忕編鏈伐浣滃 > quotes about learning
• 2022 『quotes about learning』 闃滈槼鐤ゅ埈闊編鏈伐浣滃-ellabora.com
x efQ j g
motorex4lk Db A U L
\begin{align} \BS(A) &=3 \times 8 + 2 \times 0 + 1 \times 0 + 0 \times 13=24 \\ \BS(B) &=3 \times 7 + 2 \times 9 + 1 \times 5 + 0 \times 0=44 \\ \BS(C) &=3 \times 6 + 2 \times 5 + 1 \times 10 + 0 \times 0=38 \\ \BS(D) &=3 \times 0 + 2 \times 7 + 1 \times 6 + 0 \times 8=20 \end{align} V cGfq P
f L Wo JG
M Q
A dh
dO Q b E Q2021052401034344
r bi
n pr Providing feedback – When giving an assignment, its best to inform the students when they can expect feedback from you and what kind of feedback they can expect. As with your communication plan, it’s best to be realistic in terms of your turnaround time. Feedback can be given in written form, but with tools like screencapture, you can provide video feedback of student work.P m G
p xS
g6lwf
C aO How often you will communicate with your students: managing your communication load will be important as students may begin reaching out to you individually. It is important at the outset to let students know how quickly they can expect a response. In a crisis situation, students may grow anxious if they don’t hear back immediately. You may want to set an automatic reply in your Fordham Gmail account that reassures the students that you have received their message and you will get back to them in whatever span of time you deem realistic and appropriate for your capacity and their needs.livescore+app+for+androidh aj
vA K yS T
r H PRX P
V X eo cC
G o j
U Xwho A
Tezos (XTZ)LooksX2Y2Audius b v c
z yW H a d
J PW
See Troubleshooting Buffer Leaks for detailed guidelines and examples. Xv J
\$10NQ G18650 battery flat top rechargeable w JB
a u ir ok
zyfi7 The winners according to 1-Approval Voting (which is the same as Plurality Rule) are $$A$$ and $$B.$$ The winner according 2-Approval Voting is $$D.$$ The winners according to 3-Approval Voting are $$A$$ and $$B.$$ L SU
WC Ugram scales walmart01nut
t D q Mi R
a V J
NB N
N N cN Lo
fM f
W vftG K
o I s
HT P
b i ku nL
vasomotor+rhinitis+surgeryQ c lb Qu
si Z Content Writing j Y e
P UHThe official correspondent is responsible for the registration and listing information for each establishment to which he/she is assigned.gL Z
o eI
Cryptopia (gegen Bitcoin, Litecoin, Dogecoin und Tether) Binance (gegen BNB, Bitcoin, Ethereum und Tether) lolhkanN C j O WW
Tw n
J F vIf you’re going to be working remotely as a freelancer, you will need a reliable Internet connection.S uX
I Q H
yQ k o L L
description writing
• 9- 8- 7- 6- 5- 4- 3- 2- 1
深圳申瑞医疗有限公司是一家拥有10年设计生产制造经验的家用医疗器械源头厂家,【助听器】【胎心仪】【测温仪】等热销产品畅销国内外,可预约来厂参观,现招全国渠道代理。
顾客服务热线
13420612385
[email protected]
# 2022 『quotes about learning』 姹熼棬閽掑績姘湁闄愬叕鍙?ellabora.com
品牌型号:邦力健U3-02
执行标准号:粤械注准20182230732
注册证号:粤械注准20182230732
OEM定制:量大可接受logo定制,英文说明书
选购热点:相关算法 数字显示 电池款
累计销量:
平均售价:
## 客户评价
来自长宁的买家评价:
胎心仪包装很结实,发货快收到后立马就用上了,好用,随时能听到宝宝的心跳声,好激动
来自通州的买家评价:
用了几次,非常不错,宝宝30周了,能听到宝宝心跳声音,经济实惠质量也好,值得推荐。
来自佛山的买家评价:
刚起床就听了一会 孕妈们可以下单一台放家里没事的时候就可以拿来听听就不用整天往医院跑了
来自珠海的买家评价:
到家立马试了??没有辜负期望!??甚至比期望值高???听到宝贝的心跳声???真的很满足;??赞 胎心仪非常好用;??价格也便宜??性价比高,??是玻璃心妈妈很好的选择 很不错呢前经常担心宝宝!??买了之后担心就听下胎心??放心不少???
来自密云的买家评价:
这个店的物流速度挺快,产品的质量也能够得到保障,推荐购买
来自黄浦的买家评价:
这个是给姐姐买的胎心监护仪,之前自己怀孕的时候也是买的这个品牌,只是那时候价格比较便宜,现在价格普遍贵上去了不少,希望一如既往的好用
来自云浮的买家评价:
已经使用了几天了,还行,宝宝的胎心位置挺好找的
来自江门的买家评价:
开始找不到胎心跳动位置,客服小姐姐很耐心讲解,最后听到了,好开心,机器质量不错不错!
## 还想了解
• ### 胎心仪的用法,如何正确使用胎心仪
... ...More>>
• ### 胎心仪有副作用吗?如何选购胎心仪
... ...More>>
• ### 7个误区教您如何正确看待胎心仪
... ...More>>
## U3-02超声多普勒胎儿心率仪
以上就是深圳胎心监护仪厂家为您提供U3-02超声多普勒胎儿心率仪的介绍: http://www-hyodm-com.ellabora.com/Products/taixinyi/80.html 更多相关产品请前往胎心监护仪
?
白菜网送彩金8---88 开户存1元送体验金 澳洲幸运10最信誉平台 狗万最新版 澳门最大的赌场 找一个极速赛车微信群 诚信在线赌博 望都体彩店 极速赛车实力公众号投注群 澳门威尼斯人电子游艺 竞彩推荐 捷报比分网 澳门金沙亚洲288x 葡京网投app pc蛋蛋幸运28微信群公众号 必赢集团 买nba的球 新皇冠体育客户端下载 BOB电竞体育下载链接APP 幸运飞艇赛车微信群 吉祥登录网站 | 2022-05-22 07:37:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997720122337341, "perplexity": 14713.988837839746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00500.warc.gz"} |
https://math.stackexchange.com/questions/2902696/which-of-the-following-function-are-reimann-integrable-on-the-interval-0-1?noredirect=1 | # which of the following function are reimann integrable on the interval $[0,1].$? [duplicate]
which of the following function are reimann integrable on the interval $[0,1].$?
$1)$ $f(x) =\begin{cases} 1, &\text{if x is rational }\\ 0, &\text{if x is irrational } \end{cases}$
$2)$ $f(x) =\begin{cases} 1, &\text{if x } \in \{\alpha_1,\alpha_2,.......,\alpha_n\}\\ 0, &\text{otherwise } \end{cases}$
i know that option $1)$ will not reimann integrable because it is not bounded.
im confused about option $2)$
Any hints/solution
## marked as duplicate by Jyrki Lahtonen, Adrian Keister, Arnaud D., Paul Frost, user99914 Sep 3 '18 at 17:04
• Strange..!! (1) is not bounded? – Empty Sep 2 '18 at 12:56
• (2) There are finite number of discontinuities. So Riemann integrable – Empty Sep 2 '18 at 12:57
• your question is already has an answer here – Chinnapparaj R Sep 2 '18 at 13:01
• Thanks U @ChinnapparajR – Messi fifa Sep 2 '18 at 13:04
• – Qmechanic Sep 2 '18 at 13:30
Clearly, for (1), f is bounded, since $|f| \leq 1$, so your reasoning is incorrect.
Hint: Take an arbitrary partition of [0,1] and show that $U(f,P) - L(f,P)$ can not be made smaller than $1$.
Alternatively, you can notice that the set of discontinuities does not have measure 0 ($f$ is discontinuous everywhere) | 2019-09-21 11:36:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6740859746932983, "perplexity": 749.0062807015831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574409.16/warc/CC-MAIN-20190921104758-20190921130758-00187.warc.gz"} |
https://www.studyadda.com/sample-papers/rrb-assistant-loco-pilot-technician-sample-test-paper-6_q80/156/266810 | • # question_answer The difference between the ages of Meena and Seema is 3 years and the ratio between their ages is $7:8.$ What is the sum of their ages? A) 43 B) 41 C) 45 D) 48
$\because$ $8\,\,\,\,\,\,7\equiv 3$ $\therefore 1\equiv 3$ $\therefore$ $7+8=15\equiv 15\,\,\,\,3=45$ years | 2022-01-19 10:47:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8870482444763184, "perplexity": 14639.728460156632}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00441.warc.gz"} |
https://mathoverflow.net/questions/314870/positive-real-root-separation-v2 | # Positive real root separation (v2)
(This is a follow-up question to Positive real root separation)
Let $$\beta\in(1,2)$$ and $$\gamma\in(1,2)$$ be Galois conjugates of height 1. That is, there exists a polynomial $$p$$ with coefficients $$-1,0,1$$ such that $$p(\beta)=p(\gamma)=0$$ (not necessarily minimal).
Additional assumption: assume $$\beta$$ and $$\gamma$$ are the only Galois conjugates of modulus $$>1$$.
Numerically, there appears to be an absolute constant $$C>0$$ such that $$|\gamma-\beta|\ge C$$. Is this true/known? If it is, what is the best known value for $$C$$?
Furthermore, is it true that if the degree $$d$$ of $$\beta$$ is large, then $$\min\{\gamma,\beta\}<1+\varepsilon$$ and $$\max\{\gamma,\beta\}>\frac{1+\sqrt5}2-\varepsilon$$ with $$\varepsilon\to0$$ as $$d\to\infty$$?
## This question has an open bounty worth +50 reputation from Nikita Sidorov ending in 5 days.
This question has not received enough attention.
• Sorry, Peter, I don't understand your comment. $f$ has two roots outside the unit disc, whence $f_n$ has $2n$ such roots. We only allow two. – Nikita Sidorov Nov 8 at 21:40
Short Answer: the polynomials $$P_{2n+1}(x) = x^{2n+1}(x^8 - x^7 - x^6 + x^4 - x^3 + x + 1) - (x^8 + x^7 - x^5 + x^4 - x^2 - x + 1)$$ for $$n \ge 7$$ should have an irreducible factor with exactly two roots $$\alpha_n$$ and $$\beta_n$$ of modulus greater than $$1$$, and $$\alpha_n - \beta_n$$ is exponentially converging to zero. The irreducibility of the non-cyclotomic factor is a consequence of Lehmer's conjecture, but can probably be established by direct elementary means if one wished to do so.
This construction is a little elaborate, so I give details. Let
$$f(x) = x^8 - x^7 - x^6 + x^4 - x^3 + x + 1$$
This polynomial is carefully chosen so that it has the following properties. First, it is non-reciprocal, that is, it is distinct from
$$f^*(x):= x^8 f(1/x) = x^8 + x^7 - x^5 + x^4 - x^2 - x + 1.$$
Second, it factors into a product of cyclotomic polynomials times the square of an irreducible polynomial whose root is a Pisot number, that is,
$$f(x) = (x^2 - x + 1)(x^3 - x - 1)^2.$$
The first polynomial is cyclotomic, the second has a unique root
$$\alpha \sim 1.32472\ldots$$
of absolute value greater than one and two complex conjugate roots inside the unit circle. Now let
$$P_n(x) = f(x) x^n - f^*(x) = f^*(x) \left( \frac{f(x)}{f^*(x)} x^n - 1 \right),$$
For $$n > 8$$, this is a polynomial with coefficients either $$-1$$, $$0$$, or $$1$$. An argument (due to David Boyd in the 70s, see his Duke paper) shows that this construction gives a polynomial with precisely $$2$$ roots outside the unit circle for $$n$$ large enough ($$n \ge 15$$ will suffice in this case). Moreover, the two roots $$\alpha_n$$ and $$\beta_n$$ will have the property that
$$\lim_{n \rightarrow \infty} \alpha_n = \lim_{n \rightarrow \infty} \beta_n = \alpha,$$
so in particularly $$|\alpha_n - \beta_n|$$ is not bounded below, answering your question in the negative. In fact, the difference converges to zero very fast, the difference is of the order $$\alpha^{-n/2}$$ up to some constant.
It remains to consider irreducibility up to cyclotomic factors. Actually, when $$n$$ is even, the polynomial is reducible, because --- up to a factor of $$(x^2-x + 1)$$, it is a difference of two squares, since
$$P_n(x) = (x^2 - x + 1)((x^3 - x - 1)^2 x^n - (x^3 + x^2 - 1)^2).$$
So we want to concentrate on $$P_{2n+1}(x)$$. For convenience, write $$P_n(x) = (x^2 - x + 1) Q_n(x)$$. I suspect that $$Q_n(x)$$ (and so $$P_n(x)$$) is irreducible for all odd $$n$$ up to cyclotomic factors (which will only depend on $$n \bmod 30$$ by a non-trivial but somewhat standard computation related to vanishing sums of roots of unity), but this may well be tedious to prove. It is not immediately apparent how to do this, but I haven't spent too long trying to do so. Hopefully you will be content with a proof of irreducibility assuming Lehmer's conjecture that any polynomial has Mahler measure at least
$$\eta = 1.17628\ldots$$
where $$\eta$$ is a root of Lehmer's degree $$10$$ polynomial
$$x^{10} + x^{9} - x^{7} - x^{6} - x^{5} - x^{4} - x^{3} + x + 1 = 0.$$
Any non-cyclotomic monic polynomial has at least one root of absolute value greater than one by Kronecker. For any $$n$$, there are at most two non-cyclotomic factors of $$Q_n(x)$$, since there are only two roots of absolute value more than $$1$$. Write $$Q_{2n+1}(x) = A(x) B(x) \Phi(x)$$ where $$\Phi(x)$$ is the cyclotomic factor and $$A(x)$$ and $$B(x)$$ are irreducible. Then
$$\Phi(x^2) A(x^2) B(x^2) =((x^6 - x^2 - 1)^2 x^{4n+2} - (x^6 + x^4 - 1)^2)$$ $$= ((x^6 - x^2 - 1) x^{2n+1} - (x^6 + x^4 - 1))((x^6 - x^2 - 1) x^{2n+1} + (x^6 + x^4 - 1)).$$ $$= - R(x) R(-x).$$
Any common factor of $$R(x)$$ and $$R(-x)$$ must divide their sum and their difference, but
$$((x^6 - x^2 - 1) x^{2n+1}, x^6 + x^4 - 1) = 1,$$
so they have no common factor. Note that $$A(x^2)$$ and $$B(x^2)$$ cannot have cyclotomic factors (although they may be reducible). Suppose that $$A(x^2)$$ was irreducible. Then if $$A(x^2) | R(x)$$, then $$A(x^2) = A((-x)^2)$$ would divide $$R(-x)$$, which is a contradiction. Hence both $$A(x^2)$$ and $$B(x^2)$$ must be reducible. Thus $$P_{2n+1}(x^2)$$ has four irreducible factors which are not cyclotomic. But the Mahler measure of $$P_{2n+1}$$ is very close to $$\alpha^2$$, so the Mahler measure of at least one of the four factors is bounded above by a factor which is very close to
$$\alpha^{1/2} \sim 1.150963\ldots < 1.17628\ldots = \eta,$$
where $$\eta$$ is Lehmer's number. But this would contradict Lehmer's conjecture. Hence Lehmer's conjecture implies that the non-cyclotomic part of $$P_{2n+1}(x)$$ is irreducible.
New contributor
user131093 is a new contributor to this site. Take care in asking for clarification, commenting, and answering. Check out our Code of Conduct. | 2018-11-13 05:42:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 80, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9321447014808655, "perplexity": 125.28547587226602}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741219.9/warc/CC-MAIN-20181113041552-20181113063552-00367.warc.gz"} |
http://math.stackexchange.com/questions/215242/computational-complexity-proof/215244 | Computational complexity proof
I would like to know how to prove the following:
$2^n \in O(n!)$
I know that I have to show that for a constant C, we have $2^n \leq C*n!$
Right?
-
HINT
Prove that $2^n \leq n!$ for $n \geq 4$. The proof follows immediately from induction.
-
Ah ok, you advice to try base case with n=4, and to prove by induction. – eouti Oct 16 '12 at 23:48
@eouti Yes. Or if you want your base case to be $1$, prove that $2^n \leq 2 n!$ for all $n \geq 0$. – user17762 Oct 16 '12 at 23:49
Not quite right. That would certainly be sufficient, but it’s not necessary: the definition only requires you to find $C>0$ and $m\in\Bbb N$ such that $2^n\le Cn!$ for all $n\ge m$, and there are many pairs $C,m$ that work.
For instance, note that $2^4=16<24=4!$. Now prove by induction that $2^n<n!$ for all $n\ge 4$, and you can use $C=1,m=4$.
Alternatively, prove by induction that $2^n\le2n!$ for all $n\ge 0$, and you can use $C=2,m=0$.
-
In fact it is easy to show the stronger result, that, $2^n = o(n!)$.
- | 2016-05-05 03:52:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9498345255851746, "perplexity": 105.00362894279361}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125857.44/warc/CC-MAIN-20160428161525-00210-ip-10-239-7-51.ec2.internal.warc.gz"} |
https://www.eng-tips.com/viewthread.cfm?qid=434362 | ×
INTELLIGENT WORK FORUMS
FOR ENGINEERING PROFESSIONALS
Are you an
Engineering professional?
Join Eng-Tips Forums!
• Talk With Other Members
• Be Notified Of Responses
• Keyword Search
Favorite Forums
• Automated Signatures
• Best Of All, It's Free!
*Eng-Tips's functionality depends on members receiving e-mail. By joining you are opting in to receive e-mail.
#### Posting Guidelines
Promoting, selling, recruiting, coursework and thesis posting is forbidden.
# Product Numbers, Drawing Numbers and Part Numbers oh my!
## Product Numbers, Drawing Numbers and Part Numbers oh my!
(OP)
I work for a small company and in the past most of the engineering work was outsourced to supplement my teams "capacity"
as such we had a consultant early on laying the foundation, part of his influence on the board is part numbers.
He belongs to the intelligent part numbering community and has set up a lot of really bad numbering systems in my opinion both in older documents and in our MRP system.
We are currently in design of a huge new project and I am trying to get out in front of him before he poisons us again (the management team really trusts him)
I wanted to see what the opinions on here were regarding:
1) Drawing and Part numbers need to be separate numbers, generally in the past I would make the drawing for a part or assembly ABC123 and the parts belonging to that drawing ABC123-XX-YYYY
right now the drawings are AB123456 and the part or parts in that drawing are the same number just have a description annotation
2) product numbers live in the same part number sheet as everything that goes into the part
a screw for example is AB987654 and the product that uses the screw is AB100001
right now we are going to start getting into customization of products and I think this numbering scheme stops us cold
3) dash/suffix levels, in the past I used an assembly ABC123-00-0000 as the top level assembly in drawing ABC123, then that assembly had subassemblies in it like ABC123-01-000 and ABC123-02-000 and they had parts inside them ABC-01-001,-002.... and so on
the "intelligent" numbering system in our work instructions do not allow for this as we are limited to a single 3 digit suffix
Thoughts?
Am I crazy?
Thanks
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
You're not crazy, and you're not alone.
If any fields in your documents, e.g. single 3 digit suffix, are limited to just a few characters, you need to get that fixed right away.
Your IT people will probably not like that, because they are used to fixed length fields,
and changing the maximum field length is a huge hassle for them because
every function of every one of their stupid systems is a separate program,
and they have no internal documentation of how any of it works,
or is intended to work, and no record of which idiot wrote any part of it.
First place I worked after college adopted a numbering system comprising:
- four alphanumeric characters encoding decade|year|division|company
dash
- four numeric digits encoding which part corresponds to a prototype
dash
- several alphanumeric characters encoding modifications/revisions/whatever
The original prototype was a Model T.
Since then, their products have become more complex, and they have had to add digits to all fields. Their original smart numbering system became overwhelmed by entropy pretty quickly.
One place I worked sold Spanish motorcycles, which used a similar part numbering system.
Parts were stored alphabetically, by the second field, not the first. Took a while to get used to that.
One place I worked had several numbering systems, because it was easier to add an entire new field than to make existing fields longer. ... and their Marketing department had another entire set of part numbers because the IT department was completely uncooperative and the engineering department was so confused.
Last place I worked was an engine dealer who also manufactured stuff.
Their part numbers for purchased items were the same as the actual manufacturer's
part numbers, but modified at first entry time (by a low wage clerk with no product knowledge)
because their IT systems couldn't accept all characters in all fields.
They had multiple inventories of the same part under different numbers.
Their IT system could find a part by searching on a description,
then could not accept the found part number when entered directly.
IN SUMMARY, part numbers are a huge mess, globally,
only partly because of decisions made by, e.g., IBM, when punchcards were involved and memory was expensive,
and partly because early influencers were arrogant enough to think that a smart numbering system could work indefinitely and be useful to all interested parties.
Units of measure deserve a rant all their own, so I won't get started on that here.
Mike Halloran
Pembroke Pines, FL, USA
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
(OP)
Glad I am not the only one. Fortunately I am in charge of the PLM system and the IT person who work on it, so it gives me some horsepower, BUT overcoming management perception....
Oh I know what you mean about Unit of measures... I took away our purchasing departments ability to add them, because I started seeing stuff like "10 FOOT LENGTH" and "1 Sheet"
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
Oh, damn, you had to open that can of worms, too.
For a PLM/MRP/whatever system to work, the math has to work, too.
Including for stuff like Loctite, where the amount added to a single fastener would be several microliters, but it comes in 3oz bottles, or at a better price, 12oz bottles.
Plus, for adhesives and stuff, you have to also manage expiration dates,
which typically requires an in-house labelmaker and a record of purchase date, etc.
So you might logically add fields for:
unit of use, e.g. ul
quantity of use, e.g. 3
unit of issue, e.g. bottle
size of unit of issue, e.g. 3 oz
unit of purchase, e.g. case/12
purchase date
expiration date
... and the complexity just keeps increasing.
And Top Management (pejorative) never wants to deal with complexity or details.
Mike Halloran
Pembroke Pines, FL, USA
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
MikeHalloran,
I am having a slow day here.
I am looking at a database that has a quantity column in which I can select units. Normally, this is a count. If I need string, wire or grommet strip, I can select length units, and I assume someone will put the required length of material into the assembly kit. The system seems workable to me.
Is there any reason why they cannot kit a 3oz bottle of thread-locker, and then put it back in stock once the assembly is done? It is then ready to be kitted again. At some point, the assembly worker or the stock clerk will observe that the bottle is empty and will not return it to stock. Unless you are monitoring an expensive or nasty liquid, I don't see a need for excessive control.
--
JHG
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
(OP)
Hi Drawoh
In the past I have put the thread locker on the router/traveler and the manufacturing floor just kanbans those type of adhesives/chemicals
OR you could calculate the worst case material condition for the bolt and nut and calculate the resulting volume in the threaded region and assume some overflow to add to the BOM... just joking
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
#### Quote:
you could calculate the worst case material condition for the bolt and nut and calculate the resulting volume in the threaded region and assume some overflow to add to the BOM... just joking
I was not joking about putting X microliters of Loctite on a BOM; I have done it.
It would be more critical to do so, for material planning, if it were a long lead item.
Since it's usually available locally, using a kanban can work as well.
Outfits like Fastbolt USA will be happy to help you with kanbans comprising internal local stock of stuff like Loctite and nuts and bolts.
... But then it gets complicated again. I wouldn't return a partly used bottle of Loctite to stock, just on general principles, and because it expires or could be contaminated.
<tangent>
I first ran into the problem of small quantities of use fifty-ish years ago, when I specified an o-ring in an assembly as a friction damper, assuming that the assembler would lubricate it with _something_ at assembly.
They didn't, and the assembly was too tight both to assemble and to operate, when they put it in dry.
It was a union shop, and the crew, rightly, wouldn't apply anything that didn't appear on the BOM.
So I got to originate and push through an ECN to correct my mistake of omission.
</tangent>
Hey, I said it was a can of worms.
Mike Halloran
Pembroke Pines, FL, USA
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
Hard to keep everybody happy. Had one service manager that wanted every subcomponent documented with correct quantity. So when we released a wheel sander I asked if he was okay with a drawing of a piece of grit, quantity 750,323
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
truckandbus,
On a wheel sander, the grit ought to be rather important. If I am applying the grit, I would want a drawing or a specification control. I would specify the mass of the grit, or the volume. Whose job was it to count the pieces of grit?
--
JHG
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
@drawoh - the wheel sander in this case is a box with an auger in it that deposits grit ahead of the drive wheels of a bus as it comes to a stop on a snowy or ice covered road so that when the bus starts moving again the wheels have some traction.
I had another situation where 'swipe cards' were sold to us in bulk - 100 cards per box, and each box was $100 The cards were consumed 1 per vehicle built per the BoM call out. The boxes were inventoried based on the piece count so a$100 box of parts was logged as $10,000 worth of inventory ### RE: Product Numbers, Drawing Numbers and Part Numbers oh my! That reminds me of a plaque in a grocery store where the$/item was around \$600 for AA cells. Turns out the 'item' unit was a case, and it wasn't Sam's Club.
Everyone has my sympathy for any numbering 'system' that has either limited fields or some amount of intelligence. The worst are the people who want the numbers to be orderly and provide some magic transference of knowledge, like that if the last two digits are both odd, that it's a weldment and other such nonsense.
My favorite Qty problem - A guy I worked with for a while mentioned that he worked at a home tools company and had developed a process, including testing, for retaining the rubber grips on rake handles. It was one drop of cyanoacrylate into the rubber grip and then slip it on. He calculated the amount required for the months long production run, about a quart, which was several hundred dollars. He gets a frantic blame call that they are out of adhesive after the first week. It turned out one assembler was unconvinced that one drop was enough, so he just poured some in to make sure. Since there was an interference fit, most of the adhesive just squeezed out and ran down the handles, making a big mess. They had used 10,000 rake's worth of adhesive on a few hundred now unsaleable rakes.
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
truckandbus,
I was in a couple of situations where we bought a COTS assembly, took it apart, modified some of the pieces, and then re-assembled it. My solution was to generate a specification control for the assembly, and then assign tabulated numbers to each part of the assembly. I called up the tabulated part numbers on my BOMs. The tabulated numbers pointed back to the specification control, which called up the manufacturer and part number to be ordered. When the unit came in, production could take it apart and store everything by the tabulated numbers.
--
JHG
### RE: Product Numbers, Drawing Numbers and Part Numbers oh my!
truckandbus,
Tabulation works on your cards too. My specification control document 123-456 calls up the box of 100 cards. The specification control identifies the part number 123-456-01, which is one card. When the parts bin is empty, production orders another box.
--
JHG
#### Red Flag This Post
Please let us know here why this post is inappropriate. Reasons such as off-topic, duplicates, flames, illegal, vulgar, or students posting their homework.
#### Red Flag Submitted
Thank you for helping keep Eng-Tips Forums free from inappropriate posts.
The Eng-Tips staff will check this out and take appropriate action.
Close Box
# Join Eng-Tips® Today!
Join your peers on the Internet's largest technical engineering professional community.
It's easy to join and it's free.
Here's Why Members Love Eng-Tips Forums:
• Talk To Other Members
• Notification Of Responses To Questions
• Favorite Forums One Click Access
• Keyword Search Of All Posts, And More...
Register now while it's still free! | 2018-02-20 07:53:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3144392669200897, "perplexity": 3584.749536682962}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812913.37/warc/CC-MAIN-20180220070423-20180220090423-00145.warc.gz"} |
https://www.tecklyfe.com/tag/windows-server-2008/ | ## Extend Volume Error “The Parameter Is Incorrect” In Windows Server
On occasion while extending a volume in Windows Server (2008, 2008 R2, 2012, 2012 R2, 2016), you may come across this obscure error that just says "The parameter is incorrect". When this happens, the Windows Disk Management utility shows the correct disk size, but the volume size in the utility and Windows Explorer are still [...]
By |2019-02-19T12:02:12-05:00September 11th, 2018|Categories: Windows||2 Comments
## Critical Windows 8 Patches Coming Dec 11th Patch Tuesday
In case you weren't aware, Microsoft tries to keep an update schedule when it comes to their security patches, which has been dubbed Patch Tuesday. This coming Tuesday will see one of the first major security updates for Windows 8 and Windows 8 RT (as well as some for Windows XP, Vista, 7, Server 2003, [...]
By |2016-10-16T15:59:29-05:00December 7th, 2012|Categories: OS, Security||1 Comment | 2019-08-20 20:46:10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8021296262741089, "perplexity": 7332.83706285085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315618.73/warc/CC-MAIN-20190820200701-20190820222701-00318.warc.gz"} |
https://mathematica.stackexchange.com/questions/91889/reduce-doesnt-solve-this-equations-with-condition | # Reduce doesn't solve this equations with condition
Reduce[{4==Abs[-1+2 Cos[2A]+2 Cos[2B]+2 Cos[2(A+B)]]},{A,B},Reals]
worked well, but
Reduce[{4==Abs[-1+2 Cos[2A]+2 Cos[2B]+2 Cos[2(A+B)]],0<A<Pi,0<B<Pi,0<A+B<Pi},{A,B},Reals]
given Reduce::nsmet: This system cannot be solved with the methods available to Reduce. >>
• I think this should have the [bugs] tag. Reduce[{4 == Abs[-1 + 2 Cos[2 A] + 2 Cos[2 B] + 2 Cos[2 (A + B)]], 0 < B < Pi}, {A, B}, Reals] raises Reduce::nsmet. Replace 0 < B < Pi with 0 < A < Pi does not. The expression is symmetric in A and B. – Patrick Stevens Aug 19 '15 at 11:36
• Yes, definitely smells like a bug. Even more startling is the fact that the result depends on the order in which you specify the variables to solve for: Reduce[{4 == Abs[-1 + 2 Cos[2 A] + 2 Cos[2 B] + 2 Cos[2 (A + B)]], 0 < B < Pi}, {A, B}, Reals] returns a solution; the seemingly equivalent Reduce[{4 == Abs[-1 + 2 Cos[2 A] + 2 Cos[2 B] + 2 Cos[2 (A + B)]], 0 < B < Pi}, {B, A}, Reals] returns Reduce::nsmet! (on MMA 10.2 Win7-64) – MarcoB Aug 19 '15 at 16:26
• This may be related to Why does simplification in Mathematica depend on variable names and Why does Simplify ignore an assumption?. – MarcoB Aug 19 '15 at 16:28 | 2020-02-17 18:50:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29575663805007935, "perplexity": 3561.9046130241163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00393.warc.gz"} |
https://learn.careers360.com/engineering/question-help-me-answer-in-silicon-dioxide/ | In silicon dioxide Option 1) each silicon atom is surrounded by four oxygen atoms and each oxygen atom is bonded to two silicon atoms Option 2) each silicon atom is surrounded by two oxygen atoms and each oxygen atom is bonded to two silicon atoms Option 3) silicon atom is bonded to two oxygen atoms Option 4) there are double bonds between silicon and oxygen atoms
As we learnt in
Oxide of Silicon -
SiO2 form three dimensional network
- wherein
Due to lack of formation of $\pi - bonds$ with oxygen
Structure of SiO2 form 3-dimensional network.
Option 1)
each silicon atom is surrounded by four oxygen atoms and each oxygen atom is bonded to two silicon atoms
This option is correct.
Option 2)
each silicon atom is surrounded by two oxygen atoms and each oxygen atom is bonded to two silicon atoms
This option is incorrect.
Option 3)
silicon atom is bonded to two oxygen atoms
This option is incorrect.
Option 4)
there are double bonds between silicon and oxygen atoms
This option is incorrect.
Exams
Articles
Questions | 2020-06-05 10:29:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7850987315177917, "perplexity": 2845.938930859639}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348496026.74/warc/CC-MAIN-20200605080742-20200605110742-00396.warc.gz"} |
https://pedestrianobservations.wordpress.com/category/urbanism/development/ | # Greenbelts Help Cars
A number of major cities, most notably London, have designated areas around their built-up areas as green belts, in which development is restricted, in an attempt to curb urban sprawl. The towns within the green belt are not permitted to grow as much as they would in an unrestricted setting, where the built-up areas would merge into a large contiguous urban area. Seeking access to jobs in the urban core, many commuters instead live beyond the greenbelt and commute over long distances. There has been some this policy’s effect on housing prices, for example in Ottawa and in London by YIMBY. In the US, this policy is less common than in Britain and Canada, but exists in Oregon in the form of the urban growth boundaries (UGBs), especially around Portland. The effect has been the same, replacing a continuous sprawling of the urban area with discontinuous suburbanization into many towns; the discontinuous form is also common in Israel and the Netherlands. In this post, I would like to explain how, independently of issues regarding sprawl, such policies are friendlier to drivers than to rail users.
Let us start by considering what affects the average speed of cars and what affects that of public transit. On a well-maintained freeway without traffic, a car can easily maintain 130 km/h, and good cars can do 160 or more on some stretches. In urban areas, these speeds are rarely achievable during the day; even moderate traffic makes it hard to go much beyond 110 or 120. Peak-direction commutes are invariably slower. Moreover, when the car gets off the freeway and onto at-grade arterial roads, the speed drops further, to perhaps 50 or less, depending on density and congestion.
Trains are less affected by congestion. On a well-maintained, straight line, a regional train can go at 160 km/h, or even 200 km/h for some rolling stock, even if headways are short. The busiest lines are typically much slower, but for different reasons: high regional and local traffic usually comes from high population density, which encourages short stop spacing, such that there may not be much opportunity for the train to go quickly. If the route is curvy, then high density also makes it more difficult to straighten the line by acquiring land on the inside of the curves. But by and large, slowdowns on trains come from the need to make station stops, rather than from additional traffic.
Let us now look at greenbelts of two kinds. In the first kind, there is legacy development within the greenbelt, as is common around London. See this example:
The greenbelt is naturally in green, the cities are the light blue circles with the large central one representing the big city, and the major transportation arteries (rail + freeway) are in black. The towns within the greenbelt are all small, because they formed along rail stops before mass motorization; the freeways were built along the preexisting transportation corridors. With mass motorization and suburbanization, more development formed right outside the greenbelt, this time consisting of towns of a variety of sizes, typically clustering near the freeways and railways for best access to the center.
The freeways in this example metro area are unlikely to be very congested. Their congestion comes from commuters into the city, and those are clustered outside the greenbelt, where development is less restricted. Freeways are widened based on the need to maintain a certain level of congestion, and in this case, this means relatively unimpeded traffic from the outside of the green belt right up until the road enters the big city. Under free development, there would be more suburbs closer to the city, and the freeway would be more congested there; travel times from outside the greenbelt would be longer, but more people would live closer to the center, so it would be a wash.
In contrast, the trains are still going to be slowed down by the intermediate stops. The small grandfathered suburbs have no chance of generating the rail traffic of larger suburbs or of in-city stops, but they still typically generate enough that shutting them down to speed traffic is unjustified, to say nothing of politically impossible. (House prices in the greenbelt are likely to be very high because of the tight restrictions, so the commuters there are rich people with clout.) What’s more, frequency is unlikely to be high, since demand from within the greenbelt is so weak. Under free development, there might still be more stops, but not very many – the additional traffic generated by more development in those suburbs would just lead to more ridership per stop, supporting higher frequency and thus making the service better rather than worse.
Let us now look at another greenbelt, without grandfathered suburbs, which is more common in Canada. This is the same map as before, with the in-greenbelt suburbs removed:
In theory, this suburban paradigm lets both trains and cars cruise through the unbuilt area. Overall commutes are longer because of the considerable extra distance traveled, but this distance is traversed at high speed by any mode; 120 km/h is eminently achievable.
In practice, why would there be a modern commuter line on any of these arteries? Commuter rail modernization is historically a piecemeal program, proceeding line by line, prioritizing the highest-trafficked corridors. In Paris, the first commuter line to be turned over to the Metro for operation compatible with city transit, the Ligne de Sceaux, has continuous urban development for nearly its entire length; a lightly-trafficked outer edge was abandoned shortly after the rest of the line was electrified in 1938. If the greenbelt was set up before there was significant suburbanization in the restricted area, it is unlikely that there would have been any reason to invest in a regional rail line; at most there may be a strong intercity line, but then retrofitting it to include slower regional traffic is expensive. Nor is there any case for extending a high-performing urban transit line to or beyond a greenbelt. Parts of Grand Paris Express, namely Lines 14 and 11, are extended from city center outward. In contrast, in London, where the greenbelt reduces density in the suburbs, high investment into regional rail focuses on constructing city-center tunnels in Crossrail and Crossrail 2 and connecting legacy lines to them. In cities that do not even have the amount of suburban development of the counties surrounding London, there is even less justification for constructing new transit.
The overall picture in which transit has an advantage over cars at high levels of density is why high levels of low-density sprawl are correlated with low transit usage. But I stress that even independently of sprawl, greenbelts are good for cars and bad for transit. A greenbelt with legacy railway suburbs is going to feature trains going at the normal speed of a major metro area, and cars going at the speed of a more spread out and less populated region. Even a greenbelt without development is good urban geography for cars and bad one for transit.
As a single exception, consider what happens when a greenbelt is reserved between two major nodes. In that specific case, an intercity line can more easily be repurposed for commuting purposes. The Providence Line is a good example: while there’s no formal greenbelt, tight zoning restrictions in New England even in the suburbs lead to very low density between Boston and Providence, which is nonetheless served by good infrastructure thanks to the strength of intercity rail travel. The MBTA does not make good use of this infrastructure, but that’s beside the point: there’s already a high-speed electrified commuter line between the two cities, with widely spaced intermediate stops allowing for high average speeds even on stopping trains and overtakes that are not too onerous; see posts of mine here and here. What’s more, intercity trains can be and are used for commutes from Providence to Boston. For an analogous example with a true greenbelt, Milton Keynes plays a role similar to Providence to London’s Boston.
However, this exception is uncommon. There aren’t enough Milton Keyneses on the main intercity lines to London, or Providences on the MBTA, to make it possible for enough transit users to suburbanize. In cities with contiguous urban development, such as Paris, it’s easier. The result of a greenbelt is that people who do not live in the constrained urban core are compelled to drive and have poor public transportation options. Once they drive, they have an incentive to use the car for more trips, creating more sprawl. This way, the greenbelt, a policy that is intended to curb sprawl and protect the environment, produces the exact opposite results: more driving, more long-distance commuting, a larger urban footprint far from the core.
# A Theory of Zoning and Local Decisionmaking
This weekend there’s a conference in the US, YIMBY 2016, by a national network of activists calling for more housing. I am not there, but I see various points raised there via social media. One is a presentation slide that says “NIMBYism is a collective action problem: no single neighborhood can lower prices by upzoning; might still be in everyone’s interest to upzone at city/state level.” I think this analysis is incorrect, and in explaining why, I’d like to talk about a theory of how homeowners use zoning to create a housing shortage to boost their own property values, and more generally how long-time residents of a city use zoning to keep out people who are not like them. In this view,zoning is the combination of a housing cartel, and a barrier to internal migration.
For years, I’ve had trouble with the housing cartel theory, because of a pair of observations. The first is that, contra the presentation at YIMBY, zoning is driven by homeowners rather than by renters; for an overview, see the work of William Fischel. The second is that restrictive zoning typically correlates with local decisionmaking, such as in a neighborhood or small city, while lax zoning typically correlates with higher-level decisionmaking, such as in a city with expansive municipal boundaries or in an entire province or country; see below for more on this correlation. These two observations together clash with the housing cartel theory, for the inverse of the reason in the above quote from the YIMBY presentation: it’s more effective to create a housing shortage in a large area than in a small one.
To a good approximation, land value equals (housing price – housing construction cost)*allowed density. If a small municipality upzones, then as in the quote, housing price doesn’t change much, but allowed density grows, raising the price a homeowner can get by selling their house to developers who’d build an apartment building. In contrast, if a large municipality upzones then housing prices will fall quite a bit as supply grows, and depending on the price elasticity, land value might well go down. If x = housing price/housing construction cost and e = price elasticity for housing, i.e. price is proportional to density^(-1/e), then maximum land value occurs when x = e/(e-1), provided e > 1; if e < 1 then maximum value occurs when x is arbitrarily large. Price elasticity is much higher in a small municipality, since even a large increase in local housing supply has a small effect on regional supply, limiting its ability to reduce prices. This implies that, to maximize homeowner value, small municipalities have an incentive to set density limits at a higher level than large municipalities, which will be seen in faster housing growth relative to population growth.
What we see is the exact opposite. Consider the following cases, none a perfect natural experiment, but all suggestive:
1. In the Bay Area, we can contrast San Francisco (a medium-size urban municipality), San Jose and generally Santa Clara County (San Jose is medium-size for a central city and very large for a suburb), and San Mateo County (comprising small and medium-size suburbs). San Mateo County is by far the stingiest of the three about permitting housing: over the last three years it’s averaged 1,000 new housing units per year (see here); in 2013, the corresponding figures elsewhere in the Bay Area were 2,277 new housing units in San Francisco and 5,245 in Santa Clara County. Per thousand people (not per housing unit), this is 2.63 in San Francisco, 2.73 in Santa Clara, and 1.31 in San Mateo. In Alameda County, comprising medium-size cities and suburbs, with a less hot housing market because of the distance from Silicon Valley jobs, growth was 2,474 units, 1.51 per 1,000 people. In small rich Silicon Valley municipalities like Palo Alto and Menlo Park, NIMBYs have effectively blocked apartment construction; in much larger and still rich San Jose, the city has a more pro-growth outlook.
2. Among the most important global cities – New York, Paris, London, and Tokyo – Tokyo has by far the fastest housing stock growth, nearly 2% a year; see article by Stephen Smith. In Japan, key land use decisions are made by the national government, whereas in Paris, London, and New York, decision is at a lower level. London builds more than New York and Paris; its municipal limit is much looser than Paris’s, with 8.5 million people to Paris’s 2.2 million even though their metro areas have similar populations. New York has a fairly loose limit as well, but the development process empowers lower-level community boards, even though the city has final authority.
3. Canada has a relatively permissive upzoning process, and in Ontario, the planning decisions are made at the provincial level, resulting in about 1.3% annual housing growth in Toronto in the previous decade; in the same period, San Jose’s annual housing growth was about 1% and San Francisco’s was 0.9%.
4. France has recently made a national-level effort to produce more housing in the Paris region, especially social housing, due to very high housing prices there. Last decade, housing production in Ile-de-France was down to about 30,000-35,000 per year, averaging to 2.6 per 1,000 people, similar to San Francisco; see PDF-pp. 4-5 here and the discussion here. With the new national and regional effort at producing more social housing, plans appear to be on track to produce 30,000 annual units of social housing alone in the next few years; see PDF-p. 6 here. With 7,000 annual units within city limits, Paris expects to build somewhat more per capita than the rest of the region.
In France, the combination of a national focus on reducing housing burden and the observation that higher-level decisionmaking produces more housing makes sense. But elsewhere, we need to ask how come homeowners aren’t able to more effectively block construction.
My theory is that the answer involves internal migration. Consider the situation of Palo Alto: with Stanford and many tech jobs, it is prime location, and many people want to move there. The homeowners are choosing the zoning rule that maximizes their ability to extract rents from those people, in both the conventional sense of the word rent and the economic sense. Now consider decisionmaking at the level of the entire state of California. California can raise housing prices even more effectively than Palo Alto can by restricting development, but unlike Palo Alto, California consists not just of residents of rich cities, but also of residents of other cities, who would like to move to Palo Alto. In the poorer parts of the state, there’s not much point in restrictive zoning, because there isn’t that much demand for new housing, except perhaps from people who cannot afford San Francisco or Los Angeles and are willing to endure long commutes. On the contrary, thanks to the strength of internal migration, a large fraction of prospective residents of Palo Alto live elsewhere in California. Nor do people in poor areas, where houses aren’t worth much as investments, gain much from raising house prices for themselves; the ability to move to where the good jobs are is worth more than raising housing prices by a few tens of thousands of dollars. This means that the general interest in California is to make Palo Alto cheaper rather than more expensive. The same is true of Japan and Tokyo, or France and Paris, or Ontario and Toronto.
While superficially similar to the point made in the presentation quoted at the beginning of this post, my theory asserts the opposite. The issue is not that individual municipalities see no benefit in upzoning since it wouldn’t reduce rents by much. It’s that they see net harm from upzoning precisely because it would reduce rents. It is not a collective action problem: it is a problem of disenfranchisement, in which the people who benefit from more development do not live in the neighborhoods where the development would be taking place. High-level decisionmaking means that people who would like to move to a rich area get as much of a vote in its development policy as people who already live there and have access to its amenities, chief of which is access to work. It disempowers the people who already have the privilege of living in these areas, and empowers the people who don’t but would like to.
Individual rich people can be virtuous. Rich communities never are. They are greedy, and write rules that keep others out and ruthlessly eliminate any local effort to give up their political power. They will erect borders and fences, exclude outsiders, and demagogue against revenue sharing, school integration, and upzoning. They will engage in limited charity – propping up their local poor (as San Francisco protects low-income lifelong San Franciscans via rent control), and engaging in symbolic, high-prestige giving, but avoid any challenge to their political power. Upzoning is not a collective action problem; it is a struggle for equal rights and equal access to jobs regardless of which neighborhood, city, or region one grew up in.
# Modeling Anchoring
Jarrett Walker has repeatedly called transit agencies and city zoning commissions to engage in anchoring: this means designing the city so that transit routes connect two dense centers, with less intense activity between them. For example, he gives Vancouver’s core east-west buses, which connect UBC with dense transit-oriented development on the Expo Line, with some extra activity at the Canada Line and less intense development in between; Vancouver has adopted his ideas, as seen on PDF-page 15 of a network design primer by Translink. In 2013, I criticized this in two posts, making an empirical argument comparing Vancouver’s east-west buses with its north-south buses, which are not so anchored. Jarrett considers the idea that anchoring is more efficient to be a geometric fact, and compared my empirical argument to trying to empirically compute the decimal expansion pi to be something other than 3.1415629… I promised that I would explain my criticism in more formal mathematical terms. Somewhat belatedly, I would like to explain.
First, as a general note, mathematics proves theorems about mathematics, and not about the world. My papers, and those of the other people in the field, have proven results about mathematical structures. For example, we can prove that an equation has solutions, or does not have any solutions. As soon as we try to talk about the real world, we stop doing pure math, and begin doing modeling. In some cases, the models use advanced math, and not just experiments: for example, superstring theory involves research-level math, with theorems of similar complexity to those of pure math. In other cases, the models use simpler math, and the chief difficulty is in empirical calibration: for example, transit ridership models involve relatively simple formulas (for example, the transfer penalty is a pair of numbers, as I explain here), but figuring out the numbers takes a lot of work.
With that in mind, let us model anchoring. Let us also be completely explicit about all the assumptions in our model. The city we will build will be much simpler than a real city, but it will still contain residences, jobs, and commuters. We will not deal with transfers; neither does the mental model Jarrett and TransLink use in arguing for anchoring (see PDF-p. 15 in the primer above again to see the thinking). For us, the city consists of a single line, going from west to east. The west is labeled 0, the east is labeled 1, and everything in between is labeled by numbers between 0 and 1. The city’s total population density is 1: this means that when we graph population density on the y-axis in terms of location on the x-axis, the total area under the curve is 1. Don’t worry too much about scaling – the units are all relative anyway.
Let us now graph three possible distributions of population density: uniform (A), center-dominant (B), and anchored (C).
Let us make one further assumption, for now: the distributions of residences and jobs are the same, and independent. In city (A), this means that jobs are uniformly distributed from 0 to 1, like residences, and a person who lives at any point x is equally likely to work at any point from 0 to 1, and is no more likely to work near x than anyone else. In city (B), this means that people are most likely to work at point 0.5, both if they live there and if they live near 0 or 1; in city (C), this means that people are most likely to work at 0 or 1, and that people who live at 0 are equally likely to work near 0 and near 1.
Finally, let us assume that there is no modal splitting and no induced demand: every employed person in the city rides the bus, exactly once a day in each direction, once going to work and once going back home, regardless of where they live and work. Nor do people shift their choice of when to work based on the network: everyone goes to work in the morning peak and comes back in the afternoon peak.
With these assumptions in mind, let us compute how crowded the buses will be. Because all three cities are symmetric, I am only going to show morning peak buses, and only in the eastbound direction. I will derive an exact formula in city (A), and simply state what the formulas are in the other two cities.
In city (A), at point x, the number of people who ride the eastbound morning buses equals the number of people who live to the west of x and work to the right of x. Because the population and job distributions are uniform, the proportion of people who live west of x is x, and the proportion of people who work east of x is 1-x. The population and job distributions are assumed independent, so the total crowding is x(1-x). Don’t worry too much about scaling again – it’s in relative units, where 1 means every single person in the city is riding the bus in that direction at that time. The formula y = x(1-x) has a peak when x = 0.5, and then y = 0.25. In cities (B) and (C), the formulas are:
(B): $y = \begin{cases}2x^2(1 - 2x^2) & \mbox{ if } x \leq 1/2\\ 2(1-x)^2(1 - 2(1-x)^2) & \mbox{ if } x > 1/2\end{cases}$
(C): $y = \begin{cases}(2x-2x^2)(1 - 2x + 2x^2) & \mbox{ if } x \leq 1/2\\ (2(1-x)-2(1-x)^2)(1 - 2(1-x) + 2(1-x)^2) & \mbox{ if } x > 1/2\end{cases}$
Here are their graphs:
Now, city B’s buses are almost completely empty when x < 0.25 or x > 0.75, and city C’s buses fill up faster than city A’s, so in that sense, the anchored city has more uniform bus crowding. But the point is that at equal total population and equal total transit usage, all three cities produce the exact same peak crowding: at the midpoint of the population distribution, which in our three cases is always x = 0.5, exactly a quarter of the employed population lives to the west and works to the east, and will pass through this point on public transit. Anchoring just makes the peak last longer, since people work farther from where they live and travel longer to get there. In a limiting case, in which the population density at 0 and 1 is infinite, with half the population living at 0 and half at 1, we will still get the exact same peak crowding, but it will last the entire way from 0 to 1, rather than just in the middle.
Note that there is no way to play with the population distribution to produce any different peak. As soon as we assume that jobs and residences are distributed identically, and the mode share is 100%, we will get a quarter of the population taking transit through the midpoint of the distribution.
If anything, the most efficient of the three distributions is B. This is because there’s so little ridership at the ends that it’s possible to run transit at lower frequency at the ends, overlaying a route that runs the entire way from 0 to 1 to a short-turn route from 0.25 to 0.75. Of course, cutting frequency makes service worse, but at the peak, the base frequency is sufficient. Imagine a 10-minute bus going all the way, with short-turning overlays beefing frequency to 5 minutes in the middle half. Since the same resources can more easily be distributed to providing more service in the center, city B can provide more service through the peak crowding point at the same cost, so it will actually be less crowded. This is the exact opposite of what TransLink claims, which is that city B would be overcrowded in the middle whereas city C would have full but not overcrowded buses the entire way (again, PDF-p. 15 of the primer).
In my empirical critique of anchoring, I noted that the unanchored routes actually perform better than the anchored ones in Vancouver, in the sense that they cost less per rider but also are less crowded at the peak, thanks to higher turnover. This is not an observation of the model. I will note that the differences in cost per rider are not large. The concept of turnover is not really within the model’s scope – the empirical claim is that the land use on the unanchored routes lends itself to short trips throughout the day, whereas on the anchored ones it lends itself to peak-only work trips, which produce more crowding for the same total number of riders. In my model, I’m explicitly ignoring the effect of land use on trips: there are no induced trips, just work trips at set times, with 100% mode share.
Let us now drop the assumption that jobs and residences are identically distributed. Realistically, cities have residential and commercial areas, and the model should be able to account for this. As one might expect, separation of residential and commercial uses makes the system more crowded, because travel is no longer symmetric. In fact, whereas under the assumption the peak crowding is always exactly a quarter of the population, if we drop the assumption the peak crowding is at a minimum a quarter, but can grow up to the entire population.
Consider the following cities, (D), (E), and (F). I am going to choose units so that the total residential density is 1/2 and so is the total job density, so combined they equal 1. City (D) has a CBD on one side and residences on the other, city (E) has a CBD in the center and residences on both sides, and city (F) is partially mixed-use, with a CBD in the center and residences both in the center and outside of it. Residences are in white, jobs are in dark gray, and the overlap between residences and jobs in city (F) is in light gray.
We again measure crowding on eastbound morning transit. We need to do some rescaling here, again letting 1 represent all workers in the city passing through the same point in the same direction. Without computing, we can tell that in city (D), at the point where the residential area meets the commercial area, which in this case is x = 0.75, the crowding level is 1: everyone lives to the west of this point and works to its east and must commute past it. Westbound morning traffic, in contrast, is zero. City (E) is symmetric, with peak crowding at 0.5, at the entry to the CBD from the west, in this case x = 0.375. City (F) has crowding linearly growing to 0.375 at the entry to the CBD, and then decreasing as passengers start to get off. The formula for eastbound crowding is,
(F): $y = \begin{cases}x & \mbox{ if } x < 3/8\\ x(5/2 - 4x) & \mbox{ if } 3/8 \leq x \leq 5/8\\ 0 & \mbox{ if } x > 5/8\end{cases}$
In city (F), the quarter of the population that lives in the CBD simply does not count for transit crowding. The reason is that, with the CBD occupying the central quarter of the city, at any point from x = 0.375 east, there are more people who live to the west of the CBD getting off than people living within the CBD getting on. This observation remains true down to when (for a symmetric city) a third of the population lives inside the CBD.
In city (B), it’s possible to use the fact that transit runs empty near the edges to run less service near the edges than in the center. Unfortunately, it is not possible to use the same trick in cities (E) and (F), not with conventional urban transit. The eastbound morning service is empty east of the CBD, but the westbound morning service fills up; east of the CBD, the westbound service is empty and the eastbound service fills up. If service has to be symmetric, for example if buses and trains run back and forth and make many trips during a single peak period, then it is not possible to short-turn eastbound service at the eastern edge of the CBD. In contrast, if it is possible to park service in the center, then it is possible to short-turn service and economize: examples include highway capacity for cars, since bridges can have peak-direction lanes, but also some peaky commuter buses and trains, which make a single trip into the CBD per vehicle in the morning, park there, and then make a single trip back in the afternoon. Transit cities relies on services that go back and forth rather than parking in the CBD, so such economies do not work well for them.
A corollary of the last observation is that mixed uses are better for transit than for cars. Cars can park in the CBD, so for them, it’s fine if the travel demand graph looks like that of city (E). Roads and bridges are designed to be narrower in the outskirts of the region and wider near the CBD, and peak-direction lanes can ensure efficient utilization of capacity. In contrast, buses and rapid transit trains have to circulate; to achieve comparable peak crowding, city (E) requires twice as much service as perfect mixed-use cities.
The upshot of this model is that the land use that best supports efficient use of public transit is mixed use. Since all rich cities have CBDs, they should work on encouraging more residential land uses in the center and more commercial uses outside the center, and not worry about the underlying distribution of combined residential and job density. Since CBDs are usually almost exclusively commercial, any additional people living in the center will not add to transit crowding, even as they ride transit to work and pay fares. In contrast, anchoring does not have any effect on peak crowding, and on the margins makes it worse in the sense that the maximum crowding level lasts longer. This implies that the current planning strategy in Vancouver should be changed from encouraging anchoring to fill trains and buses for longer to encouraging more residential growth Downtown and in other commercial centers and more commercial growth at suitable nodes outside the center.
# Penn Station Elimination Followup
Several commenters, both here and on Streetsblog, have raised a number of points about my proposal to eliminate above-ground Penn Station and reduce the station to a hole in the ground. A few of those points are things I’d already thought about when I wrote that post and didn’t want to clutter; others are new ideas that I’ve had to wrestle with.
Waiting
On Streetsblog, Mark Walker says, “Getting on a train at Penn is not like using the subway. Instead of a train that runs every five minutes, you’re waiting for a train that runs once per hour (more or less),” implying nicer waiting areas and lounges are needed. My proposal, of course, does not have dedicated waiting areas. (That said, there’s an immense amount of space on the platforms under the escalators, which could be equipped with chairs, tables, and newsstands.)
However, I take exception to the notion that when the train runs every hour, passengers wait an hour. When I lived in Providence, a few trips to Boston, New Haven, and New York taught me the exact amount of time it’d take me to walk from my apartment to the train station: 21 minutes. I learned to time myself to get to the station 2 minutes before the train would leave, and as I recall, I missed the train twice out of maybe 30 trips, and one of those was when I had a lot of luggage and was in a taxi and couldn’t precisely gauge the extra travel time. Walking is that reliable. People who get to Penn Station by subway have to budget some extra time to account for missed subway trains, but from much of the city, including the parts of the CBD not within walking distance from Penn, the required spare time is less than 10 minutes. Moreover, Penn is at its most crowded at rush hour, which is precisely when subway frequency is the highest, and people can reliably time themselves to within less than 5 minutes.
Outlying train stations in Switzerland are deserted except a few minutes before a train shows up, because the connecting transit is all timed to meet the train. This is of course inapplicable at very large stations with many lines, but the modes of transportation that most Penn Station users take to the station are reliable and frequent, if you can even talk of frequency for walking. The result is that the amenities do not need to be extravagant on account of waiting passengers, and do not need to be more than those of a busy subway station in a busy area.
Shelter
Several commenters raised the idea of shelter. One option, raised by James Sinclair, is an arched glass roof over the station, on the model of Milan. This involves above-ground infrastructure, but the arched structure is only supported at the margins of the compound, which means that the primary feature of a hole-in-the-ground station, the lack of anything that the track area must support the weight of, is still true. I do not think it’s a bad idea; I do, however, want to raise three additional options:
Do nothing. A large proportion of the usable area of the platforms would be located under the walkways above, or under the escalators and staircases. Having measured the depth more precisely, through Plate 14 here, I found it is 13 meters from street level to top of rail, or 12 from street level to platform level, translating to 21 meters of escalator length, plus 2.2-2.5 meters on each side for approach (see page 23 here). About 16 of those 21 (18.5 out of 25.7, counting approaches) meters offer enough space for passengers to stand below the escalators, leading to large areas that could be used for shelter, as noted in the waiting section above.
Build a simple shelter. Stockholm-area train stations have cheap corrugated metal roofs over most of the length of their platforms. These provide protection from rain. Of course those roofs require some structural support at the platform, but because they’re not supposed to hold anything except rainwater, those supports are narrow poles, easy to move around if the station is reconfigured.
Build a street-level glass pane. This may be structurally intricate, but if not, it would provide complete shelter from the elements on the track level, greatly improve passenger circulation, and create a new public plaza. But in summer, the station would be a greenhouse, requiring additional air conditioning.
Note that doing nothing or building a simple shelter would not protect any of the track level from heat or cold. This is fine: evidently, open-air stations are the norm both in cities with hotter summers than New York (Milan is one example, and Tokyo is another) and in cities with colder winters (for example, Stockholm). Passengers are usually dressed for the weather anyway, especially if they’re planning on walking to work from Penn or from the subway station they’re connecting to.
Architecture
Multiple commenters have said that public art and architecture matter, and building spartan train stations is unaesthetic, representing public squalor. I agree! I don’t think a hole-in-the-wall Penn Station has to be drab or brutalist. It can showcase art, on the model of the mosaics on the subway, or the sculptures on the T-Bana. It can use color to create a more welcoming environment than the monotonous gray of many postwar creations, such as the Washington Metro. The natural sunlight would help a lot.
# Height Limits: Still a Bad Idea
In a pair of recent articles on Strong Towns, Charles Marohn, best known in the urbanist community for introducing the term stroad (street+road) for a pedestrian-hostile arterial street, argues for height limits as a positive force for urbanism. He does not make the usual aesthetic argument that tall buildings are inherently unpleasant (“out of scale”), or the usual urbanist one that tall buildings lead to neighborhood decline; instead, he makes an economic argument that allowing tall buildings greatly raises land costs, and makes redevelopment of vacant lots less likely. He uses the following example:
Let’s say the local code allows [a] vacant lot to be developed as a one story strip mall, but nothing higher. If the strip mall is worth $500,000, then the vacant lot is going to be somewhere around$75,000.
Okay, but what if the development code allows that vacant lot to be developed as a sixteen story tower? If the tower is worth $20,000,000, then that vacant lot is going to fetch a much higher price, maybe as much$2.5 million.
You own that vacant lot. I come to you with an offer to buy it for $75,000. What are the odds you are going to sell it at that price when you look to the other side and see the same piece of property going for millions? Not very good. In most cities, as Charles notes, there is not enough demand to redevelop every vacant lot as a high-rise, and therefore, if high-rises are permitted, a few vacant lots will be redeveloped as high-rises, while the rest remain vacant. This is not the case in large cities, which Charles specifically exempts in his article (see also Daniel Kay Hertz’s response), but part of the problem with the argument, as we will see, is that the boundary between large cities and small ones is fuzzy. Let me now explain why this argument fails, like all the other arguments for zoning restrictions: it makes implicit assumptions on future uncertainty. The reason the vacant lot owners are not willing to sell for$75,000 is that they hope to get $2.5 million. In a stable market, with low enough population that most lots cannot fetch such a high price, the lot owners know that holding off on$75,000 offers is a gamble and that they are unlikely to ever get a higher offer. People have optimism bias and might overrate the probability that they’ll get the $2.5 million offer, but also have risk aversion; in most cases in economics, risk aversion dominates, so that safer assets cost more and have lower returns. So when do we see holdouts? Risk aversion predicts that the probability of obtaining a$2.5 million offer should be higher than the total demand for new towers divided by the number of vacant lots. If we explicitly assume that the cost figures in Charles’ example, including land costs, are unchangeable, then this means vacant lot owners expect there to be more high-rise towers in the future, which comes out of growth regions. Charles’ example is based on Sarasota, which like most of Florida has high population growth.
The other possibility is regulatory uncertainty. In a competitive market, land costs are already as low as they can be while letting lot owners cash out on past investments. Developer profits are also as low as possible, and represent the developer’s wage for managerial work. However, zoning restrictions will greatly raise both figures, and a lot owner who expects future developments to brush up against the present zoning code can hold out until prices rise.
This is the danger of a system that is based on arbitrary rules (Charles proposes up to two floors or 1.5 times the average present height, whichever is higher), and arbitrary distinctions between small cities in which height restrictions are desirable and large cities in which they are not: these introduce political discretion in the details, which introduces additional uncertainty among lot owners. True windfalls usually involve the boundary between regulatory regimes, and this creates political incentive to game the system in order to be one of the few owners whose lots can be developed as high-rises. In contrast, once a ground rule is established that there is no zoning, such as in Houston, introducing zoning is difficult, even when there are rules that are zoning in all but name, such as parking minimums.
Once we get into the realm of cities with a large proportion of their lots developed, as Charles proposes, future development can only replace old development, and this introduces a key difference between new development and redevelopment: redevelopment requires buying out the preexisting property. If a two-floor building is replaced by a three-floor building, then the developer has to not only pay construction costs for three floors, but also buy out two floors, effectively paying for five floors. But the revenue is still only that of a three-floor building, which bids up effective costs by a factor of five thirds. The formula is that if it’s possible to multiply the total built-up area by a factor of $x$ then the buy-out factor will raise the cost of each housing unit by a factor of $1 + 1/x$.
This effect is why, in major cities, we usually see buildings replaced by much larger buildings: for example, a three- or four-floor Manhattan building may be replaced by a fifteen- or twenty-story tower on a base. Charles laments that this is not small-scale or incremental, but even his example of good incremental development is similar: in Houston, single-family houses are replaced by low-rise apartment buildings, generating similarly high ratios of the floor areas of redevelopments with the buildings they replaced. Incrementalism in these cases consists of replacing small buildings by much larger ones, gradually, until a few decades later the entire neighborhood is tall.
One way around redevelopment’s need to buy out preexisting buildings is to mandate that future buildings be built to allow adding floors on top of them. Chicago’s Blue Cross Blue Shield Tower is an example. This is a regulation that increases the average cost of construction but reduces the marginal cost and thus the price. It’s also a regulation that only really matters in situations when it is difficult to have a high ratio of new to old floor area, such as in areas that are already high-rise, especially major city CBDs. (It is easy to quintuple floor area ratio when the preexisting buildings have three floors, but not so much when they have twelve.) The current styles of construction of most small buildings, for example sloping roofs common in American and European urban and suburban houses, tend to make adding floors impossible. Of course, the implication that such a regulation should only apply for buildings above a certain height introduces political discretion and hence uncertainty, but at least this is uncertainty that would apply equally to all buildings in an area, which is not always the case for zoning.
What Charles proposes, to develop all vacant lots first and only then start going taller, is then a recipe for high marginal costs, because of the buyout factor. In a small city uniformly developed up to one or two floors, it is difficult to spread the new development across many buildings up to three floors, precisely because there is no way to build single-family houses that are recognizable as such to Americans or Europeans from countries I’ve been to (It’s different in Canada, but this is considered a feature of the low quality of Vancouver’s housing) and that can have floors added to them. In such an environment, building tall is the only way to avoid high housing costs.
# The NITBY Problem
Usually, the barrier to new development in a neighborhood is NIMBYism: connected local community members do not want the project, saying “not in my backyard.” There’s a wealth of literature about NIMBYs’ role in restrictions on development; William Fischel’s work is a good start, and the short version is that opposition to development is local, based on fear of the risk of decline in property values. Urbanists take it for granted that decisions made with regard to regional rather than local concerns will be more pro-development: Let’s Go LA has examples from Los Angeles, and Stephen Smith explains Toronto and Tokyo’s lax rules on new development based on their high-level decisionmaking (at the provincial level in Ontario and national level in Japan). In this post, I would like to discuss the opposite problem, which I call NITBYism – “not in their backyard.”
In certain circumstances, opposition comes from people living in other areas, who are aghast that an area they don’t live in is getting so much investment. This is more likely to happen when there’s heavy public involvement in development, but, since upzoning an area is a public decision (as opposed to unthinkable across-the-board zoning abolition), opposition can sprout anytime. One common thread to NITBY opposition campaigns is that NITBYs view housing as a good thing, and want it redirected to their areas. Another is that they self-perceived as ignored by the urban elites; this is common to both right-wing populists and left-wing ones. Since the process is heavily public by assumption, the price signal telling developers to build in the center of the major city is irrelevant, and this encourages the government to build more low-value peripheral projects.
The first example of this is when the process actually is public: subsidized affordable housing. As discussed by Daniel Kay Hertz, in Chicago, affordable housing regulations require developers to pay a fee to a dedicated affordable housing fund, which then uses the money to develop or buy housing and rent it out at subsidized rates for moderate-income residents. To minimize the cost per affordable unit, the fund builds the units in the cheapest neighborhoods, i.e. the poorest ones, exacerbating housing segregation. As Payton Chung explains, the low-income housing community networks in Chicago support this arrangement, because they are based in the neighborhoods where this affordable housing is built. This is not as self-serving as the examples I will include below, since the community groups want to see the most number of housing units built at a given cost; but a common feature of NITBYism, namely that the NITBYs view housing as a good rather than as a burden imposed by outsiders, is present here.
In Israel, NITBYism does not have the cost defense that it does in Chicago. Zoning in Israel is prepared by municipalities but must get approved by the state. This means that it is geared not only toward providing services to Israelis (such as cheap and orderly housing) but also toward national goals of Judaization. The worst NITBYism is not affecting Tel Aviv, but Arab cities, where the state refuses to approve zoning plans; since independence, not a single new Arab city has been built, except to house Bedouins who the state expelled from their villages after independence, and plans to build the first new Arab city are controversial on segregation grounds. This is while the state has built many new Jewish cities from scratch, often in peripheral areas in order to ensure a Jewish majority.
However, NITBYism afflicts housing in Tel Aviv, too. Although the state could if it wanted declare a housing emergency and force upzoning in Tel Aviv, it does not. There are few permits for new apartments in the Tel Aviv District (though more new housing sales): only 5% of the national total (including settlements), as per the pie chart on page 17 of the Ministry of Construction and Housing’s report and the more complete (in English) data on page 49, compared with a national population share of 16%; the Center District, consisting of Tel Aviv suburbs (though not the richest and most expensive, such as Ramat HaSharon, which are in the Tel Aviv District), has 22% of national permits, about the same as its share of the national population. This is not just NIMBYism in Tel Aviv, although that exists in abundance. Local politicians from peripheral towns demand local construction, and view Tel Aviv construction as something useful only to outsiders, such as foreign speculators or the urban elite. During the housing protests of 2011, there was widespread debate on the left about what solutions to offer, and people representing the ethnic and geographic periphery were adamant that the state build and preserve public housing in peripheral towns and not concentrate on Tel Aviv, which they identified with the secular Ashkenazi elite. A common thread in housing and infrastructure debates to both working-class Jews from the periphery and Arabs is the demand for a policy that would create jobs and housing in their hometowns, rather than build infrastructure that would put them in the Tel Aviv orbit.
Of the above examples, in Chicago the NITBYs self-identify as leftists, and in Israel, the NITBYs who want local housing rather than Tel Aviv housing either identify as leftists or identify as economic leftists and support the right on security and ethnic identity issues. However, the populist right is not immune from this. Right-wing supporters of suburbs who oppose cities for what they represent (diversity, usually left-wing politics of the kind they associate with the liberal elite) may also oppose urban upzoning. The best example of this kind is Joel Kotkin’s opposition to upzoning in Hollywood, which sounds like a criticism of government projects until one realizes that upzoning simply means developers are permitted to build more densely if they’d like. Now, Kotkin is pro-immigration, setting him apart from the main of right-wing populism, but in all other aspects, his paranoid fear of urban liberal elites imposing behavioral controls on ordinary people would be right at home at the UK Independence Party and its mainland European equivalents. Kotkin is also just one person, but his views mirror those of Tea Party activists who equate dense urbanism with an Agenda 21 conspiracy, to the point of conflating a phrase that means building new suburbs with a plan to forcibly relocate suburbanites to central cities.
I do not know Japan’s regional patterns of politics well, but I know Ontario’s. In Ontario, there is not much us-and-them politics regarding Toronto. There is such politics regarding the inner parts of Toronto – Rob Ford was elected on the heels of an outer-urban populist backlash to David Miller’s urbanism, including the perception that Miller was fighting a war on cars. But there’s none of the hatred of the central city and all that it represents that typifies politics in both Israel and the US. Hatred of the city in the US is right-wing (though within the city, hatred of the gentrified core is often tied to left-wing anti-gentrification activism), and hatred of Tel Aviv in Israel is generically populist, but in both cases, the us-and-them aspect encourages NITBYism.
In the most expensive American cities, this is not a major problem. Anti-urban populism does not have enough votes to win in New York and California, so state control of zoning in those states would not produce these problems. The Tea Party disruption of zoning meeting I brought up above happened in San Francisco suburbs, but did not have an effect on planning; I brought this example up to show that this political force exists, even if in that specific locality it is powerlessly weak. In those areas, local NIMBYism is a much bigger problem: many New York neighborhoods were actually downzoned in the Bloomberg era by local request. The primary problems that would plague state-level decisionmaking are corruption and power brokering, in which politicians hold even straightforward rule revisions hostage to their local pet projects. The us-and-them politics of Upstate and Downstate New York contributes heavily to power brokering, but Downstate’s demographic dominance precludes ideological choking of development.
Within the US, the risks of NITBYism are different. First, in the cost tier just below that of New York and California there are city regions in more moderate states, for examples Philadelphia and the Virginia suburbs of Washington, or possibly Miami (where the county-made rules have allowed aggressive new construction, mostly urban, which Stephen Smith credits to the political power of Cuban immigrants). And second, zooming in on different neighborhoods within each expensive city, the Chicago example suggests that if New York and other expensive cities begin a major program of public housing construction, the community organizations and the populists will demand to spread construction across many neighborhoods, especially poor ones, and not in the neighborhoods where there is the most demand.
As I noted two posts ago, there is a political economy problem, coming from the fact that the politically palatable amounts of construction are not transformative enough to let the working class live in market-rate city-center apartments, not in high-income major cities. Israel could semi-plausibly double the Tel Aviv housing stock; even that requires housing forms that Israelis associate with poverty, such as buildings that touch, without side setbacks. This would allow many more people to live in Tel Aviv, but they’d be drawn from the middle class, which is being priced out to middle-class suburbs or to working-class suburbs that it gentrifies. The working class in the periphery would be able to move into these closer-in suburbs, but this cascading process is not obvious. Worse, from the point of view of community leaders, it disrupts the community: it involves a churn of people moving, which means they end up in a different municipal fief, one with leadership the current suburb’s leaders may be hostile to.
For essentially the same reasons, subsidized housing in the center produces the same problems. If Israel builds a massive number of subsidized or rent-regulated apartments in Tel Aviv, there will be immense nationwide demand for them. Few would serve the residents of a given peripheral suburb, and there is no guarantee anyone would get them. On the contrary, in such a plan, priority is likely to go to downwardly-mobile children of established residents. At the 2011 protests, the people who were most supportive of plans to lower rents in Tel Aviv specifically were people from Tel Aviv or high-income suburbs who wanted to be able to keep living in the area. The community disruption effect of offering people the ability to live where they’d want would still be there. Thus, all the incentives line up behind periphery community leader support for building public housing in the periphery, where there is little demand for it, and not in the center. Even when housing is universally seen as a benefit and there’s no NIMBYism, politics dictates that housing is built in rough proportion to current population (since that’s where political power comes from) and not future demand.
Abolishing zoning is one way to cut this Gordian knot; it is also completely unpalatable to nearly everyone who is enfranchised in a given area. Allowing more private construction is the more acceptable alternative, but leads to the same problems, only on a smaller scale. It really is easier for community leaders to twist arms to demand veto rights and local resident priority than to push for sufficient citywide upzoning to alleviate the price pressure. But in an environment with weak NIMBYs and few NITBYs, fast growth in urban housing is possible.
# Dispersing Expensive Centers: Edge City Version
This is somewhat of an addendum to my post before about dispersal of urban networks toward cheaper cities. I addressed the question of dispersal from rich, expensive metro areas, especially San Francisco, to cheaper ones, as a way of dealing with high housing prices. But more common is dispersal within metro areas: gentrification spilling from a rebounding neighborhood to adjacent neighborhoods that remain cheaper, and office space spilling from the primary CBD to the edge cities. I am going to address the latter issue in this post.
CBDs are expensive. They have intense demand for office space, as well as high-end retail and hotels. In many cities, there’s demand for office space even at the construction costs of supertall skyscrapers, going up to about $5,000-6,000 per square meter in privately-built New York towers. Zoning regimes resist the height required to accommodate everyone, and this is worse in Europe than in North America and high-income East Asia. Paris proper has many towers just above the 100 meter mark, but only three above 120. On a list of the tallest buildings in Sweden, not a single one above 100 meters is in central Stockholm, and the tallest within the zone are not in the CBD but in Södermalm; compare this with Vancouver, a metro area of similar size. But in the US, too, expanding CBDs is difficult in the face of neighborhood opposition, even in Manhattan. The solution many cities have adopted is to put the skyscrapers in edge cities. Paris famously built La Defense, which has far more skyscrapers than the city proper does; Stockholm is building skyscrapers in Kista; London built Canary Wharf; Washington, the major US city with the tightest CBD height limits, sprouted skyscraper clusters in several suburbs in Maryland and Virginia. Ryan Avent proposed this as one solution to NIMBYism: in new-build areas, there are few residents who could oppose the new development. In contrast, near zoning-constrained CBDs, not only are there many residents, but also the land is so desirable that they are typically high-income, which means they have the most political power to oppose new development. The problem with this solution is that those secondary CBDs are not public transit hubs. In Paris, this has created an east-west disparity, in which people from (typically wealthy) western suburbs can easily reach La Defense, whereas people from poorer ones need to take long RER trips and often make multiple transfers. In every transit city, the CBD is unique in that it can be reached from anywhere. To give similar accessibility to a secondary center, massive investment is required; Paris is spending tens of billions of euros on circumferential regional rail lines to improve suburb-to-suburb connectivity, expand access in the eastern suburbs, and ameliorate the east-west imbalance (see for example isochrones on PDF-pp. 20-21 of the links here). Those lines are going to be well-patronized: the estimate is 2 million daily passengers. And yet, the east-west imbalance, if nothing else, would be a lesser problem if instead of building La Defense, Paris had built up Les Halles. The situation in other cities is similar. Kista is on one branch of one subway line, two stops away from its outer terminus. Living in Central Stockholm, my coworkers and I can get to KTH on foot or by bike, but a coworker who teaches at KTH’s satellite campus in Kista has a long commute involving circumferential buses (taking the subway and changing at T-Central would be even longer because of the detour). While many individual sub-neighborhoods of Central Stockholm are quite dense, the overall density in the center is not particularly high, certainly not by the standards of Paris or New York. A similar problem happens in Washington, where the biggest edge city cluster, Tysons Corner, is traditionally auto-oriented and was only just connected to Metro, on a branch. This always affects poorer people the worst, as they can’t afford to live in the CBD, where there is easy access to all secondary destination, and often are pushed to suburbs with long commutes. There is a political economy problem here, as is usually the case with zoning. (Although in the largest cities skyscraper heights are pushing beyond the point of constant marginal costs, purchase prices at least in New York are much higher than construction costs.) The people living near CBDs, as noted before, are usually rich. The displacement of office space to the suburbs affects them the least, for three reasons. First, if they desire work within walking distance or short subway distance, they can have it, since their firms typically make enough money to afford CBD office rents. Second, since they live in the transit hub, they can access suburban jobs in any direction. And third, if the transit options are lacking, they can afford cars, although of course traffic and parking remain problematic. Against their lack of incentive to support CBD office space, they have reasons to support the status quo: the high rents keep it exclusive and push poor people away, and often the traditional mid-rise buildings are genuinely more aesthetic than skyscrapers, especially ones built in modernist style. These concerns are somewhat muted in the US, where rich people decamped for the suburbs throughout the 20th century, and have supported zoning that mandates single-family housing in the suburbs, instead of staying in the city and supporting zoning that keeps the city mid-rise. This may have a lot to do with the formation of high-rise downtowns in American cities of such size that in Europe they’d be essentially skyscraper-free. However, what’s worse in the US is the possibility of short car-free commutes to the edge cities. Where La Defense is flanked by suburbs with high residential density, and Kista’s office blocks are adjacent to medium-density housing projects for working- and middle-class people, American edge cities are usually surrounded by low-density sprawl, where they are easily accessible by car but not by any other mode of transportation. This is because the American edge cities were usually not planned to be this way, but instead arose from intersections of freeways, and developed only after the residential suburbs did. As those edge cities are usually in rich areas, the residents again successfully resist new development; this is the point made in Edgeless Cities, which notes that, in major US metro areas, growth has been less in recognizable edge cities and more in lower-density edgeless cities. As with the possibility of dispersing innovation clusters from rich, expensive metro areas to poorer and cheaper ones, the already-occurring dispersal from city centers to edge and subsequently edgeless cities has negative effects. It lengthens transit commutes. Although in Tokyo, long commutes first arose as a problem of a monocentric CBD, and the city developed secondary CBDs as a solution, the situation in European cities an order of magnitude smaller is very different. It worsens housing segregation: the development of an edge city tends to be in the direction of the favored quarter, since that’s where the senior managers live, and conversely, higher-income workers can choose to move nearby for the short commute. Although nearly all metro areas have favored quarters, decentralization of jobs thus tends to lengthen the commutes of poor people more than those of rich people. This is not quite the same as what happens when entire metro areas are forced to disperse due to housing cost. The agglomerations generally stay intact, since an entire industry can move in the same direction: smaller cities have just one major favored quarter with edge cities, and larger ones still only have a few, so that industries can specialize, for example in New York, biotech and health care cluster in the Edison-Woodbridge-New Brunswick edge city. Moreover, the specialized workers are usually high-income enough that they can stay in the central city or migrate to the favored quarter. San Francisco’s programmers are not forced to move individually to faraway poor neighborhoods; they move in larger numbers to ones near already gentrifying ones, spurring a new wave of gentrification in the process; were they to move alone, they’d lose the access to the tech shuttles. The negative effects are predominantly not on richer people, but on poorer people. The problem is that even among the poor, there is little short-term benefit from supporting upzoning. If Paris, London, and Stockholm liberalize housing and office construction, the first towers built of both kinds will be luxury, because of the large backlogs of people who would like to move in and are willing to pay far in excess of construction costs. I am going to develop this point further in two posts, on what is best called NITBYism – Not In Their Backyard – but this means that the incentive for poor and peripheral populations is not to care too much about development in rich centers. The marginal additional building in a rich city center is going to go to the upper middle class; sufficient construction would trickle to the middle class; only extensive construction would serve the working class, and then not all of it. In the US, the marginal additional building may actually displace poor people, if no new construction is allowed, simply by removing low-income apartments. It may even create local demand for high-income housing, for example by signaling that the neighborhood has improved. In San Francisco, this is compounded by the tech shuttles, as a critical mass of Silicon Valley-bound residents can justify running shuttles, creating demand for more high-income housing. The amount of construction required to benefit the bottom half of the national income distribution is likely to be massive. This is especially true in France and the UK, which have sharp income differences between the capital and the rest of the country; their backlogs of people who would like to move to the capital are likely in the millions, possibly the high millions. Such massive construction is beyond the pale of political reality: the current high-income resident population is simply not going to allow it – when forced to share a building with the working class, it pushes for poor doors, so why would it want zoning that would reduce the market-rate rent to what the working class would afford? The only political possibility in the short run is partial plans, but these are not going to be of partial use to the working class, but of no use to it, benefiting the middle class instead. As a result, there is no push by the working class and its social democratic political organs to liberalize construction, nor by the small-is-beautiful green movement. Ultimately, the attempt to bypass restrictions on urban CBD formation by building edge cities, like every other kludge, is doomed to failure. The fundamental problem of rich people making it illegal to build housing nearby is not solved, and is often made even worse. The commutes get worse, and the inequality in commutes between the rich and the poor grows. Office space gets built, where otherwise it would spread along a larger share of the medium-rise CBD, but for most workers, this is not an improvement, and the environmental effects of more driving have negative consequences globally. And once city center is abandoned to the rich, there is no significant political force that can rectify the situation. What seems like a workaround and an acceptable compromise only makes the situation worse. # Zoning and Market Pricing of Housing The question of the effects of the supply restrictions in zoning on housing prices has erupted among leftist urbanist bloggers again. On the side saying that US urban housing prices are rising because of zoning, see anything by Daniel Kay Hertz, but most recently his article in the Washington Post on the subject. On the side saying that zoning doesn’t matter and the problem is demand (and by implication demand needs to be curbed), see the article Daniel is responding to in Gawker, and anything recent by Jim Russell of Burgh Diaspora, e.g. this link set and his Pacific Standard article on the subject. This is not a post about why rising prices really are a matter of supply. I will briefly explain why they are, but the bulk of this post is about why, given that this is the case, cities need to apportion the bulk of their housing via market pricing and not rent controls, as a matter of good political economy. Few do, which is also explainable in terms of political economy. But first, let us look at the anti-supply articles. Gawker claims that San Francisco prices are rising despite a building boom. We’ll come back to this point later, but let me note that in reality, growth in housing supply has been sluggish: Gawker links to a SPUR article about San Francisco’s housing growth, which shows there was high growth in 2012, but anemic growth in previous years. The Census put the city’s annual housing unit growth last decade at 0.8%. In New York, annual growth was 0.5%, as per a London study comparing London, Paris, New York, and Tokyo. In contrast, Tokyo, where zoning is relatively lax, growth was 2%, and rents have sharply fallen. The myth that there is a building boom in cities with very low housing unit growth is an important aspect of the non-market-priced system. Jim’s arguments are more interesting. He quotes a Fed study showing that housing vacancies in the most expensive US cities have not fallen, as we’d expect if price hikes came from lack of supply. (In San Francisco, vacancies went up last decade, at least if you believe that the Census did not miss anyone.) This is too not completely right, because in Los Angeles County, as noted on PDF-page 18 here, vacancies did recently fall. But broadly, it’s correct that e.g. New York’s vacancy rate has been 3% since the late 1990s, as per its housing surveys. But I do not think it’s devastating to the supply position at all. The best way to think about it is in analogy with natural rates of unemployment. Briefly: it’s understood in both Keynesian and neo-classical macroeconomics that an economy with zero employment will have high and rising inflation, because to get new workers, employers have to hire them away from existing jobs by offering higher wages. There is a minimum rate of unemployment consistent with stable inflation, below which even stable unemployment will trigger accelerating inflation. In the US, this is to my understanding about 4%; whether the recession caused structural changes that raised it is of course a critical question for macroeconomic policy. A similar concept can be borrowed into the more microeconomic concept of the housing market. There’s also the issue of friction, again borrowed from unemployment. There’s a minimum frictional vacancy, in which all vacant apartments are briefly between tenants, and if people move between apartments more, it rises. For what it’s worth, the breakdown of 2011 New York vacancies on pages 3-4 by borough and type of apartment suggests friction is at play. First, the lowest vacancy by borough is 2.61%, in Brooklyn, not far below city average. Second, the only type of apartment with much lower vacancy than the city average is the public housing sector, with 1.4% vacancy, where presumably people stay for decades so that friction is very low; rent-stabilized units have lower vacancy than market-rate units, 2.6% vs. 4.4%, which accords with what I would guess about how often people move. So if high rents are the result of supply restrictions, and it appears that they are, the way to reduce them should be to relax zoning restrictions. If this is done, then this allows living even in currently expensive areas without spending much on rent. Urban construction costs are lower than people think: New York’s condo average is$2,300 per square meter, and London’s is not much higher, entirely eaten by PPP conversions; Payton Chung notes the much higher cost of high-rises than that of low-rises, but the cost of high-rise apartment buildings is still only about $2,650/m^2 in Washington, and (using the same tool) about$3,100 in New York, and at least based on the same tool, mid-rises are barely any cheaper. For US-wide single-family houses, construction costs are 61.7% of sale prices, but the $3,100 figure already includes overheads and profit. Excluding land costs, which are someone else’s profit, construction, profit, and overheads are 92.5%; so let’s take our$3,100/m^2 New York high-rise and add the rest to get about $3,300, which is already more than most non-supertall office skyscrapers I have found data for in other major cities. The metro area appears to have a price-to-rent ratio of about 25, and with the caveat that this may go down slightly if the city gets more affordable, this corresponds to a monthly rent of$11 per square meter, at which point, a 100-m^2 apartment, sized for a middle-class family of four, becomes affordable, without subsidies, to families making about $44,000 a year and up, about twice the poverty line and well below the median for a family of that size. If we allow some compromises on construction costs – perhaps slightly smaller apartments, perhaps somewhat lower-end construction – we could cover most of the gap between this and the poverty line. But given that demand for housing at prices that match construction costs, there has to be a way of allocating apartments. Under market pricing, they’re allocated to the highest bidder. If there is a perfectly rigid supply of 2 million housing units and a demand for 4 million at construction costs, the top 2 million bidders get housing, at the rent that the 2 millionth bidder is willing to pay. I do not know of any expensive city with low home ownership that uses market pricing: too many existing residents would lose their homes. High home ownership has the opposite effect, of course – Tel Aviv may have rising rents, and high price-to-income ratios, but since home ownership is high, the local middle class is profiting rather than being squeezed, or at least its older and slightly richer members are. Instead, cities give preference to people who have lived in them for the longest time. Rent control, which limits the increase in annual rent, is one way to do this. City-states, i.e. Singapore and Monaco, have citizenship preference for public housing to keep rents down for their citizens. Other cities use regulations, including rent control but also assorted protections for tenants from eviction, to establish this preference. Instead of market pricing allocation, there is allocation based on a social hierarchy, depending on political connections and how long one has lived in the city. People who moved to San Francisco eight years ago, at age 23, organize to make it harder for other people to move to the city at this age today. Going to market pricing, which means weakening rent controls over the next few years until they’re dead letter, is the only way to also ensure there is upzoning. Although rent control and upzoning both seem to be different policies aimed at affordability, they’re diametrically opposed to each other: one makes it easy for people to move in, one makes it hard. As I mentioned years ago, rent-controlled cities tend to have parallel markets: one is protected for long-timers, and for the rest there is a market that’s unregulated and, because so much of the city’s housing supply is taken off it, very expensive. In exchange-rate dollars, I pay$1,000 for a studio of 30 square meters, of which maybe 20 are usable, the rest having low sloped ceilings. In PPP dollars it’s \$730, still very high for the size of the unit. If I put my name on a waiting list, I could get a similar apartment for a fraction of the price; to nearly all residents, rents are far lower than what I pay, because of tight rent controls. Stockholm at least has a relatively short waiting list for rent-controlled apartments, 1.5 years, for international visitors at my university; American cities (or perhaps American universities) never do foreigners such favors.
The problem here is entirely political. Cities have the power to zone. Thus, supply depends entirely on whether local community leaders accept more housing. This housing, almost invariably, goes to outsiders, who would dilute the community’s politics, forming alternative social networks and possibly caring about different political issues. It’s somewhat telling that ultra-Orthodox Jews in the New York areas support aggressive upzoning, since the new residents are their children and not outsiders; Stephen Smith has written before about the Brooklyn Satmars’ support for upzoning, and the resulting relatively low prices. In the vast majority of the first world, with its at- or below-replacement birth rates, this is not the case, and communities tend to oppose making it easier to build more housing.
There is a certain privilege to being organized here. We see the pattern when we compare how US minorities vote on zoning to what minority community leaders say. In San Francisco specifically, activists who oppose additional development have made appeals to white gentrification in nonwhite neighborhoods, primarily the Mission District. Actual votes on the subject reveal the exact opposite: see the discussion on PDF-pp. 13-15 of this history of Houston land use controls, which notes that low-income blacks voted against zoning by an overwhelming margin because of scare tactics employed by the zoning opponents. (Middle-income blacks voted for zoning, by a fairly large margin.) Polling can provide us with additional data, less dependent on voter turnout and mobilization, and in Santa Monica, Hispanics again favor new hotel development more than whites. In areas where being low-income or nonwhite means one is not organized, low-income minorities are not going to support restrictions that benefit community leaders.
The result is that organized communities are going to instead favor zoning, because it gives them more power, as long as they are insulated from the effect of rising prices. In suburbs with high home ownership, they actually want higher prices: my rents are their property values. In cities with low home ownership, rent controls provide the crucial insulation, ensuring that established factions do not have to pay higher rents. Zoning also ensures that, since the developers who do get variances can make great profits, community groups can extort them into providing amenities. This is of course the worst in high-income areas: every abuse of power is worse when committed by people who are already powerful. But the poor can learn to do it just the same, and this is what happens in San Francisco; TechCrunch has a comprehensive article about various abuses, by San Franciscans of all social classes, culminating in the violent protests against the Google shuttles, and in many cases, the key to the abuse was the community’s ability to veto private developments.
The risk, of course, is displacement. As the gap between the regulated and market rent grows, landlords have a greater incentive to harass regulated tenants into leaving. This is routine in New York and San Francisco. Community groups respond by attacking such harassment individually, which amounts to supporting additional tenant protections. In California, this is the debate over the Ellis Act. The present housing shortages are such that supporting measures that would lower the market rent has no visible short-term benefits, and may even backfire, if a small rent-controlled building is replaced by a large unregulated building.
So with rent controls, community groups have every incentive to support restrictive zoning, and none to support additional development. With market pricing, the opposite is the case. What of low-income city residents’ access to housing, then? Daniel mentions housing subsidies as a necessity for the poor. To be honest, I don’t see the purpose, outside land-constrained cities like Hong Kong and Singapore. If it is possible through supply saturation to cut rents to levels that are affordable to families making not much more than the poverty line, say 133% of the US poverty line, the Medicaid threshold, then direct cash benefits are better. In the ongoing debate over a guaranteed minimum income, the minimum should be slightly higher than the US poverty line, which is lower as a proportion of GDP per capita than most other developed countries’ poverty lines, as seen in the government programs with slightly higher limits, led by Medicaid.
Leftists have spent decades arguing for state involvement in health care and education – not just cash benefits, but either state provision, or state subsidies combined with some measure of cost control. There are many arguments, but the way I understand them, none applies to housing:
1. Positive externalities: Ed Glaeser has noted that if some people in a metro area get more education then there is higher income growth even for other people in the area. In health care, there are issues like herd immunity.
2. Very long-term benefits: if college is as expensive as it is in the US today, it takes many years for graduates’ extra incomes to be worth the debt. With health care, the equivalent is preventive care. When benefits take so much time to accrue, first some people face poverty traps and don’t have the disposable income today to invest in their own health and education, and second, the assumptions of rational behavior in classical economics are less true.
3. Natural monopolies outside large cities: hospitals, schools, and universities have high fixed capital costs, so there can only be sufficient competition in very large cities. The same is of course true of rail transit.
4. Asymmetric information: students and parents can’t know easily whether a school is effective, and patients face the same problem with doctors; short-term satisfaction surveys, such as student evaluations, may miss long-term benefits, and are as a result very unpopular in academia.
With housing, we instead have competitive builder markets everywhere, no appreciable benefits to having your neighbor get a bigger or better apartment, and properties that can be evaluated by viewing them.
The only question is what to do in the transition from the present situation to market pricing. This is where a limited amount of protection can be useful. For example, rent controls could be relaxed into a steady annual gain in the maximum allowed real rent. While market-rate housing remains expensive, public housing is a stopgap solution, and although it should be awarded primarily based on need rather than how long one has lived in the city, a small proportion should be set aside to people in rent-controlled small buildings that were replaced by new towers. None of this should be a long-term solution, but in the short run, this may guarantee the most vulnerable tenants a soft landing.
What this is not, however, is a workable compromise. Community organizations are not going to accept any zoning reform that lets in people who are members of out-groups. They have no real reason to negotiate in good faith; they can negotiate in bad faith as a delaying tactic, which has much the same effect as present zoning regimes. What they want is not just specific amenities, but also the power to demand more in the future; it’s precisely this power that ensures the neighborhoods that are desirable to outsiders are unaffordable to them. What they want is a system in which their political connections and social networks are real resources. A city that welcomes newcomers is the exact opposite. Expensive housing is ultimately not a market failure; it’s a political failure.
# Suburban Geography and Transit Modes
A post on Let’s Go LA from last year, about different suburban development patterns in different regions of the US, praises Los Angeles’s suburbs for having an arterial grid that allows some density and permits frequent bus service. The Northeast, in contrast, has a hierarchical system, of town centers surrounded by fractured streets and cul-de-sacs, at much lower density. This is how Los Angeles’s urban area has the highest standard density in the US, and one of the highest weighted densities, nearly tying San Francisco for second place after New York. It sounds like a point in favor of Los Angeles, but missing from the post is an analysis of how Rust Belt suburban development patterns reinforce prewar transit. Briefly, Western US grids are ideal for arterial buses, Northeastern town centers are ideal for commuter rail, which used to serve every town.
For a Northeastern example, the post brings up Attleboro as a historic town center. Look at the image and notice the walkable grid and development near the train station, although one quadrant of the station radius is taken up by parking. Attleboro is in fact the town with the oldest development on the Northeast Corridor between Boston and the Providence conurbation, and the only one that, when taking the train between Boston and Providence, I’d be able to see development in from the train. Sharon and Mansfield, both developed decades later, do not have as strong town centers. But conversely, many town centers similar to Attleboro’s exist in the Northeast: Framingham, Norwalk, Tarrytown/Sleepy Hollow, Huntington, Morristown, Paoli.
Now, a careful look at the specific examples of Norwalk and Huntington will show that the most walkable development is not necessarily at the train station. In both suburbs, the old town center is where the original road goes – Northern Boulevard and its eastern extensions in Long Island, the Boston Post Road in Connecticut. Huntington has a second center around the LIRR station; Norwalk has a much smaller second center around the South Norwalk Metro-North station. For the most part, the railroads went close enough to the older roads that the town center is the same, as is the case especially in Attleboro, Tarrytown, and Paoli, and in those cases, commuter rail can at least in principle serve jobs at the suburban town center.
This boils down to the difference between optimal bus and rail networks. Buses love grids: they typically serve the scale of a single city and its inner suburbs, and there it’s feasible to provide everywhere-to-everywhere service, which grids are optimal for. For the suburbs, this breaks down. Buses on uncongested arterial roads are still surface transit; an average speed of 30 km/h is aspirational, and that is for suburbs, not dense urban neighborhoods. On a road where the bus can average 30, cars can average 50, and cars can also use expressways without splitting frequency between different suburban destinations, speeding their journeys up greatly. Meanwhile, commuter rail can, depending on stop spacing, average 50-60 km/h easily, and an aggressive timetable can cross 80 if the stop spacing is relatively express.
There is no such thing as a rapid transit grid. Subway networks almost invariably look like a central mesh, often containing a circumferential line, with spokes radiating out of it in all directions. Mexico City has a larger mesh, approximating a subway grid, but its outer ends again look hub-and-spoke. Counting commuter rail, the hub-and-spoke system is as far as I can tell universal, with the exception of highly polycentric metro areas like the Ruhr. The spokes are rarely clean: they often cross each other (see for example the London Underground to scale). But looking at a city’s rail transit map, you’ll almost always be able to tell where the CBD is, where the inner-urban neighborhoods are, and where the outer-urban and suburban areas are.
At this distance, then, having a bus-friendly grid doesn’t matter much. What matters is having a good network of historical rights-of-way that can be used for regional rail, and a preexisting pattern of development following these lines and their junctions. In the US, the older cities have this, whereas the newer ones do not. In a suburb like Attleboro, good transit means good regional rail, with high all-day frequency, and a network of feeder buses timed to meet the trains. Grids aren’t especially useful for that.
And this is why, despite being so dense, Los Angeles has so little transit usage. Its street network is set up for bare-bones public transit, usable by people who can commute two hours in each direction and will never get cars. Because it was a medium-size city when its car ownership exploded, it doesn’t have as many town centers; its density is uniform. It has a higher weighted density than the Rust Belt outside New York, but its weighted-to-standard density ratio is much lower than those of Philadelphia, Boston, and Chicago. (It barely trails Washington, which has fewer town-center suburbs than the Rust Belt, but made an effort to actually build them around Metro; its Tarrytowns have Metro service rather than infrequent commuter rail.)
The optimal urban geography for urban transit is not the same as that for suburban transit, and the optimal street network for surface transit is not the same as that for rapid transit. Los Angeles could potentially excel at surface urban transit, but there’s only so much surface transit can provide the backbone of public transportation in a city. It has a handful of strong lines for rapid transit, and that’s a serious problem, which a grid won’t really solve. | 2016-10-25 06:36:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2786295413970947, "perplexity": 2330.9039904434903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719960.60/warc/CC-MAIN-20161020183839-00226-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://knowing.net/posts/2007/07/linux-multitouch-like-surface-but-probably-less-than-10k/ | # Linux Multitouch: Like Surface, But Probably Less than \$10K
Fri 13 July 2007
Microsoft's [Surface is estimated to cost \$$10K](https://www.microsoft.com/en-us/surface) in its first incarnation. I've already had an interesting conversation with an entrepreneur friend who has an application that he thinks would be fantastic on Surface, but \$$10K for the hardware makes it a non-starter. Although I'm sure Surface will eventually cost less, I'll be talking to my friend about \<a href="http://wearables.unisa.edu.au/mpx/"" target="_blank" rel="noopener noreferrer">Multi-Pointer X Server, which has just added multi-touch support. Heck, might be my first commercial Mono project... | 2022-05-22 14:50:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4509263336658478, "perplexity": 14069.511068863116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00446.warc.gz"} |
https://www.albert.io/learn/ap-physics-1-and-2/question/red-photon | Limited access
Consider the transitions shown in the energy level diagram.
If all of the transitions produce a photon in the visible band of the electromagnetic spectrum, which transition is most likely to produce a red photon?
A
$A$
B
$B$
C
$C$
D
$D$
Select an assignment template | 2017-04-30 05:03:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37575870752334595, "perplexity": 986.7526142892035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124299.47/warc/CC-MAIN-20170423031204-00367-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/640724/functions-of-unbounded-operators-do-they-commute-or-not | # Functions of unbounded operators: do they commute or not?
Given two unbounded commuting self-adjoint operators $A$ and $B$. Then all bounded Borel functions of $A$ and $B$ commute (in the sense that all the projections in their associated projection-valued measures commute). But what is when we have an unbounded function? More precise, for example, do $g(A)$ and $f(A)$ commute, where $f$ is a bounded Borel function and $g$ a unbounded one (e.g. $g(x)=x$).
• Have a look at Rudin's functional analysis text. I believe the answer is true even for unbounded functions (at least continuous ones). Rudin has a good treatment for the unbounded functional calculus. – voldemort Jan 16 '14 at 18:01
You should specify if the operators commute in strong or weak sense. By definition, s.a. operators commute strongly iff all spectral projections $E_A(\cdot)$ and $E_B(\cdot)$ commute.
But then every spectral projection of $f(A)$ $E_{f(A)}(M)=\chi_M(f(A))=\int\chi_M(f(\cdot))dE_A(\cdot)=\int\chi_{f^{-1}(M)}dE_A=E_A(f^{-1}(M))$ commutes with $E_{g(B)}(N)=E_B(g^{-1}(N))$ for every Borel $M,N\subseteq\mathbb C.$ | 2019-07-17 16:20:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883671402931213, "perplexity": 263.25804977511496}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525355.54/warc/CC-MAIN-20190717161703-20190717183703-00347.warc.gz"} |
https://www.esaral.com/q/an-aluminium-vessel-of-mass-0-5-kg-29523 | # An aluminium vessel of mass 0.5 kg
Question:
An aluminium vessel of mass $0.5 \mathrm{~kg}$ contains $0.2 \mathrm{~kg}$ of water at $20^{\circ} \mathrm{C}$. A block of iron of mass $0.2 \mathrm{~kg}$ at $100^{\circ} \mathrm{C}$ is gently put into the water. Find the equilibrium temperature of the mixture. Specific heat capacities of aluminium, iron and water are $910 \mathrm{~J} / \mathrm{kg}-\mathrm{K}, 470 \mathrm{~J} / \mathrm{kg}-\mathrm{K}$ and $4200 \mathrm{~J} / \mathrm{kg}-\mathrm{K}$ respectively.
Solution: | 2023-01-31 10:28:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41901835799217224, "perplexity": 640.6833691111035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00530.warc.gz"} |
http://hackage.haskell.org/package/amazonka-iam-1.6.1/docs/Network-AWS-IAM-AddUserToGroup.html | amazonka-iam-1.6.1: Amazon Identity and Access Management SDK.
Copyright (c) 2013-2018 Brendan Hay Mozilla Public License, v. 2.0. Brendan Hay auto-generated non-portable (GHC extensions) None Haskell2010
Description
Adds the specified user to the specified group.
Synopsis
# Creating a Request
Arguments
:: Text autgGroupName -> Text autgUserName -> AddUserToGroup
Creates a value of AddUserToGroup with the minimum fields required to make a request.
Use one of the following lenses to modify other fields as desired:
• autgGroupName - The name of the group to update. This parameter allows (per its regex pattern ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
• autgUserName - The name of the user to add. This parameter allows (per its regex pattern ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
See: addUserToGroup smart constructor.
Instances
# Request Lenses
The name of the group to update. This parameter allows (per its regex pattern ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
The name of the user to add. This parameter allows (per its regex pattern ) a string of characters consisting of upper and lowercase alphanumeric characters with no spaces. You can also include any of the following characters: _+=,.@-
# Destructuring the Response
Creates a value of AddUserToGroupResponse with the minimum fields required to make a request.
See: addUserToGroupResponse smart constructor.
Instances | 2019-11-13 09:29:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2009459137916565, "perplexity": 4731.478805459628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667177.24/warc/CC-MAIN-20191113090217-20191113114217-00229.warc.gz"} |
https://math.stackexchange.com/questions/1716913/how-should-i-find-the-argument-when-a-equals-zero-with-complex-numbers | # How should I find the argument when a equals zero with complex numbers?
I know how to plot an Argand diagram and I think I know how to find the argument usually, but I've done some looking around to be certain of my answers and found what seems to be conflicting information regarding the argument of purely imaginary numbers (for example, $-2i$).
My textbook gives the example of $\arg(-5j) = -\pi/2$, which makes sense to me as I was told to take the angle as a negative, measured clockwise and in radians. On the diagram, a pure negative imaginary number would be $-90$ degrees ($-\pi/2$ radians) from the real numbers line.
However, I've done some looking online and found two sources which insist it's $3*\pi/2$, and I'm not sure why. Links provided: http://www.convertalot.com/complex_arithmetical_calculator.html http://www.mathamazement.com/Lessons/Pre-Calculus/06_Additional-Topics-in-Trigonometry/complex-numbers-in-polar-form.html (Example 1 d).
I'm guessing my textbook is correct as it seems to make the most sense and would be the more trustworthy source, but I have a feeling I'm missing something important considering two different sources came to the same conclusion.
• $\frac{3\pi}{2} = \frac{-\pi}{2}$ subtract $2 \pi$ from $\frac{3\pi}{2}$ – bigfocalchord Mar 28 '16 at 9:48
• Would there be a preferred answer? Wouldn't the different value of 3pi/2 and -pi/2 cause problems? – Dalekcaan1963 Mar 28 '16 at 9:55
• There really isn't that big of a problem. As long as we know we are working in modulo $2\pi$ per se, we know exactly which angle the argument corresponds to. It merely is a matter of convenience. The intervals $[0,2\pi)$ and $(-\pi,\pi]$ are much easier to work with than say, $[\pi/4,\pi/4+2\pi).$ – quasicoherent_drunk Mar 28 '16 at 10:17
We had a little chat about this in our class as well, because the $arg(z)$ function was defined as $\theta$ for which $\dfrac{z}{|z|}=e^{i\theta}$, but then we know that $e^{i\theta}$ is a periodic function, because it can be written as $isin(\theta)+cos(\theta)$, which are $2\pi$-periodic functions. So there is still a huge argument about this, and some textbooks like to work with $[0,2\pi)$ while others like to work with $[-\pi,\pi)$, because regardless of which interval you take, the value of $\theta$ is uniquely determined if the length of the interval is exactly $2\pi$. I,like you, normally work with the latter , but there are some others like your sources who work with the former. You think that $90^o$ counter clockwise is like rotating backward, hence you use the minus sign to denote that. On the other hand, the people who have designed the links you have visited must have thought that it's the same as going $270^o$ degrees forward, hence they gave it $3\pi/2$ argument. It's all a matter of taste, there is no problem with either approach otherwise. However I have noticed that the former of the two approaches is more popular. | 2020-04-02 11:02:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8252900838851929, "perplexity": 192.90722660604345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506870.41/warc/CC-MAIN-20200402080824-20200402110824-00323.warc.gz"} |
https://socratic.org/questions/how-do-you-write-an-equation-for-a-cosine-function-with-an-amplitude-of-2-3-a-pe | # How do you write an equation for a cosine function with an amplitude of 2/3, a period of pi, and a vertical shift of 2 units up?
Jul 25, 2018
Equation is color(indigo)(y = 2/3 cos(2x) + 2
#### Explanation:
Standard form of the cosine function is $y = A \cos \left(B x - C\right) + D$
Amplitude $= | A | = \frac{2}{3}$
Period $= \frac{2 \pi}{|} B | = \pi$
$\therefore B = 2$
Phase Shift $= - \frac{C}{B} = 0$
Hence $C = 0$
Vertical Shift $= D = 2$
Hence the equation is color(indigo)(y = 2/3 cos(2x) + 2
graph{2/3 cos (2x) + 2 [-10, 10, -5, 5]} | 2021-04-13 08:10:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9388800859451294, "perplexity": 3203.526211291961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072175.30/warc/CC-MAIN-20210413062409-20210413092409-00393.warc.gz"} |
https://codereview.stackexchange.com/questions/236203/pythonic-attempt-at-conversion-of-length/236311 | # Pythonic attempt at conversion of length
I'm trying to learn Python, so I started with a program to convert length units, mixing imperial and metric. First I wrote a bunch of small simple functions, but then I wanted to solve the issue of having an intuitive way of calling one function with some easy to remember parameter values in order to convert from any unit to any other unit, as opposed to having to call a bunch of different function names.
Does each instance of Convert remain in memory when I am using __new__ and not actually returning an instance of the class? For example, since only a float is stored in 'x' at the end, is the instance of convert that was created destroyed?
Is this approach valid overall? It feels like I've done something a little strange here.
Note: The code works on both Python 3 and 2.
class Convert(object):
def __new__(self, val, unit_in, unit_out):
convertString = '%s_to_%s' % (unit_in, unit_out)
try:
convertFunction = getattr(self, convertString)
except:
return None
return convertFunction(self, val)
def mm_to_cm(self, millimeters):
return millimeters / 10.0
def mm_to_in(self, millimeters):
return ((millimeters / 1000.0) / 0.9144) * 36.0
def mm_to_ft(self, millimeters):
return ((millimeters / 1000.0) / 0.9144) * 3.0
def mm_to_m(self, millimeters):
return millimeters / 1000.0
def cm_to_mm(self, centimeters):
return centimeters * 10.0
def cm_to_in(self, centimeters):
return ((centimeters / 100.0) / 0.9144) * 36.0
def cm_to_ft(self, centimeters):
return ((centimeters / 100.0) / 0.9144) * 3.0
def cm_to_m(self, centimeters):
return centimeters / 100.0
def in_to_mm(self, inches):
return (((inches / 12.0) / 3.0) / (1.0 / 0.9144)) * 1000.0
def in_to_cm(self, inches):
return (((inches / 12.0) / 3.0) / (1.0 / 0.9144)) * 100.0
def in_to_ft(self, inches):
return inches / 12.0
def in_to_m(self, inches):
return (((inches / 12.0) / 3.0) / (1.0 / 0.9144))
def ft_to_mm(self, feet):
return (feet / 3.0) / (1.0 / 0.9144) * 1000.0
def ft_to_cm(self, feet):
return (feet / 3.0) / (1.0 / 0.9144) * 100.0
def ft_to_in(self, feet):
return feet * 12.0
def ft_to_m(self, feet):
return (feet / 3.0) / (1.0 / 0.9144)
def m_to_mm(self, meters):
return meters * 1000.0
def m_to_cm(self, meters):
return meters * 100.0
def m_to_in(self, meters):
return (meters / 0.9144) * 36.0
def m_to_ft(self, meters):
return (meters / 0.9144) * 3.0
print(Convert( 1.0, 'ft', 'in'))
print(Convert(12.0, 'in', 'ft'))
print(Convert( 0.3048, 'm', 'ft'))
x = Convert(3, 'ft', 'm')
print(x)
exit()
output:
12.0
1.0
1.0
0.9144
Firstly as highlighted by Tlomoloko using a class for this is really strange.
However I think the both of your are missing something rather important. If you wanted to hand write every SI prefix for metres alone then you'll rack up an unmaintainable amount of code. The table contains 21 prefixes, such as no prefix - m, centi - cm, and kilo - km. Whilst I'm sure many conversions you'll think are ridiculous. Who needs to convert from yoctometres to yottametres? But what's the harm in supporting it, if you allow something reasonable like zetta to yotta?
To write the conversions for all SI prefixes would require only $$\\binom{21+1}{2}\$$ or 231 bespoke functions. Which would be absolutely ridiculous to write by hand.
And so the solution to this is to have an intermarry value that you always convert to or from. And since all of your existing functions are nice simple multiplications or divisions we can simply assign a single value for each unit.
This may be a bit hard to visualize, and so we'll run through some examples.
### Example
• 1 cm to mm
1 cm = 0.01 m
1 mm = 0.001 m
First we convert 1 cm to metres. This is as simple as $$\1 \times 0.01\$$. Afterwards we convert from metres to millimetres simply as $$\\frac{1 \times 0.01}{0.001}\$$.
Which results in 10.
• 1' to inches
1' = 0.3048 m
1" = 0.0254 m
First we convert 1' to meters. This is as simple as $$\1 \times 0.3048\$$. Afterwards we convert from metres to foot simply as $$\\frac{1 \times 0.3048}{0.0254}\$$.
Which results in 12.
### Code
CONVERSIONS = {
'm': 1,
'cm': 0.01,
'mm': 0.001,
'in': 0.0254,
'ft': 0.3048,
}
def convert(value, unit_in, unit_out):
return value * CONVERSIONS[unit_in] / CONVERSIONS[unit_out]
print(convert( 1.0, 'ft', 'in'))
print(convert(12.0, 'in', 'ft'))
print(convert(0.3048, 'm', 'ft'))
print(convert(3, 'ft', 'm' ))
Now the code's not perfect. If you run it you should instantly notice that it outputs some ugly 12.000000000000002 rather than 12.0. Yuck.
And so we can convert your code to use fractions.Fraction. However this will print 1143/1250 rather than 0.9144. Since I dislike getting 12.0 rather than 12, we can fix these at the same time.
from typing import Dict, Union
from fractions import Fraction
Number = Union[int, float]
CONVERSIONS: Dict[str, Fraction] = {
'm': Fraction('1'),
'cm': Fraction('0.01'),
'mm': Fraction('0.001'),
'in': Fraction('0.0254'),
'ft': Fraction('0.3048'),
}
def to_number(value: Fraction) -> Number:
if value % 1:
return float(value)
else:
return int(value)
def convert(value: Number, unit_in: str, unit_out: str) -> Number:
if __name__ == '__main__':
print(convert( 1.0, 'ft', 'in'))
print(convert(12.0, 'in', 'ft'))
print(convert( 0.3048, 'm', 'ft'))
print(convert( 3, 'ft', 'm'))
• I was using a class as sort of a stand-in for a struct. My posted code did not entirely reflect my intent as I was still studying the notion of "data classes" docs.python.org/3/library/dataclasses.html I appreciate your criticisms and approach and will study the fractions module. – AlanK Feb 9 '20 at 0:04
• @AlanK I don't think using any of a class, a dataclass, or a named tuple would be beneficial here. Please understand that any deception in a question can lead to drastically different answers. In the future please post your finished code verbatim for the best experience. – Peilonrayz Feb 9 '20 at 20:35
• Deception? I think you should choose another word. I said my code did not entirely reflect my intent because I was still studying data classes. To put it another way, I didn't understand data classes yet, so my example code did not reflect the correct implementation of one. That is not "deception". Deception suggests an intent to mislead, and I certainly had no such intent. I don't know if English is your first language or not, but in common parlance, to call someone "deceptive" is about the same as calling them a liar. Harsh. – AlanK Feb 11 '20 at 9:08
• @AlanK At no point did I call you deceptive or a liar. – Peilonrayz Feb 11 '20 at 9:44
I think that it's a bit weird to use classes as a way to return values, I don't know if that's common in python, but anyways. The problem with your code, in my opinion, is that if you are converting a unit to another, ex: cm to in, you don't need to create a class that will store information about other units, like: inches to m. That ends up happening in your code whenever you do print(Convert(val, unit_in, unit_out)). Here's my solution:
class LengthUnit():
# This class stores the unit's ratio to all the others
def __init__(self, to_mm, to_cm, to_m, to_in, to_ft):
self.to_mm = to_mm
self.to_cm = to_cm
self.to_m = to_m
self.to_in = to_in
self.to_ft = to_ft
# This function returns the conversion of the value
# The value is divided by the first number in the ratio,
# and multiplied by the second number (it's a tuple).
def convert_unit(val, unit_in, unit_out):
unit_out = 'to_{}'.format(unit_out)
try:
return val / getattr(unit_in, unit_out)[0] * getattr(unit_in, unit_out)[1]
except AttributeError:
return 'Invalid Units'
if __name__ == '__main__':
# In this dictionary, you store all the unit classes
length_units = {'mm' : LengthUnit(to_mm= (1.0, 1.0), to_cm= (0.1, 1.0), to_m= (0.001, 1.0), to_in= (25.4, 1), to_ft= (305, 1)),
'cm' : LengthUnit(to_mm= (1.0, 10), to_cm= (1.0, 1.0), to_m= (100.0, 1.0), to_in= (2.54, 1), to_ft=(30.48, 1.0)),
'm' : LengthUnit(to_mm= (1.0, 1000), to_cm= (1.0, 100.0), to_m= (1.0, 1.0), to_in= (1.0, 39.37), to_ft=(1.0, 3.281)),
'in' : LengthUnit(to_mm= (1.0, 25.4), to_cm= (1.0, 2.54), to_m= (39.37, 1.0), to_in=(1.0, 1.0), to_ft= (12.0, 1)),
'ft' : LengthUnit(to_mm= (1.0, 305), to_cm= (1.0, 30.48), to_m= (3.281, 1.0), to_in= (1, 12.0), to_ft= (1.0, 1.0))
}
print(convert_unit(5, length_units['cm'], 'm'))
print(convert_unit(10, length_units['ft'], 'mm'))
print(convert_unit(23, length_units['in'], 'm'))
# If a non-defined unit is parsed, a error message is printed.
print(convert_unit(5, length_units['mm'], 'miles'))
This is all a rough sketch, you can adjust the values if you want them to be more exact, or change the function if you want it to print the values instead of returning them. Hope it was helpful! | 2021-06-24 17:38:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38856059312820435, "perplexity": 8849.241497472041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488556482.89/warc/CC-MAIN-20210624171713-20210624201713-00386.warc.gz"} |
http://www.pdmi.ras.ru/preprint/1998/98-25.html | # PREPRINT 25/1998
I. PANIN and A. SMIRNOV
## ON A THEOREM OF GROTHENDIECK
This preprint was accepted November 11, 1998
Contact: I. PANIN and A. SMIRNOV
ABSTRACT:
It is considered a smooth projective morphism $p:T\to S$
to a smooth variety $S$. It is proved, in particular, the
following result. The total direct image $Rp_*(\Bbb Z/n\Bbb Z)$
of the constant \'etale sheaf $\Bbb Z/n\BZ$ is locally for
Zarisky topology quasi-isomorphic to a bounded complex
$\L$ on $S$ consisting of locally constant constructible
\'etale $\Bbb Z/n\Bbb Z$-module sheaves.
Back to all preprints
Back to the Petersburg Department of Steklov Institute of Mathematics
| 2017-10-22 02:41:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9702816009521484, "perplexity": 1982.700129093109}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825057.91/warc/CC-MAIN-20171022022540-20171022042540-00575.warc.gz"} |
https://www.techwhiff.com/issue/you-are-trying-to-decide-whether-to-repaint-your-house--546719 | ### Why might political parties write platforms before presidential elections
why might political parties write platforms before presidential elections...
### What is the volume of a cube with a side length of 11 inches? A. 33 in3 B. 10,648 in3 C. 1331 in3 D. 121 in3
What is the volume of a cube with a side length of 11 inches? A. 33 in3 B. 10,648 in3 C. ...
### What the square root of 65 rounded to the nearest tenth?
What the square root of 65 rounded to the nearest tenth?...
### The expression 3n^2+n+2. What is the coefficient of n?
the expression 3n^2+n+2. What is the coefficient of n?...
### Pls help Three same-sized cubes are glued together as shown below. If the side length of each cube is 8 inches, what is the surface area of the figure?
Pls help Three same-sized cubes are glued together as shown below. If the side length of each cube is 8 inches, what is the surface area of the figure?...
### Which of the following? Select one: a. Synergistic dominance b. Controlled instability c. Autogenic inhibition d. Functional flexibility
which of the following? Select one: a. Synergistic dominance b. Controlled instability c. Autogenic inhibition d. Functional flexibility...
### The Atlantic Provinces are like New England because _____. a.glaciers shaped the landscape b.they both have a warm climate c.rich mineral deposits lie across the area d.they are both sparsely populated
The Atlantic Provinces are like New England because _____. a.glaciers shaped the landscape b.they both have a warm climate c.rich mineral deposits lie across the area d.they are both sparsely populated...
### Why did Christopher Columbus and his crew run out of food on their voyage in 1492?
Why did Christopher Columbus and his crew run out of food on their voyage in 1492?...
### What is the answer to the inequality 3t + 9 ≥ 15
what is the answer to the inequality 3t + 9 ≥ 15...
### What color would a bromocresol green solution be at ph=7 ?
What color would a bromocresol green solution be at ph=7 ?...
### A scientist hypothesizes that the lack of chinook salmon is negatively affecting the orca population. Would this hypothesis need to be tested by another scientist in order to become a theory? Why or why not? A) It is not done because scientist do not make mistakes. B) It is not done because scientists do not read each other’s work. C) It is done because peer review and the consideration of an alternative explanation should be considered. D) It is done because peer review is not mandatory when de
A scientist hypothesizes that the lack of chinook salmon is negatively affecting the orca population. Would this hypothesis need to be tested by another scientist in order to become a theory? Why or why not? A) It is not done because scientist do not make mistakes. B) It is not done because scientis...
### Where are valence electrons located in an atom? A. In all energy levels B. In the outermost energy level C. In the lowest energy level D. In the nucleus
Where are valence electrons located in an atom? A. In all energy levels B. In the outermost energy level C. In the lowest energy level D. In the nucleus...
### Who won the Korean War? Give your opinion by checking any boxes that you agree with. The United States won because it stopped the spread of communism to South Korea. South Korea won because it was not conquered by North Korea. North Korea won because it showed its military strength. No one won. The Korean Peninsula had the same borders at the end of the war as it did at the beginning.
Who won the Korean War? Give your opinion by checking any boxes that you agree with. The United States won because it stopped the spread of communism to South Korea. South Korea won because it was not conquered by North Korea. North Korea won because it showed its military strength. No one won. ...
### Side effects can sometimes become
Side effects can sometimes become...
### What type of bond is found between the nitrogen bases of dna
what type of bond is found between the nitrogen bases of dna...
### There are 212.5 grams of sugar in a 2-liter bottle of soda. How much sugar is there in 4.5 bottles? please HELP ME
There are 212.5 grams of sugar in a 2-liter bottle of soda. How much sugar is there in 4.5 bottles? please HELP ME ...
### Which model represents 3x+26=2(3+2x)
which model represents 3x+26=2(3+2x)... | 2023-02-03 21:01:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2433348149061203, "perplexity": 2844.5550685930975}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00383.warc.gz"} |
https://triangle.mth.kcl.ac.uk/?search=au:Pietro%20au:Longhi | Found 2 result(s)
### 01.10.2020 (Thursday)
#### BPS counting with exponential networks (email p.agarwal AT qmul.ac.uk for the zoom link))
Regular Seminar Pietro Longhi (ETH Zurich)
at: 14:00 QMWroom Zoom abstract: Spectral networks compute certain enumerative invariants associated with Hitchin systems, by focusing on the interplay of certain geometric and combinatorial data within them. In physics, spectral networks count BPS states of class S theories through 2d-4d wall crossing. I will describe a 3d-5d uplift of this based on exponential networks, that computes generalized Donaldson-Thomas invariants of toric Calabi Yau threefolds.
### 09.04.2015 (Thursday)
#### Spectral Networks: Extensions and Applications
Regular Seminar Pietro Longhi (Rutgers University)
at: 14:00 QMWroom G.O. Jones 610 abstract: The BPS spectra of Class S theories are among the best understood, thanks in part to a construction known as Spectral Networks. We will review this framework and recent developments of it, and present results obtained through their applications. | 2021-11-30 00:40:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43214792013168335, "perplexity": 6017.325122206879}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358847.80/warc/CC-MAIN-20211129225145-20211130015145-00203.warc.gz"} |
https://forum.dynare.org/t/oo--mean-from-3rd-order-approximation/5830 | # Oo_.mean from 3rd-order approximation?
The 1st and 2nd order approximation give
but 3rd order approximation doesn’t. How to get the theoretical mean when I use 3rd order?
Most probably because you are requesting theoretical moments, which are not yet implemented at
If you use the
option, the field will be there.
Dear Prof. Pfeifer,
```stoch_simul(order = 3, periods=0, nograph, irf=0); ```
However, I still couldn’t find `oo_.mean`Did I make a mistake using the “periods” option? | 2022-05-18 23:46:01 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8508501052856445, "perplexity": 3663.5849788477344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00054.warc.gz"} |
https://isabelle.in.tum.de/repos/isabelle/rev/b3b61ea9632c | author paulson Wed, 11 Jul 2001 17:55:46 +0200 changeset 11410 b3b61ea9632c parent 11409 f0bcb3dfa7d6 child 11411 c315dda16748
indexing
--- a/doc-src/TutorialI/Sets/sets.tex Wed Jul 11 15:21:07 2001 +0200
+++ b/doc-src/TutorialI/Sets/sets.tex Wed Jul 11 17:55:46 2001 +0200
@@ -31,15 +31,15 @@
HOL's set theory should not be confused with traditional, untyped set
theory, in which everything is a set. Our sets are typed. In a given set,
all elements have the same type, say~$\tau$, and the set itself has type
-\isa{$\tau$~set}.
+$\tau$~\tydx{set}.
-We begin with \bfindex{intersection}, \bfindex{union} and
-\bfindex{membership relation}, there is a symbol for its negation. These
+We begin with \textbf{intersection}, \textbf{union} and
+\textbf{membership relation}, there is a symbol for its negation. These
points can be seen below.
-Here are the natural deduction rules for intersection. Note the
-resemblance to those for conjunction.
+Here are the natural deduction rules for \rmindex{intersection}. Note
+the resemblance to those for conjunction.
\begin{isabelle}
\isasymlbrakk c\ \isasymin\ A;\ c\ \isasymin\ B\isasymrbrakk\
\isasymLongrightarrow\ c\ \isasymin\ A\ \isasyminter\ B%
@@ -50,7 +50,8 @@
\rulename{IntD2}
\end{isabelle}
-Here are two of the many installed theorems concerning set complement.
+Here are two of the many installed theorems concerning set
+complement.\index{complement!of a set}%
Note that it is denoted by a minus sign.
\begin{isabelle}
(c\ \isasymin\ -\ A)\ =\ (c\ \isasymnotin\ A)
@@ -126,8 +127,8 @@
{\isasymLongrightarrow}\ A\ =\ B
\rulename{set_ext}
\end{isabelle}
-Extensionality is often expressed as
-$A=B\iff A\subseteq B\conj B\subseteq A$.
+Extensionality can be expressed as
+$A=B\iff (A\subseteq B)\conj (B\subseteq A)$.
The following rules express both
directions of this equivalence. Proving a set equation using
\isa{equalityI} allows the two inclusions to be proved independently.
@@ -145,8 +146,8 @@
\subsection{Finite Set Notation}
-\indexbold{sets!notation for finite}\index{*insert (constant)}
-Finite sets are expressed using the constant \isa{insert}, which is
+\indexbold{sets!notation for finite}
+Finite sets are expressed using the constant \cdx{insert}, which is
a form of union:
\begin{isabelle}
insert\ a\ A\ =\ \isacharbraceleft a\isacharbraceright\ \isasymunion\ A
@@ -232,8 +233,8 @@
\isasymand\ p{\isasymin}prime\ \isasymand\
q{\isasymin}prime\isacharbraceright"
\end{isabelle}
-The proof is trivial because the left and right hand side
-of the expression are synonymous. The syntax appearing on the
+The left and right hand sides
+of this equation are identical. The syntax used in the
left-hand side abbreviates the right-hand side: in this case, all numbers
that are the product of two primes. The syntax provides a neat
way of expressing any set given by an expression built up from variables
@@ -370,7 +371,7 @@
A\cup B$is replaced by$x\in A\vee x\in B$. The internal form of a comprehension involves the constant -\isa{Collect},\index{*Collect (constant)} +\cdx{Collect}, which occasionally appears when a goal or theorem is displayed. For example, \isa{Collect\ P} is the same term as \isa{\isacharbraceleft x.\ P\ x\isacharbraceright}. The same thing can @@ -383,25 +384,22 @@ \isa{Bex\ A\ P}\index{*Bex (constant)} is \isa{\isasymexists z\isasymin A.\ P\ x}. For indexed unions and intersections, you may see the constants -\isa{UNION}\index{*UNION (constant)} and -\isa{INTER}\index{*INTER (constant)}\@. -The internal constant for$\varepsilon x.P(x)$is -\isa{Eps}\index{*Eps (constant)}. - +\cdx{UNION} and \cdx{INTER}\@. +The internal constant for$\varepsilon x.P(x)$is~\cdx{Eps}. We have only scratched the surface of Isabelle/HOL's set theory. -One primitive not mentioned here is the powerset operator -{\isa{Pow}}. Hundreds of theorems are proved in theory \isa{Set} and its +Hundreds of theorems are proved in theory \isa{Set} and its descendants. \subsection{Finiteness and Cardinality} \index{sets!finite}% -The predicate \isa{finite} holds of all finite sets. Isabelle/HOL includes -many familiar theorems about finiteness and cardinality -(\isa{card}). For example, we have theorems concerning the cardinalities -of unions, intersections and the powerset:\index{cardinality} +The predicate \sdx{finite} holds of all finite sets. Isabelle/HOL +includes many familiar theorems about finiteness and cardinality +(\cdx{card}). For example, we have theorems concerning the +cardinalities of unions, intersections and the +powerset:\index{cardinality} % \begin{isabelle} {\isasymlbrakk}finite\ A;\ finite\ B\isasymrbrakk\isanewline @@ -426,9 +424,9 @@ \begin{warn} The term \isa{finite\ A} is defined via a syntax translation as an -abbreviation for \isa{A \isasymin Finites}, where the constant \isa{Finites} -denotes the set of all finite sets of a given type. There is no constant -\isa{finite}. +abbreviation for \isa{A \isasymin Finites}, where the constant +\cdx{Finites} denotes the set of all finite sets of a given type. There +is no constant \isa{finite}. \end{warn} \index{sets|)} @@ -454,7 +452,7 @@ \rulename{ext} \end{isabelle} -\indexbold{function updates}% +\indexbold{updating a function}% Function \textbf{update} is useful for modelling machine states. It has the obvious definition and many useful facts are proved about it. In particular, the following equation is installed as a simplification @@ -560,9 +558,11 @@ \rulename{expand_fun_eq} \end{isabelle} % -This is just a restatement of extensionality. Our lemma states -that an injection can be cancelled from the left -side of function composition: +This is just a restatement of +extensionality.\indexbold{extensionality!for functions} +Our lemma +states that an injection can be cancelled from the left side of +function composition: \begin{isabelle} \isacommand{lemma}\ "inj\ f\ \isasymLongrightarrow\ (f\ o\ g\ =\ f\ o\ h)\ =\ (g\ =\ h)"\isanewline \isacommand{apply}\ (simp\ add:\ expand_fun_eq\ inj_on_def)\isanewline @@ -618,10 +618,11 @@ \end{isabelle} \medskip - A function's \textbf{range} is the set of values that the function can +\index{range!of a function}% +A function's \textbf{range} is the set of values that the function can take on. It is, in fact, the image of the universal set under -that function. There is no constant {\isa{range}}. Instead, {\isa{range}} -abbreviates an application of image to {\isa{UNIV}}: +that function. There is no constant \isa{range}. Instead, +\sdx{range} abbreviates an application of image to \isa{UNIV}: \begin{isabelle} \ \ \ \ \ range\ f\ {\isasymrightleftharpoons}\ fUNIV @@ -665,7 +666,8 @@ \end{isabelle} \indexbold{composition!of relations}% -\textbf{Composition} of relations (the infix \isa{O}) is also available: +\textbf{Composition} of relations (the infix \sdx{O}) is also +available: \begin{isabelle} r\ O\ s\ \isasymequiv\ \isacharbraceleft(x,z).\ \isasymexists y.\ (x,y)\ \isasymin\ s\ \isasymand\ (y,z)\ \isasymin\ r\isacharbraceright \rulename{comp_def} @@ -715,20 +717,9 @@ \end{isabelle} It satisfies many similar laws. -%Image under relations, like image under functions, distributes over unions: -%\begin{isabelle} -%r\ \ -%({\isasymUnion}x\isasyminA.\ -%B\ -%x)\ =\ -%({\isasymUnion}x\isasyminA.\ -%r\ \ B\ -%x) -%\rulename{Image_UN} -%\end{isabelle} - - -The \bfindex{domain} and \bfindex{range} of a relation are defined in the +\index{domain!of a relation}% +\index{range!of a relation}% +The \textbf{domain} and \textbf{range} of a relation are defined in the standard way: \begin{isabelle} (a\ \isasymin\ Domain\ r)\ =\ (\isasymexists y.\ (a,y)\ \isasymin\ @@ -743,11 +734,10 @@ \end{isabelle} Iterated composition of a relation is available. The notation overloads -that of exponentiation: +that of exponentiation. Two simplification rules are installed: \begin{isabelle} R\ \isacharcircum\ \isadigit{0}\ =\ Id\isanewline R\ \isacharcircum\ Suc\ n\ =\ R\ O\ R\isacharcircum n -\rulename{relpow.simps} \end{isabelle} \subsection{The Reflexive and Transitive Closure} @@ -810,9 +800,9 @@ \subsection{A Sample Proof} -The reflexive transitive closure also commutes with the converse. -Let us examine the proof. Each direction of the equivalence is -proved separately. The two proofs are almost identical. Here +The reflexive transitive closure also commutes with the converse +operator. Let us examine the proof. Each direction of the equivalence +is proved separately. The two proofs are almost identical. Here is the first one: \begin{isabelle} \isacommand{lemma}\ rtrancl_converseD:\ "(x,y)\ \isasymin \ @@ -856,9 +846,10 @@ \end{isabelle} \begin{warn} -Note that \isa{blast} cannot prove this theorem. Here is a subgoal that -arises internally after the rules \isa{equalityI} and \isa{subsetI} have -been applied: +This trivial proof requires \isa{auto} rather than \isa{blast} because +of a subtle issue involving ordered pairs. Here is a subgoal that +arises internally after the rules +\isa{equalityI} and \isa{subsetI} have been applied: \begin{isabelle} \ 1.\ \isasymAnd x.\ x\ \isasymin \ (r\isasyminverse )\isactrlsup *\ \isasymLongrightarrow \ x\ \isasymin \ (r\isactrlsup *)\isasyminverse @@ -867,7 +858,7 @@ %\isasymLongrightarrow \ x\ \isasymin \ (r\isasyminverse )\isactrlsup * \end{isabelle} \par\noindent -We cannot use \isa{rtrancl_converseD}\@. It refers to +We cannot apply \isa{rtrancl_converseD}\@. It refers to ordered pairs, while \isa{x} is a variable of product type. The \isa{simp} and \isa{blast} methods can do nothing, so let us try \isa{clarify}: @@ -888,7 +879,7 @@ \index{relations!well-founded|(}% A well-founded relation captures the notion of a terminating process. -Each \isacommand{recdef}\index{recdef@\isacommand{recdef}} +Each \commdx{recdef} declaration must specify a well-founded relation that justifies the termination of the desired recursive function. Most of the forms of induction found in mathematics are merely special cases of @@ -914,7 +905,7 @@ well-founded induction only in \S\ref{sec:CTL-revisited}. \end{warn} -Isabelle/HOL declares \isa{less_than} as a relation object, +Isabelle/HOL declares \cdx{less_than} as a relation object, that is, a set of pairs of natural numbers. Two theorems tell us that this relation behaves as expected and that it is well-founded: \begin{isabelle} @@ -950,11 +941,12 @@ \rulename{wf_measure} \end{isabelle} -Of the other constructions, the most important is the \textbf{lexicographic -product} of two relations. It expresses the standard dictionary -ordering over pairs. We write \isa{ra\ <*lex*>\ rb}, where \isa{ra} -and \isa{rb} are the two operands. The lexicographic product satisfies the -usual definition and it preserves well-foundedness: +Of the other constructions, the most important is the +\bfindex{lexicographic product} of two relations. It expresses the +standard dictionary ordering over pairs. We write \isa{ra\ <*lex*>\ +rb}, where \isa{ra} and \isa{rb} are the two operands. The +lexicographic product satisfies the usual definition and it preserves +well-foundedness: \begin{isabelle} ra\ <*lex*>\ rb\ \isasymequiv \isanewline \ \ \isacharbraceleft ((a,b),(a',b')).\ (a,a')\ \isasymin \ ra\ @@ -976,10 +968,11 @@ discuss it. \medskip -Induction comes in many forms, including traditional mathematical -induction, structural induction on lists and induction on size. All are -instances of the following rule, for a suitable well-founded -relation~$\prec$: +Induction\index{induction!well-founded|(} +comes in many forms, +including traditional mathematical induction, structural induction on +lists and induction on size. All are instances of the following rule, +for a suitable well-founded relation~$\prec$: $\infer{P(a)}{\infer*{P(x)}{[\forall y.\, y\prec x \imp P(y)]}}$ To show$P(a)$for a particular term~$a$, it suffices to show$P(x)$for arbitrary~$x$under the assumption that$P(y)$holds for$y\prec x\$.
@@ -1001,7 +994,8 @@
For example, the predecessor relation on the natural numbers
is well-founded; induction over it is mathematical induction.
The tail of'' relation on lists is well-founded; induction over
-it is structural induction.
+it is structural induction.%
+\index{induction!well-founded|)}%
\index{relations!well-founded|)}
` | 2023-03-21 12:17:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.984844446182251, "perplexity": 13929.062795348342}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00363.warc.gz"} |
https://isabelle.in.tum.de/repos/isabelle/diff/74e8f703f5f2/doc-src/Inductive/ind-defs.tex | doc-src/Inductive/ind-defs.tex
changeset 6745 74e8f703f5f2 parent 6637 57abed64dc14 child 7829 c2672c537894
--- a/doc-src/Inductive/ind-defs.tex Thu May 27 20:49:10 1999 +0200
+++ b/doc-src/Inductive/ind-defs.tex Fri May 28 11:42:07 1999 +0200
@@ -219,7 +219,7 @@
\end{eqnarray*}
These equations are instances of the Knaster-Tarski theorem, which states
that every monotonic function over a complete lattice has a
-fixedpoint~\cite{davey&priestley}. It is obvious from their definitions
+fixedpoint~\cite{davey-priestley}. It is obvious from their definitions
that $\lfp$ must be the least fixedpoint, and $\gfp$ the greatest.
This fixedpoint theory is simple. The Knaster-Tarski theorem is easy to | 2021-06-18 12:29:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9113717079162598, "perplexity": 5285.196230081685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487636559.57/warc/CC-MAIN-20210618104405-20210618134405-00185.warc.gz"} |
https://www.physicsforums.com/threads/field-transformation-under-an-active-transformation.868267/ | # A Field transformation under an active transformation
1. Apr 22, 2016
### spaghetti3451
Under the infinitesimal translation $x^{\nu} \rightarrow x^{\nu}-\epsilon^{\nu}$,
the field $\phi(x)$ transforms as $\phi_{a}(x) \rightarrow \phi_{a}(x) + \epsilon^{\nu}\partial_{\nu}\phi_{a}(x)$.
I don't understand why the field transforms as above. Let me try to do the math.
The Taylor expansion of $f(x+\delta x)$, where the argument $x+\delta x$ is a $4$-vector and $f$ is a scalar, is $f(x+\delta x)=f(x)+\frac{\partial f}{\partial x^{\nu}}(\delta x)^{\nu} + \dots$
So, $\phi_{a}(x) \rightarrow \phi_{a}(x-\epsilon) = \phi_{a}(x) + (-\epsilon^{\nu})\partial_{\nu}\phi_{a}(x)$.
Now, where did I go wrong?
2. Apr 22, 2016
### JorisL
The translation you applied is $x^\nu\to x^\nu - \epsilon^\nu$.
That's the only difference.
The field transformation you are trying to prove is $\phi_a(x) \to \phi^{\prime}_a(x) \equiv \phi_a(x^\prime)=\phi_a(x+\epsilon)=\phi_a(x) +\epsilon^\nu\partial_\nu\phi_a(x)$.
I find the treatment in Chapter 1 of Freedman and Van Proeyen's book very clear. | 2018-01-22 09:04:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9769047498703003, "perplexity": 580.0610058779805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891196.79/warc/CC-MAIN-20180122073932-20180122093932-00494.warc.gz"} |
http://jasss.soc.surrey.ac.uk/18/4/3.html | ### Abstract
Cooperation is ubiquitous in biological and social systems. Previous studies revealed that a preference toward similar appearance promotes cooperation, a phenomenon called tag-mediated cooperation or communitarian cooperation. This effect is enhanced when a spatial structure is incorporated, because space allows agents sharing an identical tag to regroup to form locally cooperative clusters. In spatially distributed settings, one can also consider migration of organisms, which has a potential to further promote evolution of cooperation by facilitating spatial clustering. However, it has not yet been considered in spatial tag-mediated cooperation models. Here we show, using computer simulations of a spatial model of evolutionary games with organismal migration, that tag-based segregation and homophilic cooperation arise for a wide range of parameters. In the meantime, our results also show another evolutionarily stable outcome, where a high level of heterophilic cooperation is maintained in spatially well-mixed patterns. We found that these two different forms of tag-mediated cooperation appear alternately as the parameter for temptation to defect is increased.
Keywords:
Evolution of Cooperation, Tag, Spatial Structure, Migration, Segregation
### Introduction
1.1
Evolution of cooperation has attracted attention of both biological and social scientists. Cooperation benefits others while incurring cost to the actor. In contrast, defection allows the actor to receive the benefit without paying any cost. Thus, natural selection and social mimetism tend to naturally favor defection in a well-mixed population. It is known that some kind of mechanism to facilitate the regrouping of cooperators, either in a temporal sequence (tit-for-tat) or in a spatial distribution, is needed for cooperation to evolve (Nowak 2006). Among them, kin selection (Hamilton 1964) has the longest history. According to this theory, altruistic genes can survive and spread into a population by helping relatives who share the same genes despite the sacrifice of individuals. A phenotypical tag can be regarded as an indicator of such relatedness although it is a weak and initially arbitrary characteristic. The effect of such a tag has been studied in the context of the evolution of cooperation. Riolo et al. (2001) have shown how the existence of tags strongly promotes cooperation in one shot PD even in the absence of spatial structure, where every agent helps another agent if the distance between their tags is less than a given threshold. Their model assumed that agents with similar tags always cooperate, and therefore cooperation was greatly enhanced in their model. Roberts and Sherratt (2002) showed that, without this assumption, cooperation no longer evolves in the absence of spatial structure. In contrast, analytical studies showed that cooperation can be enhanced by a tag mechanism even without spatial structure, but it requires complicated conditions (Antal et al. 2009; Traulsen & Nowak 2007; Traulsen & Schuster 2003).
1.2
Some studies showed that the presence of a spatial structure can help cooperation to spread when combined with the tag mechanism (Axelrod et al. 2004; Hammond & Axelrod 2006; Hartshorn et al. 2013; Jansen & van Baalen 2006; Shutters & Hales 2013; Spector & Klein 2006; Traulsen & Claussen 2004). Intuitively, restricting cooperation to agents sharing the same tag decreases the exposure to defectors (compared to unconditional cooperation) and increases the frequency of cooperation (compared to a cooperation restricted to the opposite tag). For instance, Spector and Klein (2006) showed in a one-dimensional population structure how cooperation evolves in the presence of the tag mechanism. Other models revealed the same fact by assuming a two-dimensional population structure (Axelrod et al. 2004; Hammond & Axelrod 2006; Hartshorn et al. 2013; Jansen & van Baalen 2006; Shutters & Hales 2013; Traulsen & Claussen 2004). Especially, Traulsen and Claussen (2004) observed a strong segregation between agents with different tags.
1.3
In all of those previous studies reviewed above, agents were embedded in a spatial structure a priori and forced to play games with their local neighbors without any active mobility. In contrast, here we propose a new model in which we let agents to migrate spatially under certain conditions, and allow the strategy for migration to evolve together with the tag-based strategies for game play. This model modification is inspired by the recent finding that contingent migration generally promotes cooperation (Aktipis 2004; Chen et al. 2012; Helbing & Yu 2009; Ichinose et al. 2013; Jiang et al. 2010; Roca & Helbing 2011). To our knowledge, the effect of such migration on tag-mediated cooperation in spatial models has never been discussed before, except in our recent study (Bersini 2014). Furthermore, in most of the earlier tag models, the semantic role of the tags was always fixed beforehand (e.g., an agent cooperates with another agent whose tag is identical or similar), although it would be more natural to assume that such a role should also arise as the result of evolution. The model we propose assumes that the semantic role of tags is subject to evolution, together with the tag threshold for migration and strategies for game play. With this model, we aim to find necessary conditions for the emergence of cooperation and segregation, and also to show how cooperation and segregation co-evolve hand in hand.
### Model
2.1
We developed an agent-based model in which individual agents are distributed over a two-dimensional square lattice and play the PD game with their neighbors. The square lattice is composed of $$L\times L$$ sites with periodic boundary conditions. Each site is either empty or occupied by one agent. Agents can migrate to empty sites which represent spatial regions. Initially, agents are randomly distributed over the square lattice. There are two tags (green and red) for the agents. Half of the randomly selected agents are assigned green and the other half are assigned red. As for the PD game, each agent has two cooperation levels, one is cooperation probability, $$p_{Cs} \in [0, 1]$$, to the opponent whose tag is the same as the focal agent, and the other is that probability, $$p_{Cd} \in [0, 1]$$, to the opponent whose tag is different from the focal agent. As the initial setting, an equal number of agents which respectively have $$(p_{Cs}, p_{Cd}) = (1, 0)$$ or $$(p_{Cs}, p_{Cd}) = (0, 1)$$ is distributed over the space unless otherwise noted. Thus, at the beginning of the simulation half of the agents always cooperate with the identical tag agents and the other half always cooperate with the different tag agents. Moreover, each agent is allowed to migrate to another site depending on the tag states of the neighbors. The preference for the tag is represented by $$\eta\in [-1, 1]$$, which is randomly assigned as the initial setting. $$\eta=-1$$ means the agent is completely heterophilic i.e. the agent is satisfied when surrounded by agents with the other tag. $$\eta = 1$$ means the agent is completely homophilic i.e. the agent is satisfied when surrounded by agents with the same tag. Thus, $$\eta = 0$$ means that the agent has no preference about the tag. Therefore, each agent has three genetic values $$p_{Cs}$$, $$p_{Cd}$$, and $$\eta$$, which are all subject to evolution.
2.2
The population density is given by $$\rho$$ (i.e., the fraction of empty sites is $$1 - \rho$$). Thus, the population size is represented by $$N = L\cdot L\cdot\rho$$. The population density remains constant throughout a simulation run, since agents will never die or born.
2.3
Agents are updated asynchronously in a randomized order. The algorithm for updating an agent consists of the following three phases:
1. Game play.
2.4
A randomly selected agent plays the PD game with its neighbors (within the Moore neighborhood) and accumulates the payoffs resulting from the games. If there are no other agent within the neighborhood, no game is played. In each game, two agents probabilistically decide whether to cooperate or defect simultaneously based on their current strategies. They both obtain payoff $$R$$ for mutual cooperation while $$P$$ for mutual defection. If one selects cooperation while the other selects defection, the former receives the sucker's payoff $$S$$ while the latter receives the highest payoff $$T$$, the temptation to defect. The relationship of the four payoffs is usually $$T > R > P > S$$ in PD games. Following the parameters setting used in the model by Nowak and May (1992), we used $$P = 0$$, $$R = 1$$, and $$S = 0$$, while $$T = 1 + b$$ ($$b > 0$$), and $$b$$ was varied as a key experimental parameter. Since the cooperation levels of each agent are defined by continuous values, the following expected payoff, $$p_i$$, is used instead of the actual payoff of the agent $$i$$, playing with an agent $$j$$.
$$\begin{array}{ll} p_i = (1 + b - b\cdot p_{Csi}) p_{Csj} & \quad \mathrm{if}~(Tag_i=Tag_j)~\mathrm{and}\\ p_i = (1 + b - b\cdot p_{Cdi}) p_{Cdj} & \quad (Tag_i\neq Tag_j)\\ \end{array}$$ (1)
2. Strategy updating.
2.5
After the randomly selected agent plays the PD game with its neighbors, the neighbors also play the game with their own neighbors. Once all the games, including the neighbors' games, have taken place, the focal agent imitates the strategy of the agent that achieved the highest total payoff among its neighbors, including itself (if there is a tie one agent is randomly selected). At the strategy updating, a small mutation occurs to the original three values. The new values are picked up from the Gaussian distribution where the means are the original $$p_{Cs}$$, $$p_{Cd}$$, and $$\eta$$ values and the standard deviation is $$\sigma_s$$ for the strategies and $$\sigma_m$$ for $$\eta$$. If there are no other agents within the neighborhood, the agent inherits its original strategies with the mutations.
3. Migration.
2.6
The agent decides to move or not depending on parameter $$\tau$$. $$\tau$$ represents the threshold for the migration and is defined as follows: $$\tau = 1 - |\eta|$$. As described above, the positive and negative $$\eta$$ means the different preference for the tag. Thus, there are two different criteria for the migration. Each agent moves to a randomly selected empty site if the following condition is satisfied.
$$\begin{array}{ll} \tau_i < n_d / (n_s + n_d) & \quad \mathrm{if}~(\eta>0)~\mathrm{and}\\ \tau_i < n_s / (n_s + n_d) & \quad (\eta < 0)\\ \end{array}$$ (2)
where $$n_s$$ ($$n_d$$) is the number of the neighbors that have the same (different) color as the agent $$i$$. If there are no other agents within the neighborhood, the agent stays at the same site.
2.7
We regard $$N$$ time steps as one generation, in which all agents are selected once, on average, for the above three phases. The parameters set used in the simulations is $$L = 50$$, $$\rho = 0.9$$, and $$\sigma_s = 0.0001$$, $$\sigma_m = 0.05$$ and $$b$$ is varied unless otherwise noted. The model code is available at https://www.openabm.org/model/4635/version/1/view.
### Results
#### Evolution of cooperation against the temptation to defect
3.1
We conducted computer simulations of the agent-based model described in the Model. Each simulation was run for 10,000 generations, and the results were collected from the last 1,000 generations unless otherwise noted. We conducted 100 independent simulation runs for each experimental condition, and used their averages as the final results. The segregation level was quantified by calculating
$$s = (1/N) \sum_{i\in N} (n_s/n_i)$$ (3)
where $$N$$ is the number of agents, $$n_i$$ the number of nearby agents around individual $$i$$, and $$n_s$$ the number of such neighbors whose tag is identical to $$i$$ 's tag. If $$n_i$$ is 0, then $$n_s / n_i$$ is defined as 0. As described in the Model, each agent has two real values as interaction strategies. $$p_{Cs}$$ is the cooperation probability to the opponent whose tag is the same as the focal agent while $$p_{Cd}$$ is the cooperation probability to an agent with the different tag. In the results, the two values for all agents are averaged. Since we use continuous strategy sets for $$p_{Cs}$$ and $$p_{Cd}$$ rather than discrete strategies, the agents cannot be described simply as "cooperator" or "defector" any more. Instead, they can be described to have a "high cooperation level" when their values of $$p_{Cs}$$ and $$p_{Cd}$$ are close to 1. In contrast, they are described to have a "low cooperation level" when these values are close to 0. Hereafter, we use these terms in the results.
3.2
The key finding we obtained from the simulations is that agents with high cooperation levels can self-organize in two different ways, depending on the parameter for temptation to defect, which we call $$b$$. Figure 1 shows the evolution of cooperation levels ($$p_{Cs}$$ and $$p_{Cd}$$) as a function of $$b$$ together with the screen shots of simulation runs obtained at their final generations. At the beginning of the simulations, the two game play strategies, $$(p_{Cs}, p_{Cd}) = (1, 0)$$ and $$(p_{Cs}, p_{Cd}) = (0, 1)$$, were equally present in the population. When $$b = 0$$, there is no incentive for agents to defect, while there is still an incentive for cooperation to avoid defection (low cooperation levels) because playing the game with agents with low cooperation levels would yield no payoff. Therefore, there is an evolutionary pressure for agents with high cooperation levels to form clusters. For such agents with high cooperation levels, it is structurally easy to form clusters with agents with the same tag agents rather than doing so with agents with the different tag at $$b = 0$$. This is because, as shown next, clustering with the different tag agents needs more complex spatial arrangements of agents. Therefore the strategy $$(p_{Cs}, p_{Cd}) = (1, 0)$$ simply wins when $$b = 0$$ (Fig. 2(A)). As $$b$$ is increased ($$0 < b \leq 0.26$$), agents with low cooperation levels increase their dominance by exploiting the same tag clusters that were seen at $$b = 0$$. As a response, agents with high cooperation levels try to avoid the exploitation by cooperating with agents with a different tag. When their evolutionary strategy reaches the point to fully cooperate with agents with a different tag ($$b=0.26$$), a unique, "maze-like" spatially well-mixed pattern arises (Fig. 2(B)) as seen in the earlier study (Traulsen & Claussen 2004). As b is further increased ($$0.26 < b \leq 0.42$$), a more complex evolutionary dynamics emerges. In this parameter regime, agents with high cooperation levels initially try to cooperate with agents with a different tag, but since this strategy cannot completely eliminate defective behaviors, the agents with high cooperation levels "switch" their strategies in the middle of evolutionary processes (see the evolutionary trajectory in Fig. 2(C)) to cooperate with agents with the same tag as theirs, resulting again in the formation of segregated patterns. Once such segregated patterns are established, agents with low cooperation levels can survive only at the edges of clusters of cooperators (see the behaviors in Fig. 2(C) ($$b = 0.34$$). When the temptation to defect is too large ($$b< 0.42$$), there is no merit for cooperation to cluster any more, resulting in random spatial patterns (Fig. 2(D)).
Figure 1. Evolution of cooperation levels ($$p_{Cs}$$ and $$p_{Cd}$$) plotted against $$b$$ together with the screen shots obtained at the 3,000th generation. See also Fig. 2 for visualization of cooperation levels and their evolutionary trajectories.
Figure 2. The screen shots at the final generation (3,000) and their evolutionary trajectories of each parameter for different $$b$$ values. In the screen shots, the left panel corresponds to the tags (green and red), the center panel corresponds to $$p_{Cs}$$, and the right panel corresponds to $$p_{Cd}$$. The blue corresponds to 1, the red corresponds to 0, and the intermediate values are represented by the mixed color.
3.3
In these results presented above, the temptation to defect b was the only parameter being varied. As seen in Fig. 1, cooperation survives even when the temptation to defect is quite high ($$p_{Cs} > 0.39$$, $$p_{Cd} > 0.37$$ at $$b = 1.0$$). This is due to the spatial effect that allows cooperative agents to cooperate with each other locally to sustain themselves (regardless of their tags). Figure 3 shows the segregation level $$s$$ and the tolerance to the tag composition of the neighborhood $$\eta$$ when varying $$b$$. The two outcomes, i.e., segregation and spatially well-mixed, can be clearly distinguished.
3.4
We also notice that these two outcomes can be obtained even for the same $$b$$ value, depending on the initial distributions of agents. For example, with $$b = 0.1$$, segregation was obtained 91 times out of 100 simulation runs.
3.5
The other property $$\eta$$ (the tolerance to the tag composition of the neighborhood) also evolves differently depending on $$b$$ values. In all segregated outcomes, $$\eta$$ was above 0.4 at least, meaning that each agent moves to another empty site when surrounded by more than 60% of agents with a different tag, on average. This result has an interesting similarity with the seminal work of Schelling (1971), in which he observed that even a small value of $$\eta$$ (around 0.33) could eventually lead to a segregated state (even though his original model was not an evolutionary one). Our results indicate that, when segregation occurs, $$\eta$$ tends to converge towards a similar value as the critical value obtained by the Schelling's segregation model.
3.6
When spatially well-mixed states arise, $$\eta$$ evolves around −0.15, meaning that each agent moves when surrounded by more than 85% of agents with the same tag as its own, on average. This indicates that, in such cases, agents have little incentive in moving and definitely prefer to stay in a very heterogeneous neighborhood.
Figure 3. Evolution of the segregation level $$s$$ and the tolerance to the tag composition of the neighborhood $$\eta$$ against $$b$$.
3.7
As in the Nowak's original spatial evolutionary game (Nowak & May 1992), the defection pressure parameter $$b$$ appears to be quite critical for clustering of cooperators. For small values, all agents cooperate, in contrast with the high values for which they all defect. The most interesting and unexpected phenomena such as clusters of cooperators can be observed for the intermediary values, as also shown in our model (in addition we also study the effect of tag and migration).
#### Comparison with unconditional migration
3.8
One important question to be asked is how effective the tag-based migration is in promoting the evolution of cooperation compared to an unconditional, or tag-blind, migration. To address this question, we conducted another set of simulations. Figure 4(A) shows the results obtained using a revised model with agents migrating in a random fashion (irrespective of the tags) named unconditional migration ("UM") and compares it with our previous simulation results (labeled "TM", for tag-based migration, shown as horizontal reference lines in the plots), with $$b$$ fixed to 0.1. For the unconditional migration cases, the probability of unconditional migration was varied from 0 to 1. To compare UM and TM fairly, the expected cooperation level, $$p_C = p_{Cs} \times s + p_{Cd} \times (1 - s)$$, is used since the encounter to other agents with same or different tags is different in each model (See Fig. 4(B)). Cooperation was slightly enhanced in the unconditional migration case for a small migration probability, but it is well below the result of the tag-based migration case. Overall, tag-based migration is shown to be much more effective than unconditional migration in promoting the evolution of cooperation.
3.9
Focusing on the small probability of unconditional migration (0.0 to 0.3) (Fig. 4(B)), $$p_{Cd}$$ is much higher than $$p_{Cs}$$. This indicates that the strategy $$(p_{Cs}, p_{Cd}) = (0, 1)$$ always wins in this range. This is because, with $$b = 0.1$$ in the unconditional migration, encountering an agent with a different tag occurs more frequently than encountering an agent with a same tag. Therefore, cooperation is enhanced by interacting with agents with a different tag.
Figure 4. (A) Comparison with unconditional migration (labeled "UM"). "TM" is tag-based migration. In all cases, $$b = 0.1$$. $$p_C$$ of "TM" is much higher than for the random migration. (B) $$p_{Cs}$$, $$p_{Cd}$$, and $$s$$ in UM and TM are shown, respectively. Tag-based migration can maintain high level of cooperation by cooperating with the identical tag agents more frequently. Compare the red line marked with crosses (TM) and the red line marked with circles (UM). The vertical bars of UM indicate standard deviations.
#### Robustness of cooperation and segregation
3.10
To study the effect of the other model parameters, we also varied the other three parameters one by one while keeping the rest constant. Figure 5(A) shows the dependence of cooperation and segregation levels on the population density $$\rho$$ ($$0.10 \leq \rho \leq 0.99$$). When the space is relatively sparse (e.g., $$0.3 < \rho < 0.7$$), it is difficult for agents to form clusters. Moreover, when $$\rho$$ is too low (e.g., $$\rho = 0.1$$), agents simply cannot find each other, and hence interaction among agents and migration hardly occur. Such sparseness prevents the appearance of identical tag groups. Therefore, the high population density (e.g., $$0.80 \leq \rho \leq 0.99$$) is necessary for high cooperation and high segregation to take place.
3.11
Figure 5(B) shows the dependence of cooperation and segregation levels on the mutation rate of strategies $$\sigma_s (10^{-5} \leq \sigma_s \leq 10^{-1})$$. As $$\sigma_s$$ increases, segregation levels slightly decrease whereas the cooperation with the different tag ($$p_{Cd}$$) increases. This is because each strategy needs to be stably linked to each tag in order to make the tag-mediated cooperation to work out effectively. Therefore, the strategies must be mutationally stable enough to maintain the homogeneous tag groups in which agents cooperate with each other.
3.12
Figure 5(C) shows the dependence of cooperation and segregation levels on the mutation for tag threshold $$\sigma_m (0.01 \leq \sigma_m \leq 0.20)$$. When $$\sigma_m$$ is large ($$0.09 \leq \sigma_m \leq 0.20$$), migration occurs too frequently, resulting in the instability and destruction of homogeneous groups. Therefore, the tag threshold must also be mutationally stable to maintain homogeneous tag groups.
Figure 5. Sensitivity analysis. Each of the three parameters ($$\rho$$, $$\sigma_s$$, and $$\sigma_m$$) is varied while keeping the other parameters constant. The vertical bars indicate standard deviations. $$b = 0.1$$.
3.13
Finally, in order to explore the robustness of segregation, we have extended the original model in two ways: increasing the number of tags and enlarging the neighborhood.
#### Increasing the number of tags
3.14
While the number of tags was 2 in our original model, this number was increased to 4, allowing more "tribes" to emerge in the model. Figure 6 shows the $$p_{Cs}$$, $$p_{Cd}$$, and $$s$$ against $$b$$, for three different numbers of tags (2, 3 and 4). A final screen shot of a simulation with $$b = 0.3$$ and 4 tags is also shown. Interestingly, the intermediate $$b$$ values ($$0.3 \leq b \leq 0.5$$) yield higher segregation $$s$$ for three- and four-tag cases than the original two-tag case. This is because the possibility that agents meet the different tag agents increases as the number of tags increases. When cooperation is dominant ($$0.0 \leq b \leq 0.2$$) and in the presence of more tags, cooperating with the different tags is more beneficial to the agents. However, once defection becomes more attractive ($$0.3 \leq b$$), the only way for cooperation to survive is by favoring clusters composed of identical agents. Thus, communitarian cooperation and segregation co-evolve hand in hand in the range $$0.3 \leq b \leq 0.5$$. For further larger values of $$b$$ ($$0.6 \leq b$$), defection becomes extremely attractive destroying easily any attempt of cooperative cluster. In the two-tag cases, clustering with the same tag easily takes place compared to the different tag when $$b$$ is small. In contrast, as the number of tags is increased, since the possibility to meet the different tag is also increased, $$(p_{Cs}, p_{Cd}) = (0, 1)$$ simply wins in the three-tag and four-tag cases. Therefore, segregation for the small $$b$$ is not observed and is only observed for the intermediate value of $$b$$ ($$0.3 \leq b \leq 0.5$$) to prevent clusters from defection invasion in those cases.
Figure 6. Increasing the number of tags. $$p_{Cs}$$, $$p_{Cd}$$, and $$s$$ for the different tag number are shown. The screen shot shows the situation at the final generations (10,000) where $$b$$ is 0.3 and the tag number is 4. Interestingly, when the number of tags is increased, the intermediate $$b$$ values ($$0.3 \leq b \leq 0.5$$) yield more segregation.
#### Increasing neighborhood size
3.15
The second extension was to increase the neighborhood size. The original model used a Moore neighborhood (i.e., $$3 \times 3$$ sites around a focal agent), which is represented as ns-1 in Fig. 7. We increased the neighborhood size to $$5 \times 5$$ sites, which is represented as ns-2 in the figure. By increasing the size and for lower values of $$b$$, $$p_{Cs}$$ decreases while $$p_{Cd}$$ increases, showing that it is harder for segregation to occur. If the neighborhood size is increased, agents can sense more further agents' tags. In this case, the sensitivity to the identical (or different) tags is decreased giving less relevance to the "meaning" of these tags. We thus found that small neighborhood size is better to maintain high level of communitarian cooperation and segregation.
Figure 7. Increasing neighborhood size. ns-1 means the original $$3 \times 3$$ sites centered on a focal agent (Moore neighborhood). ns-2 means the $$5 \times 5$$ sites as the extension. More spatially well-mixed configurations take place for any $$b$$ value.
### Discussion
4.1
In this paper we have shown the necessary conditions for the emergence of high level of tag-based cooperation and segregation in an evolutionary spatial social game.
4.2
It is a first study allowing to assess the effect of all parameters needed for the emergence of cooperation and segregation, although some of the conditions have been previously covered. For example, Spector and Klein (2006) have shown, as we have, that lower mutation rates for the strategy promote tag mediated cooperation. For the coevolution of segregation and cooperation, such genetic stability is critical since tag-mediated cooperation frequently collapses in the case that a tag is not linked to a strategy.
4.3
In other studies, although it is already known that the combination of tag and spatial structure promotes cooperation, we showed that this effect is greatly enhanced when migration is also incorporated. This is because the identical tag groups are easily created by the movement of agents. Hammond et al. (2006) incorporated "immigrants", new agents showing up over time, but such immigrants could not move through the space. Since migration is a fundamental trait of animals and humans, we believe the situation described in this paper to be much more reminiscent of what happens in the real world.
4.4
The main original result of merging the migration and grouping strategy of Schelling together with the cooperative/defective evolution of May/Nowak is to facilitate a new route for cooperation favorable to agents which choose to move and to restrict their cooperative attitude to others sharing their same identity.
4.5
This communitary-restricted regrouping and cooperation is obviously well known to be a definitive salient trait of human nature. While the Schelling's model never really justified why even very tolerant agents could move to other random places, the simulation discussed in this paper shows that the cooperative gain and the increase in cooperative opportunities (against the prisoner dilemma defective trap) might be the real pressure force that encourages people to assemble according to some distinctive traits (color, religion, social classes, etc.).
4.6
One possible application of our model is to be used for collective actions of small autonomous robots. Think about land developments of remote areas by such robots. The robots have two modes: one is random movements and the other is cluster actions. Under normal conditions, each robot develops the land alone. However, the developments may sometimes need cooperative works of the robots. In that case, if the robots can use tag-like information, then they can gather efficiently to do the works.
4.7
In the present study, all traits of agents are free to evolve. Nevertheless, the co-evolution of communitarian segregation and cooperation was observed even for a wide range of parameters.
### References
AKTIPIS, C.A. (2004). Know when to walk away: Contingent movement and the evolution of cooperation. Journal of Theoretical Biology, 231, 249–260. [doi:10.1016/j.jtbi.2004.06.020]
ANTAL, T., Ohtsuki, H., Wakeley, J., Taylor, P. D. & Nowak M. A. (2009). Evolution of cooperation by phenotypic similarity. Proceedings of the National Academy of Sciences U.S.A., 106, 8597–8600. [doi:10.1073/pnas.0902528106]
AXELROD, R., Hammond, R. A., & Grafen, A. (2004). Altruism via kin-selection strategies that rely on arbitrary tags with which they coevolve. Evolution, 58, 1833–1838. [doi:10.1111/j.0014-3820.2004.tb00465.x]
BERSINI, H. (2014). In between Schelling and Maynard-Smith. In: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems (ALIFE 14), pp 360–364.
CHEN, X., Szolnoki, A. & Perc, M. (2012). Risk-driven migration and the collective-risk social dilemma. Physical Review E, 86, 036101. [doi:10.1103/physreve.86.036101]
HAMILTON, W. D. (1964). The genetical evolution of social behavior. Journal of Theoretical Biology, 7, 1–52. [doi:10.1016/0022-5193(64)90038-4]
HAMMOND, R. A. & Axelrod, R. (2006). Evolution of cooperation when cooperation is expensive. Theoretical Population Biology, 69, 333–338. [doi:10.1016/j.tpb.2005.12.002]
HARTSHORN, M., Kaznatcheev, A., & Shultz, T. R. (2013). The evolutionary dominance of ethnocentric cooperation. Journal of Artificial Societies and Social Simulation, 16 (3), 7 http://jasss.soc.surrey.ac.uk/16/3/7.html.
HELBING, D. & Yu, W. (2009). The outbreak of cooperation among success-driven individuals under noisy conditions. Proceedings of the National Academy of Sciences U.S.A., 106, 3680–3685. [doi:10.1073/pnas.0811503106]
ICHINOSE, G., Saito, M., Sayama, H. & Wilson, D. S. (2013). Adaptive long-range migration promotes cooperation under tempting conditions. Scientific Report, 3, 2509. [doi:10.7551/978-0-262-31709-2-ch187]
JANSEN, V. A. & van Baalen, M. (2006). Altruism through beard chromodynamics. Nature, 440, 663–666. [doi:10.1038/nature04387]
JIANG, L. L., Zhao, M., Yang, H. X., Wakeling, J., Wang, B. H. & Zhou, T. (2010). Role of adaptive migration in promoting cooperation in spatial games. Physical Review E, 80, 036108. [doi:10.1103/physreve.81.036108]
NOWAK, M. A. (2006). Five rules for the evolution of cooperation. Science, 314, 1560–1563. [doi:10.1126/science.1133755]
NOWAK, M. A. & May, R. M. (1992). Evolutionary games and spatial chaos. Nature, 359, 826–829. [doi:10.1038/359826a0]
RIOLO, R. L., Cohen, M. D. & Axelrod, R. (2001). Evolution of cooperation without reciprocity. Nature, 414, 441–443. [doi:10.1038/35106555]
ROBERTS, G. & Sherratt, T.N. (2002). Does similarity breed cooperation? Nature, 418, 499–500. [doi:10.1038/418499b]
ROCA, C. P. & Helbing, D. (2011). Emergence of social cohesion in a model society of greedy, mobile individuals, Proceedings of the National Academy of Sciences U.S.A., 108, 11370–11374. [doi:10.1073/pnas.1101044108]
SCHELLING, T. C. (1971). Dynamic Models of Segregation. Journal of Mathematical Sociology, 1, 143–186. [doi:10.1080/0022250X.1971.9989794]
SHUTTERS, S. & Hales, D. (2013). Tag-mediated altruism is contingent on how cheaters are defined. Journal of Artificial Societies and Social Simulation, 16 (1), 4 http://jasss.soc.surrey.ac.uk/16/1/4.html.
SPECTOR, L. & Klein, J. (2006). Genetic stability and territorial structure facilitate the evolution of tag-mediated altruism, Artificial Life, 12, 553–560. [doi:10.1162/artl.2006.12.4.553]
TRAULSEN, A. & Claussen, J. C. (2004). Similarity based cooperation and spatial segregation. Physical Review E, 70, 046128. [doi:10.1103/PhysRevE.70.046128]
TRAULSEN, A. & Nowak M. A. (2007). Chromodynamics of cooperation in finite populations. PLoS ONE, 2, e270. [doi:10.1371/journal.pone.0000270]
TRAULSEN, A. & Schuster, H. G. (2003). Minimal model for tag-based cooperation. Physical Review E, 68, 046129. | 2018-05-26 04:29:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7302941679954529, "perplexity": 1207.5510092060858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867309.73/warc/CC-MAIN-20180526033945-20180526053945-00590.warc.gz"} |
http://clay6.com/qa/51619/which-of-the-halogens-is-found-in-ddt- | # Which of the halogens is found in DDT?
$\begin{array}{1 1}\text{Fluorine}\\\text{Chlorine}\\\text{Bromine}\\\text{Iodine}\end{array}$
Chlorine is found in DDT
answered Jul 29, 2014 | 2017-12-13 20:36:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4066380262374878, "perplexity": 4369.285497571668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948530841.24/warc/CC-MAIN-20171213201654-20171213221654-00499.warc.gz"} |
http://mathoverflow.net/questions/100265/not-especially-famous-long-open-problems-which-anyone-can-understand/100781 | # Not especially famous, long-open problems which anyone can understand
Question: I'm asking for a big list of not especially famous, long open problems that anyone can understand. Community wiki, so one problem per answer, please.
Motivation: I plan to use this list in my teaching, to motivate general education undergraduates, and early year majors, suggesting to them an idea of what research mathematicians do.
Meaning of "not too famous" Examples of problems that are too famous might be the Goldbach conjecture, the $3x+1$-problem, the twin-prime conjecture, or the chromatic number of the unit-distance graph on ${\Bbb R}^2$. Roughly, if there exists a whole monograph already dedicated to the problem (or narrow circle of problems), no need to mention it again here. I'm looking for problems that, with high probability, a mathematician working outside the particular area has never encountered.
Meaning of: anyone can understand The statement (in some appropriate, but reasonably terse formulation) shouldn't involve concepts beyond (American) K-12 mathematics. For example, if it weren't already too famous, I would say that the conjecture that "finite projective planes have prime power order" does have barely acceptable articulations.
Meaning of: long open The problem should occur in the literature or have a solid history as folklore. So I do not mean to call here for the invention of new problems or to collect everybody's laundry list of private-research-impeding unproved elementary technical lemmas. There should already exist at least of small community of mathematicians who will care if one of these problems gets solved.
I hope I have reduced subjectivity to a minimum, but I can't eliminate all fuzziness -- so if in doubt please don't hesitate to post!
To get started, here's a problem that I only learned of recently and that I've actually enjoyed describing to general education students.
http://en.wikipedia.org/wiki/Union-closed_sets_conjecture
Edit: I'm primarily interested in conjectures - yes-no questions, rather than classification problems, quests for algorithms, etc.
-
You might get more success if you sampled certain open problem lists and indicated which ones fit your list and which ones did not. I could mention various combinatorial problems such as integer complexity, determinant spectrum, covering design optimization, but I can't tell from your description if they would be suitable for you. Gerhard "They Are Suitable For Me" Paseman, 2012.06.21 – Gerhard Paseman Jun 21 '12 at 19:11
Here is some collection of some other "collect open problems" quests. on MO: mathoverflow.net/questions/96202/… PS Nice question ! PSPS may be add tag "open-problems" – Alexander Chervov Jun 21 '12 at 20:53
Nice question!! – Suvrit Jun 22 '12 at 3:25
To save the search for explanation of cryptic acronyms for those of us outside US, K-12 means high school. @Mahmud: You are using a wrong meaning of the word “problem”. The TSP is not an unproved mathematical statement, it is a computational task. – Emil Jeřábek Jun 22 '12 at 12:05
More precisely, K-12 means anything up to high school (K = Kindergarten, 12 = 12th grade, and K-12 covers this range). – Henry Cohn Jun 22 '12 at 13:05
Is there a triangle that can be cut into $7$ congruent triangles? (no)
-
Nice problems but what are the sources? – Alexander Chervov Jul 24 '12 at 20:23
I heard this in a personal communication. But it turns out this is already settled negatively in 2008: michaelbeeson.com/research/papers/SevenTriangles.pdf – Vladimir Reshetnikov Jul 28 '12 at 20:34
Ore's odd Harmonic number conjecture
-
Well, now it's so obscure that it requires a context/explanation. – Victor Protsak Jan 6 '14 at 20:45
• Is the Ring of Periods actually a field? (most likely, no)
• Is the equality of periods decidable? (hopefully, yes)
-
problems which anyone can understand ? Uhhh – Denis Serre Sep 25 '12 at 7:50 | 2015-05-25 12:15:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5172488689422607, "perplexity": 740.695501653639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928486.86/warc/CC-MAIN-20150521113208-00249-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/254297/double-page-headings-with-fancyhdr-and-fancyplain-style-in-toc-bibliography | # Double page headings with fancyhdr and fancyplain style in TOC/bibliography
I noticed the bibliography and TOC, list of figures etc have the "section title" twice on each header, on the left and the right -- I have defined in the main body just the right part of the header containing the current section name (or chapter name if there is not yet a section name). How can I make this consistent among the remainder of the document, i.e. rather than bibliography at the left and right, just at the right?
MWE (shows the problem with bibliography, but it also exists with the toc and list of figures/tables parts)
\begin{filecontents*}{test.bib}
@misc{test,
title= "The title",
howpublished= "Publisher"
}
@misc{test1,
title= "The title",
howpublished= "Publisher"
}
@misc{test2,
title= "The title",
howpublished= "Publisher"
}
@misc{test3,
title= "The title",
howpublished= "Publisher"
}
@misc{test4,
title= "The title",
howpublished= "Publisher"
}
@misc{test5,
title= "The title",
howpublished= "Publisher"
}
@misc{test6,
title= "The title",
howpublished= "Publisher"
}
@misc{test7,
title= "The title",
howpublished= "Publisher"
}
@misc{test8,
title= "The title",
howpublished= "Publisher"
}
@misc{test9,
title= "The title",
howpublished= "Publisher"
}
@misc{test10,
title= "The title",
howpublished= "Publisher"
}
@misc{test11,
title= "The title",
howpublished= "Publisher"
}
@misc{test12,
title= "The title",
howpublished= "Publisher"
}
@misc{test13,
title= "The title",
howpublished= "Publisher"
}
@misc{test14,
title= "The title",
howpublished= "Publisher"
}
@misc{test15,
title= "The title",
howpublished= "Publisher"
}
@misc{test16,
title= "The title",
howpublished= "Publisher"
}
@misc{test17,
title= "The title",
howpublished= "Publisher"
}
@misc{test18,
title= "The title",
howpublished= "Publisher"
}
@misc{test19,
title= "The title",
howpublished= "Publisher"
}
@misc{test20,
title= "The title",
howpublished= "Publisher"
}
@misc{test21,
title= "The title",
howpublished= "Publisher"
}
@misc{test22,
title= "The title",
howpublished= "Publisher"
}
@misc{test23,
title= "The title",
howpublished= "Publisher"
}
@misc{test24,
title= "The title",
howpublished= "Publisher"
}
@misc{test25,
title= "The title",
howpublished= "Publisher"
}
\end{filecontents*}
\documentclass{report}
\usepackage{setspace}
\usepackage{fancyhdr}
\begin{document}
\pagestyle{fancyplain}
\nocite{*}
\doublespacing
\bibliographystyle{IEEEtran}
\bibliography{test}
\end{document}
Here is a suggestion but I am not sure if I understand what the desired result is
\documentclass{report}
\usepackage{setspace}
\usepackage{fancyhdr}
\pagestyle{fancy}% before redefining \chaptermark
\renewcommand{\chaptermark}[1]{%
\markboth{\MakeUppercase{\chaptername\ \thechapter.\ #1}}{\MakeUppercase{\chaptername\ \thechapter.\ #1}}%
}
\usepackage{blindtext}% for dummy text
\begin{document}
\doublespacing
%\pagestyle{fancyplain}% <- removed
\tableofcontents
\chapter{Chapter One}
\Blindtext
\chapter{Chapter Two}
\section{Section One}
\Blindtext
\Blinddocument
\Blinddocument
\Blinddocument
\end{document}
But start of contents is still without header
Or with package scrlayer-scrpage
\documentclass{report}
\usepackage{setspace}
\clearpairofpagestyles
\cfoot*{\pagemark}
\usepackage{blindtext}% for dummy text
\begin{document}
\doublespacing
\tableofcontents
\chapter{Chapter One}
\Blindtext
\chapter{Chapter Two}
\section{Section One}
\Blindtext
\Blinddocument
\Blinddocument
\Blinddocument
\end{document}
• Thanks for your answer. The first one was on the right track. \rhead{\rightmark} makes the heading mark appear to the right even when its a page with no headers (e.g. at the start of contents), however \lhead{} alone did what I needed. – user1207217 Jul 8 '15 at 14:07
• In my example there is no header at the start of contents. Note that I have removed \pagestyle{fancyplain}. – esdd Jul 8 '15 at 14:34 | 2021-05-17 04:37:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9477624297142029, "perplexity": 14130.257450434228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00042.warc.gz"} |
http://math.stackexchange.com/questions/55686/a-compactness-problem-for-model-theory | # A compactness problem for model theory
I'm working on the following problem:
Assume that every model of a sentence $\varphi$ satisfies a sentence from $\Sigma$. Show that there is a finite $\Delta \subseteq \Sigma$ such that every model of $\varphi$ satisfies a sentence in $\Delta$.
The quantifiers in this problem are throwing me off; besides some kind of compactness application I'm not sure where to go with it (hence the very poor title). Any hint?
-
You can produce from that a very neat proof of why $ZFC$ is not finitely axiomatizable by setting $\Sigma$ to be: $\varphi_0=2^{\aleph_0}>\aleph_\omega; \varphi_n=2^{\aleph_0}=\aleph_n$ for $n>0$. Since we can produce a model in which each of these happen, there is no single sentence in which we can write $ZFC$. – Asaf Karagila Aug 4 '11 at 23:35
Cute, in a twisted sort of way. You are right, the quantifier structure is the main hurdle to solving the problem.
We can assume that $\varphi$ has a model, else the result is trivially true.
Suppose that there is no finite $\Delta\subseteq \Sigma$ with the desired property.
Then for every finite $\Delta \subseteq \Sigma$, the set $\{\varphi, \Delta'\}$ has a model. (For any set $\Gamma$ of sentences, $\Gamma'$ will denote the set of negations of sentences in $\Gamma$.)
By the Compactness Theorem, we conclude that $\{\varphi, \Sigma'\}$ has a model $M$.
This model $M$ is a model of $\varphi$ in which no sentence in $\Sigma$ is true, contradicting the fact that every model of $\varphi$ satisfies a sentence from $\Sigma$.
- | 2016-07-26 08:53:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9581506252288818, "perplexity": 108.1278339432876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824757.62/warc/CC-MAIN-20160723071024-00071-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://proxies-free.com/tag/software/ | software – Managing kanban workflow with Gantt charts and tasks
So I’m working on an undergraduate project and I decided to choose kanban methodology. I am arranging the tasks in Gantt chart according to SDLC phases like plan, design, development, testing. How can I display in such a way that it can reflect my Kanban board?
Best sales Management software of 2020
well guy’s we are living in 21st century a world where technology is evolving every second and is at it’s best every day. There are lots of … | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1829586&goto=newpost
package management – How do i install focal fossa software on groovy gorilla from ppa
I have Ubuntu 20.10. I am trying to install Handbrake from its ppa. when i added handbrake repositories and ran `sudo apt update` i got the following message
E: The repository ‘http://ppa.launchpad.net/stebbins/handbrake-releases/ubuntu groovy Release’ does not have a Release file.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
I went to the repository and tried to manually search for groovy packages and found none but there was a focal fossa package.What should i do?
BTW
• snap packages are outdated
• I am facing the same issue for other software also.(eg – f.lux…many more)
How can product managers ensure software quality?
I am a software engineer and I care deeply about my software quality, while I assume my product manager only cares about the final product. I can’t imagine my product manager would care how I get my work done, only that it is done. How can a product manager ensure the work I do as an engineer is quality work?
A few thoughts:
Schedule Testing
For this it seems like something obvious to just ask that all my code is tested. But while I may test my code it doesn’t account for the quality of the code. I may write terrible code and test it, and we’d only pay the penalty in the future when we add new features.
Small Scope – Catch Time Delays Early
One other idea is to have small chunks of work with estimated times, and if the estimate is too much then it may be a sign of code quality and we can plan to address that early. However, as an engineer I usually already know if my code is good or I’m rushing through it. While this seems to make sense it’s also seems like a difficult idea to sell to the business team.
Hire Better Engineers
This is the only idea I know of. Start with good engineers and make them leads. I’m putting this idea here as a baseline. But not everyone is the best. In my experience any requests from a lead seems like time taken out of the final product, then it’s convincing the product that we need more time to have the same the exact same output except with more reliable code base.
Unless the business team fully understands and budgets the need for quality code, it seems like every project will bloat until it’s too late. What else can be done to have code quality be part of the business?
computational number theory – Software for \$S\$-unit equation
Is there any implementation available of an algorithm which solves in full generality the $$S$$-unit equation $$x+y=1$$ in a number field? It seems that Magma solves $$ax+by=c$$ but only in the algebraic integers, while Sage solves $$x+y=1$$ with $$x,y$$ $$S$$-integers, but only for $$S$$ the set of primes over a fixed rational primes. Is that true? Is there any other implementation available?
How to mount on Mac by SOFTWARE a USB SSD that shows not mounted?
How to mount on Mac by SOFTWARE a USB SSD that shows not mounted?
System: macOS 10.12.6 (16G2136) Sierra. If the USB SSD is connected before rebooting or cold booting the Mac, then it shows. If the Mac is booted from an external USB SSD, no problem (after booting or rebooting, the afterwards plugged USB SSD show). But if the Mac is booted from the internal SSD, then plugging afterwards an external USB SSD does not mount it. Unplugging and plugging it again (or plugging other USB SSD in other USB port; sometimes requiring more than one trial) fixes it for such SSD and any other until the Mac is rebooted or booted, in which case the same issue arises.
The USB disk not mounted does not show in Disk Utility or Mountain. Using Terminal “diskutil list” does not show it. It does not show in “Apple – About This Mac – System Report – Hardware – Storage”, but shows on the latter selecting “…–Hardware – USB”. Other formatting, maintenance, repair or recovery utilities do not show such unmounted USB SSD until it shows mounted on the Desktop.
software – First time having users – how to deal with backwards compatability?
Well first, this is a good use for regression tests. Whenever you make a change to the file format, have saved and documented examples of files in the old format. Then make unit tests that try and load in the old format and assert that they load in correctly. This can be as simple as
``````def assert_can_read_v1():
with open("legacy/v1/example_1.wtvr") as f:
assert data_structure.title == "abc"
assert data_structure.stuff == "..."
``````
But that’s from a process perspective, how to make sure you don’t mess up. How do you actually write the code?
Well, its gonna depend on a lot of factors including on how different the files are and what kind of format things are stored in.
If things are stored in a text based, or otherwise generic, format like JSON or XML or in a data format that is built to allow extra fields in data like ProtocolBuffers or CapnProto then adding fields should be fairly simple. You just take all the places where you read the field and add some default value if the field is not there.
If you are using a format that doesn’t allow for that kind of extension or where removing fields is backwards incompatible or where you are making larger incompatible changes, you can write some predicate that tells you what “version” of file you have and dispatch to the right functionality. You can make this easier on yourself by adding explicit “version” fields to things, but that is on you.
``````def read_file(file):
contents = parse_format(file)
if contents.version == 1:
return parse_contents_v1(contents)
else:
return parse_contents_v2(contents)
``````
As a corollary, this is easier to do if you have some model in your code that is “separate” from what your config file reads into. That way you have some place to put this massaging logic.
``````import dataclasses
import json
@dataclasses.dataclass(frozen=True)
class Project:
name: str
@staticmethod
with open(file_path, "r") as f:
return Project(name=contents("name"))
``````
And do some minor upgrades to get this
``````import dataclasses
import json
from typing import Optional
@dataclasses.dataclass(frozen=True)
class Project:
name: str
description: Optional(str)
@staticmethod
with open(file_path, "r") as f:
return Project(
name=contents("name"),
description=contents.get("description", None)
)
``````
And then maybe need to do a major overhaul and end up with
``````import dataclasses
import json
from typing import Optional
@dataclasses.dataclass(frozen=True)
class Project:
name: str
description: Optional(str)
@staticmethod
with open(file_path, "r") as f:
version = contents.get("version", None)
if version is None:
return Project(
name=contents("name"),
description=contents.get("description", None)
)
else:
name, description = contents("stuff").split(":::", 1)
return Project(name=name, description=description)
``````
Does that sorta make sense? There are no hard set rules, but having the very first thing – regression tests – can help a lot.
c# – Spoofing a UDP stream from a piece of software, to a third party app, in order to integrate with that app
I am working on a project where i am attempting to spoof a gps command from X-Plane 11 to allow a flight simulator that i am working on to communicate with an electroic flight bag app (Oz Runways)
The X-Plane software outputs a GPS stream that is sent to the app, which the app then interprets.
it is sent via UDP as an ascii Stream.
When the stream is sent, it shows the gps position of the plane in Xplane on a moving mapp in the App, on an Ipad.
An Example Stream: “XGPS1,-122.578640,47.264560,87.9936,8.2152,68.5358
At this stage i have tried taking the exact udp message (from Xplane, intercepted by Wireshark) and sending it from another program called Packet Sender. Sending the stream does not work, but sending the entire packet (which seems wrong) seems to give me the desired result.
This makes the entire packet of this UDP message: “>ûÿÛEe®
À¨À¨ò¿i¿jQ«>ûÿÛEP{ÍÀ¨À¨ò¿i¿j<XGPS1,-122.307752,47.463464,124.7732,180.0498,0.0000
it seems like it doubles up on the leading data, like ip and port adresses etc.
So while this seems to work, when using my piece of software, which will change the numbers to send through the correct position and speed, but right now is just a simple c# udp sender it refuses to work.
`````` void openUDP()
{
udpClient = new UdpClient(49001);
}
void DoUDPThings()
{
Byte() sendBytes = Encoding.UTF8.GetBytes(textBoxmsg.Text);
try
{
udpClient.Send(sendBytes, sendBytes.Length);
}
catch (Exception e)
{
Console.WriteLine(e.ToString());
}
}
``````
This is the code for my Sender, in order to mimic the Xplane software as much as possible i have set it to use the exact output port, and the text boxes in my win form feed in the message, ip address, and remote port number.
The actual question i suppose i am asking here, is why would copying the start of packet data work in Packet sender in the first place, and not in my c# program?
Below are two packets from wireshark one of which is from Xplane, the other is from my piece of software.
``````My software:
183eef87a307001cbffbffdb08004500004ea95d000080110000c0a80006c0a800f2bf69bf6a003a829458475053312c2d3132322e3537383634302c34372e3236343536302c38372e393933362c382e323135322c36382e35333538
``````
``````From Xplane:
183eef87a307001cbffbffdb080045000050a94d000080110000c0a80006c0a800f2bf69bf6a003c829658475053312c2d3132322e3330323539322c34372e3436303236382c3133362e363139332c3236382e353936362c302e30373137 | 2020-11-25 08:48:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24538634717464447, "perplexity": 2536.938398739639}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141181482.18/warc/CC-MAIN-20201125071137-20201125101137-00484.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/125674-matrix-exponential-print.html | # matrix exponential
I'd start by noting that $e^A = I + A + \frac{A^2}{2!} + ....$ and so $e^Ax = Ix + Ax + \frac{A^2}{2!}x + .... = ....$ | 2015-04-18 08:11:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150978326797485, "perplexity": 448.95926781967876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246633972.52/warc/CC-MAIN-20150417045713-00034-ip-10-235-10-82.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/109410-finding-equation.html | 1. ## Finding the equation
How do I find the equation of the tangent to the graph of f(x)=Sqrt x at the point x=9.
I know the gradient of the tangent which is 1/6.
But I have no idea how to do this.
Any help will be appreciated.
THANKS
2. Originally Posted by Awsom Guy
How do I find the equation of the tangent to the graph of f(x)=Sqrt x at the point x=9.
I know the gradient of the tangent which is 1/6.
But I have no idea how to do this.
Any help will be appreciated.
THANKS
$f(x)=\sqrt{x}$ .. this is the equation of the curve .
$f(9)=3$
so y=3
3. Hello Awsom Guy
Originally Posted by Awsom Guy
How do I find the equation of the tangent to the graph of f(x)=Sqrt x at the point x=9.
I know the gradient of the tangent which is 1/6.
But I have no idea how to do this.
Any help will be appreciated.
THANKS
As you say, $f'(9) = \frac{1}{2\sqrt9}=\frac16$.
You now need $f(9) = \sqrt9 = 3$
So the tangent is at the point $(9,3)$.
Now use $y - y_1 = m(x-x_1)$, the equation of the line with gradient $m$ through the point $(x_1,y_1)$.
When simplified, this comes to $6y = x+9$.
Can you fill in the details?
4. i dont understand how you got 1/6 for m
5. Hello sjara
Originally Posted by sjara
i dont understand how you got 1/6 for m
The value of $m$ is the gradient of the curve at the point where $x = 9$. We get this by differentiating $f(x)$, and then plugging in that value, $x = 6$.
$f(x) = \sqrt{x}=x^{\frac12}$
$\Rightarrow f'(x)=\tfrac12x^{-\frac12}=\frac{1}{2\sqrt{x}}$
So when $x = 9, f'(9)= \frac{1}{2\sqrt{9}}=\frac16$
And this is the value of $m$, the gradient of the tangent when $x = 9$. | 2017-08-16 20:19:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.874875545501709, "perplexity": 224.64253854415443}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102393.60/warc/CC-MAIN-20170816191044-20170816211044-00501.warc.gz"} |
http://planetmath.org/closedsetinacompactspaceiscompact | # closed set in a compact space is compact
## Primary tabs
Major Section:
Reference
Type of Math Object:
Proof
## Mathematics Subject Classification
### mention article 4691
article 4691 called "closed subsets of a compact set are compact" deals with almost the same subject matter an should be mentioned. | 2014-03-09 03:57:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 22, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25953686237335205, "perplexity": 3543.214325386763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999671521/warc/CC-MAIN-20140305060751-00084-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://engineering.virginia.edu/events/phd-proposal-presentation-beilun-wang | Computer Science Location: Rice 504
#### Fast and Scalable Joint Estimators for Learning Sparse Gaussian Graphical Models from Heterogeneous Data
Abstract: Estimating multiple sparse Gaussian Graphical Models (multi-sGGMs) jointly from heterogeneous data is an important problem, often arising in bioinformatics and neuroimaging. The resulting tools can effectively help researchers translate aggregated data from multiple contexts into knowledge of multiple connectivity graphs. Most current studies of multi-sGGMs, however, involve expensive and difficult non-smooth optimizations, making them difficulty to scale up to many dimensions (large $p$) and/or with many contexts (large $K$).
The proposed research aims to design a novel category of estimators that can achieve fast and scalable joint structure estimation of multiple sGGMs in large-scale settings. There exist three possible formulations of multi-sGGMs. Targeting each, this research work introduces methods that are both computationally efficient and theoretical guaranteed. In details, (1) To estimate one sGGM per context and push all learned graphs towards a common pattern. For this formulation, we propose the estimator FASJEM and solve it in an entry-wise manner that is parallelizable. The entry-wise solution improves the computational efficiency and reduces the memory requirement from $O(Kp^2)$ to $O(K)$. (2) To only estimate the changes in the dependency graphs when estimating two sGGMs. We propose the estimator DIFFEE and obtain a closed-form solution. DIFFEE reduces its entire computational cost to $O(p^3)$, enabling the estimator to a much larger $p$ compared to the state-of-the-art estimators. (3) To learn both the shared and the context-specific sub-graphs explicitly. We propose a novel weighted-$\ell_1$ formulation WSIMULE and its faster variant that elegantly incorporate a flexible prior, along with a parallelizable formulation. Lastly, we propose to conduct rigorous statistical analysis to verify that the proposed estimators can achieve the same statistical convergence rates as the state-of-art methods that are much more difficult to compute. | 2019-10-20 06:57:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43710243701934814, "perplexity": 1449.0572570308275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986703625.46/warc/CC-MAIN-20191020053545-20191020081045-00297.warc.gz"} |
https://tex.stackexchange.com/tags/formatting/hot | Tag Info
5
\documentclass{article} \usepackage{graphicx} \usepackage{lmodern} \newdimen\zz \zz=25pt \begin{document} \makeatletter \def\usezz{\ifcase\numexpr15-\@multicnt\relax Something About This\or Text about that\or MY NAME\or some Other text\or Random Words HERE\or Some Name\or This\or THAT\or ZZZZZZZ\or \LaTeX\ text\or \textit{Italic text}\or Mathematics \$x^2+...
4
The "AHA!" moment for this was realizing that the indentation for each line is equal to a multiple of the prevailing \baselineskip -- the optional argument to \cdcov allows you to alter this at will. The entry of the information is relatively straightforward (semicolon-separated entries), though the final assembly must be undertaken by the user. I ...
2
I adjusted the markup in a few places to arrive at Basically you should not worry about the float placement until the text is done and then at the end you could add \clearpage if needed before a section heading to flush any floats that are floating too far. % no 12 point option \documentclass[12 point, titlepage]{article} \documentclass[12pt, titlepage]{...
1
I suspect this is a tikzpicture inside another tikzpicture problem. \documentclass[dvipsnames,11pt,a4paper]{book} \usepackage[margin=3.5cm,marginparwidth=3cm, marginparsep=4mm]{geometry} \usepackage{lipsum} \usepackage{marginnote} % Margin notes \usepackage[many]{tcolorbox} \newsavebox{\mybox} % Highlighted notes \newcounter{mynote} \newtcolorbox[use ...
1
You want three columns with S[table-format=1.5(5)], but your input actually specifies SSS[table-format=1.5(5)] because you have a misplaced }. You want *{3}{S[table-format=1.5(5)]} Full code: \documentclass{article} \usepackage{booktabs} \usepackage[separate-uncertainty=true, tight-spacing=true]{siunitx} \begin{document} \begin{table} \centering \...
1
You can use the following template: \documentclass{article} \usepackage[margin=1in,left=2in]{geometry} \newcommand{\newheading}[1]{% Insert a heading \par% End off the previous paragraph, if there was one (and enter vertical mode) \addvspace{.5\baselineskip}% Insert a gap \leavevmode% Leave vertical mode (to start setting the paragraph \llap{\...
1
\documentclass[12pt, a4paper]{article} \begin{document} I want them on the same row, side by side. \vspace{0.5cm} \begin{minipage}[t]{0.05\textwidth} \line(1,0){50}\\ \end{minipage} \hfill \begin{minipage}[t]{0.55\textwidth} \line(1,0){250}\\ \end{minipage} \end{document} EDIT --for bold lines \documentclass[12pt, a4paper]{article} \begin{document} I want ...
1
Only top voted, non community-wiki answers of a minimum length are eligible | 2021-03-08 05:48:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715771675109863, "perplexity": 13218.01036114099}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178381989.92/warc/CC-MAIN-20210308052217-20210308082217-00049.warc.gz"} |
https://www.ademcetinkaya.com/2023/03/imu-imugene-limited.html | Outlook: IMUGENE LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating.
Time series to forecast n: 10 Mar 2023 for (n+4 weeks)
Methodology : Modular Neural Network (Market Volatility Analysis)
## Abstract
IMUGENE LIMITED prediction model is evaluated with Modular Neural Network (Market Volatility Analysis) and Wilcoxon Sign-Rank Test1,2,3,4 and it is concluded that the IMU stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy
## Key Points
1. Buy, Sell and Hold Signals
2. Is now good time to invest?
3. Trust metric by Neural Network
## IMU Target Price Prediction Modeling Methodology
We consider IMUGENE LIMITED Decision Process with Modular Neural Network (Market Volatility Analysis) where A is the set of discrete actions of IMU stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Wilcoxon Sign-Rank Test)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Modular Neural Network (Market Volatility Analysis)) X S(n):→ (n+4 weeks) $\begin{array}{l}\int {e}^{x}\mathrm{rx}\end{array}$
n:Time series to forecast
p:Price signals of IMU stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## IMU Stock Forecast (Buy or Sell) for (n+4 weeks)
Sample Set: Neural Network
Stock/Index: IMU IMUGENE LIMITED
Time series to forecast n: 10 Mar 2023 for (n+4 weeks)
According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for IMUGENE LIMITED
1. Fluctuation around a constant hedge ratio (and hence the related hedge ineffectiveness) cannot be reduced by adjusting the hedge ratio in response to each particular outcome. Hence, in such circumstances, the change in the extent of offset is a matter of measuring and recognising hedge ineffectiveness but does not require rebalancing.
2. For the purpose of applying paragraphs B4.1.11(b) and B4.1.12(b), irrespective of the event or circumstance that causes the early termination of the contract, a party may pay or receive reasonable compensation for that early termination. For example, a party may pay or receive reasonable compensation when it chooses to terminate the contract early (or otherwise causes the early termination to occur).
3. When an entity designates a financial liability as at fair value through profit or loss, it must determine whether presenting in other comprehensive income the effects of changes in the liability's credit risk would create or enlarge an accounting mismatch in profit or loss. An accounting mismatch would be created or enlarged if presenting the effects of changes in the liability's credit risk in other comprehensive income would result in a greater mismatch in profit or loss than if those amounts were presented in profit or loss
4. A single hedging instrument may be designated as a hedging instrument of more than one type of risk, provided that there is a specific designation of the hedging instrument and of the different risk positions as hedged items. Those hedged items can be in different hedging relationships.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
IMUGENE LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. IMUGENE LIMITED prediction model is evaluated with Modular Neural Network (Market Volatility Analysis) and Wilcoxon Sign-Rank Test1,2,3,4 and it is concluded that the IMU stock is predictable in the short/long term. According to price forecasts for (n+4 weeks) period, the dominant strategy among neural network is: Buy
### IMU IMUGENE LIMITED Financial Analysis*
Rating Short-Term Long-Term Senior
Outlook*Ba1Ba1
Income StatementCC
Balance SheetCaa2Ba1
Leverage RatiosBa3Ba3
Cash FlowCC
Rates of Return and ProfitabilityBa2Baa2
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 76 out of 100 with 662 signals.
## References
1. Hornik K, Stinchcombe M, White H. 1989. Multilayer feedforward networks are universal approximators. Neural Netw. 2:359–66
2. Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, et al. 2018a. Double/debiased machine learning for treatment and structural parameters. Econom. J. 21:C1–68
3. Ashley, R. (1983), "On the usefulness of macroeconomic forecasts as inputs to forecasting models," Journal of Forecasting, 2, 211–223.
4. E. Altman. Constrained Markov decision processes, volume 7. CRC Press, 1999
5. D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of decentralized control of Markov Decision Processes. In UAI '00: Proceedings of the 16th Conference in Uncertainty in Artificial Intelligence, Stanford University, Stanford, California, USA, June 30 - July 3, 2000, pages 32–37, 2000.
6. Bessler, D. A. S. W. Fuller (1993), "Cointegration between U.S. wheat markets," Journal of Regional Science, 33, 481–501.
7. Bessler, D. A. T. Covey (1991), "Cointegration: Some results on U.S. cattle prices," Journal of Futures Markets, 11, 461–474.
Frequently Asked QuestionsQ: What is the prediction methodology for IMU stock?
A: IMU stock prediction methodology: We evaluate the prediction models Modular Neural Network (Market Volatility Analysis) and Wilcoxon Sign-Rank Test
Q: Is IMU stock a buy or sell?
A: The dominant strategy among neural network is to Buy IMU Stock.
Q: Is IMUGENE LIMITED stock a good investment?
A: The consensus rating for IMUGENE LIMITED is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of IMU stock?
A: The consensus rating for IMU is Buy.
Q: What is the prediction period for IMU stock?
A: The prediction period for IMU is (n+4 weeks) | 2023-03-26 13:06:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6293609738349915, "perplexity": 7991.550201105688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00343.warc.gz"} |