https://s9672.gridserver.com/me-does-rjabav/33e7ae-software-quality-metrics
|
## software quality metrics

A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute. High-quality, bug-free software development is impossible without testing, and calculated metrics, usually compiled by a QA lead, are used to determine the progress of the project. Software metrics can be classified into three categories: product metrics, process metrics, and project metrics. Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, and quality level; process metrics can be used to improve software development and maintenance; and project metrics describe the project itself, with examples including the number of software developers, the staffing pattern over the life cycle of the software, cost, schedule, and productivity. An agile example is committed stories versus delivered results that meet the "doneness" criteria.

Quality is one of the most valuable aspects of a product, which is why every entrepreneur needs to know the main metrics that define software quality. Software quality measures whether or not software fulfills its functional and nonfunctional requirements. Engineers and business owners often set different priorities: the engineers simply want to provide the best service, while business owners prioritize commercial success and customer loyalty. If you want to have a robust codebase, a low defect count is especially important, and the sooner you determine the errors, the faster, easier, and cheaper it is to fix them; experienced specialists know it is easier to prevent issues than to deal with them after release. Because software is intangible, programmer productivity cannot be measured directly, so metrics are also what help you understand how much time the team needs for each stage and how much business functionality you get from the product.

Test coverage metrics measure the test effort and help answer the question "How much of the application was tested?" Defect density during machine testing and the defect arrival pattern during machine testing are typical in-process quality metrics, and defect removal effectiveness is called early defect removal when it is applied to the front end of the process and phase effectiveness when it is applied to specific phases. A short fix response time leads to customer satisfaction. Such maintenance metrics are needed because development organizations cannot investigate and fix all the reported problems immediately; note, however, that if the number of defects is large, a small value of a percentage metric will show an overly optimistic picture. Used in the format of a trend chart, these metrics provide meaningful information for managing the maintenance process.
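Defect density and defect removal effectiveness are easy to make concrete in a few lines of Python. The sketch below is only an illustration: the function names and sample figures are invented and are not taken from any real project.

```python
def defect_density(defects_found, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def defect_removal_effectiveness(removed_before_release, found_after_release):
    """Share of all known defects that were removed before the release."""
    total_defects = removed_before_release + found_after_release
    return removed_before_release / total_defects

# Hypothetical figures for a single release.
print(defect_density(defects_found=120, kloc=85))   # ~1.41 defects per KLOC
print(defect_removal_effectiveness(114, 6))         # 0.95, i.e. 95% of known defects removed pre-release
```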
A software product can be considered to be in the maintenance phase when its development is complete and it has been released to the market. Although not much can be done at that point to alter the quality of the product, fixes can still be carried out to eliminate defects as soon as possible and with excellent fix quality. During this interval, defect arrivals by time interval and customer problem calls are the de facto quality metrics; tracked per year or per release, the arrival pattern even helps to estimate how many engineers a project will need. Formally, one speaks of applying a metric to a software unit, and software metrics as a discipline is a standard of measure that covers many activities involving some degree of measurement. Note that if poor-quality software is produced quickly, a team may appear more productive than one that produces reliable and easy-to-maintain software when productivity is measured only over the development phase.

Product metrics are used to evaluate the state of the product, tracing risks and uncovering prospective problem areas. Code that is more complex is likely to be less maintainable, so it is also important to check testability and understandability, and refactoring helps to clean up the codebase and make it much easier to work with; typical goals include identifying areas for improvement and reducing costs. The overall defect density during testing provides only a summary of the defects, whereas the defect arrival pattern gives more detail. The defect removal model is a key concept here: the higher the value of the removal metric, the more effective the development process and the fewer the defects passed to the next phase or to the field. Because a large percentage of programming defects is related to design problems, conducting formal reviews or functional verifications at the front end enhances the defect removal capability of the process and reduces errors in the software. Texts on software quality engineering also describe the key metrics used by several major software developers and discuss software metrics data collection, and lower values of the defect measures imply higher build or release quality.

Responsiveness in maintenance is measured as well. The cycle time starts with the app's development and ends when it is complete, while the lead time starts with receiving the order and finishes with its delivery; coding time is simply the time the developers spend on coding. The percentage of fixes that miss their response-time criteria (delinquent fixes) is calculated as:

$\frac{Number \: of \: fixes \: that\: exceeded \: the \:response \:time\:criteria\:by\:severity\:level}{Number \: of \: fixes \: delivered \: in \:a \:specified \:time} \times 100\%$
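A minimal Python sketch of that percentage follows; the severity levels, response-time criteria, and fix records below are all invented for illustration.

```python
# Percent of fixes that exceeded the response-time criteria for their
# severity level, out of all fixes delivered in the period.
RESPONSE_TIME_CRITERIA_DAYS = {1: 1, 2: 3, 3: 7, 4: 30}   # assumed per-severity targets

fixes = [                                                  # invented fix records
    {"severity": 1, "days_to_fix": 2},                     # exceeded (limit: 1 day)
    {"severity": 2, "days_to_fix": 2},
    {"severity": 3, "days_to_fix": 10},                    # exceeded (limit: 7 days)
    {"severity": 4, "days_to_fix": 12},
]

late = sum(1 for f in fixes
           if f["days_to_fix"] > RESPONSE_TIME_CRITERIA_DAYS[f["severity"]])
print(f"{100.0 * late / len(fixes):.1f}% of fixes missed their criteria")  # 50.0%
```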
Quality metrics are one of the basic SQA tools. According to IEEE (1990), a quality metric is a quantitative measure of the degree to which an item possesses a given quality attribute, and quality metrics should help management in three basic areas: controlling software development and maintenance projects, supporting decision making, and … Businesses need effective benchmarks to measure success, to see where changes need to be made, and to determine which projects have had the greatest impact. Simply put, a metric is a unit used for describing an attribute; a direct final metric for the factor reliability, for example, could be faults per 1,000 lines of code (KLOC) with a target value of, say, one fault per 1,000 lines of code (LOC). Metrics should not depend on any programming language, and because the source code is usually spread over one or more individual files, a metric can, depending on its type, be applied to the whole code base or only to parts of it.

Software metrics are valuable for many reasons, including measuring software performance, planning work items, measuring productivity, and many other uses. Software testing metrics are the quantitative measures used to estimate the progress, quality, productivity, and health of the software testing process; they help the team keep track of software quality at every stage of the development cycle and provide the information needed to control and reduce the number of errors. More broadly, software quality metrics provide the input organizations need to make important changes in application source code or development practices to boost resiliency, robustness, and security as programs are rapidly created or altered within a complex, multi-tier infrastructure; without such metrics, organizations attempt their transformation blindly, with limited capacity to show results, including the business outcomes demanded of today's technology organizations. Measuring conformance to initial requirements is important if you want to improve your software development life cycle, and quality measurement is not restricted to counting defects or vulnerabilities but also covers … Maintainability, portability, and reusability matter as well, and reusability depends on the availability of modularity or loose coupling. Pull requests are another useful signal: they can show the complexity of your project, the pull-request engagement, and your team's interaction. If it comes to improving an already existing and outdated product, use refactoring and introduce changes gradually, since some parts may need to be updated. The quality goal for the maintenance process, of course, is zero defective fixes without delinquency.
The pattern of defect arrivals gives more information about the different quality levels reached in the field than a total count alone; these metrics were designed to track defect occurrences during formal machine testing, and in-process quality metrics are used to monitor defects during testing. The metric of percent defective fixes is the percentage of all fixes in a time interval that are defective, and the important elements of fix responsiveness are customer expectations, the agreed-to fix time, and the ability to meet one's commitment to the customer. The time the engineers spend to come up with ideas, design, develop, and finish the software project, the volume of modified code in the product, your dependencies, the number of active days, and the rate of delivery also matter: in agile development environments, new iterations of software are delivered to users quickly.

Metrics are a touchy subject, and assessing the quality of software can be difficult and often subjective, but most software quality metric experts do agree on a few things: it is imperative to understand the different types of metrics, it is better not to rely on the developers' judgment alone, it is especially useful to monitor subsequent releases of a product in the same development organization, and having a standard makes the project easier to use and improves software quality. Quality assurance metrics are also necessary for ISO 9001 implementation, and it is important to control the work's progress and result so that you always have answers to these questions.

Developers and company managers always worry about the final project's quality, and this applies not only to the product itself but also to your application, website, chatbot, delivery, and support services. Customer satisfaction is often measured by customer survey data on a five-point scale, and satisfaction with the overall quality of the product and its specific dimensions is usually obtained through various methods of customer surveys; a related metric shows the customer's loyalty level, ranging from dissatisfied to the most satisfied, and quality-in-use metrics cover the same ground from the user's point of view. The problems metric is usually expressed in terms of Problems per User-Month (PUM); it is more closely associated with process and product metrics than with project metrics and is usually calculated for each month after the software is released to the market, and also as monthly averages by year.
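As a small sketch, PUM can be computed as the problems reported in a period divided by the license-months of use in that period (the usual textbook normalization, assumed here); the figures below are invented.

```python
# Problems per user-month (PUM), with invented numbers.
problems_reported = 180        # problems customers reported during the quarter
installed_licenses = 2_000     # licenses in use during the quarter
months_in_period = 3

license_months = installed_licenses * months_in_period
pum = problems_reported / license_months
print(f"PUM = {pum:.3f} problems per user-month")   # PUM = 0.030
```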
Usually, the percent of completely satisfied customers is the satisfaction figure that gets reported, and problem determination is done on the same five-point-scale data; several metrics with slight variations can be constructed and used, depending on the purpose of the analysis, because getting an accurate idea about quality is not the same as keeping a simple count of reported problems. A manual testing metrics set comprises two further kinds of measures: base metrics, the raw data captured by the test cases (such as the number of test cases and the number of test cases executed), and calculated metrics, which are usually prepared by the QA lead. Executed tests aim to determine whether the software works properly and how the app responds in different scenarios, either exhaustively or with the assistance of sampling, much as a hotel might randomly sample rooms that have been cleaned to make sure they meet the expected standard. Security deserves particular attention, since the number of hacker attacks rises every day. Other useful measures include the mean time of all problems from open to close, the error density per thousand lines of code, the software size expressed as lines of code, structural complexity, and the number of lines of code a developer can produce every year. Functional requirements describe what the software should do, metrics can be computed for different stages of the SDLC, and the engineers should also take into account lead and cycle time, the tests already performed, task scopes, and the team's productivity.

As Ronald Cummings-John, co-founder of Global App Testing, puts it, "metrics run the world." With metrics, PMs and CEOs can sort out objectives, priorities, and risks, make deliberate compromises, optimize the project, and avoid costly mistakes; metrics reduce misunderstandings and ambiguities in complex projects, can be presented in board meetings, and let specialists instantly estimate, control, automate, isolate, and fix the project's issues, so you no longer have difficulties while tracking, identifying, or prioritizing them. Used consistently, they support reliable forecasts and let you plan future products from already existing analyses, which is how you choose the software quality metrics that matter most for your product and its target users.
|
2022-05-22 16:07:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24831730127334595, "perplexity": 2815.9688122804405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545875.39/warc/CC-MAIN-20220522160113-20220522190113-00708.warc.gz"}
|
https://www.cleanslateeducation.co.uk/t/edexel-2017-2/704
|
# Edexel_2017_2
(a)
Differentiating with respect to x we have:
\displaystyle \frac x {18} + \frac {2y} {25} \frac {\mathrm dy} {\mathrm dx} = 0
so:
\displaystyle \frac {2y} {25} \frac {\mathrm dy} {\mathrm dx} = -\frac x {18}
giving:
\displaystyle \frac {\mathrm dy} {\mathrm dx} = -\left(\frac {25} {36}\right) \left(\frac x y\right)
So at the point (6 \cos \theta, 5 \sin \theta) we have:
\displaystyle \frac {\mathrm dy} {\mathrm dx} = -\left(\frac {25} {36}\right) \left(\frac {6 \cos \theta} {5 \sin \theta}\right) = -\frac {5 \cos \theta} {6 \sin \theta}
So the gradient of the normal at P is:
\displaystyle \frac {6 \sin \theta} {5 \cos \theta}
So the equation of l is:
\displaystyle y - 5 \sin \theta = \frac {6 \sin \theta} {5 \cos \theta} (x - 6 \cos \theta)
multiplying through by 5 \cos \theta we have:
5y \cos \theta - 25 \cos \theta \sin \theta = 6x \sin \theta - 36 \sin \theta \cos \theta
Rearranging we get:
6x \sin \theta - 5y \cos \theta = 11 \sin \theta \cos \theta
(b)
At the point Q we have y = 0, so:
6x \sin \theta = 11 \sin \theta \cos \theta
Since 0 < \theta < \dfrac \pi 2, we have \sin \theta \ne 0, so:
\displaystyle x = \frac {11 \cos \theta} 6
so:
\displaystyle Q = \left(\frac {11 \cos \theta} 6, 0\right)
giving:
\displaystyle |OQ| = \frac {11 \cos \theta} 6
The perpendicular from P to the x-axis is the line drawn straight down from P to the x-axis. The foot of this line is its intersection with the x-axis, so R = (6 \cos \theta, 0), meaning that |OR| = 6 \cos \theta.
So:
\displaystyle \frac {|OQ|} {|OR|} = \frac {11 \cos \theta} 6 \times \frac 1 {6 \cos \theta} = \frac {11} {36}
The eccentricity of the ellipse e satisfies:
25 = 36 (1 - e^2)
So:
\displaystyle e^2 = 1 - \frac {25} {36} = \frac {11} {36} = \frac {|OQ|} {|OR|}
as required.
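As a quick numerical sanity check (not part of the required working), take \theta = \dfrac \pi 3, so \cos \theta = \dfrac 1 2 and \sin \theta = \dfrac {\sqrt 3} 2. Then l becomes:
\displaystyle 3 \sqrt 3 \, x - \frac 5 2 y = \frac {11 \sqrt 3} 4
Setting y = 0 gives x = \dfrac {11} {12} = \dfrac {11 \cos \theta} 6, while |OR| = 6 \cos \theta = 3, so:
\displaystyle \frac {|OQ|} {|OR|} = \frac {11/12} {3} = \frac {11} {36}
matching the eccentricity calculation above.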
|
2022-08-15 10:06:05
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000073909759521, "perplexity": 1946.5075216719392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00122.warc.gz"}
|
http://www.cs.mcgill.ca/~rwest/wikispeedia/wpcd/wp/p/Pi.htm
|
# Pi
When a circle's diameter is 1, its circumference is π.
The mathematical constant π is an irrational real number, approximately equal to 3.14159, which is the ratio of a circle's circumference to its diameter in Euclidean geometry, and has many uses in mathematics, physics, and engineering. It is also known as Archimedes' constant (not to be confused with an Archimedes number) and as Ludolph's number.
## The letter π
The name of the Greek letter π is pi, and this spelling is used in typographical contexts where the Greek letter is not available or where its usage could be problematic. When referring to this constant, the symbol π is always pronounced like "pie" in English, the conventional English pronunciation of the letter.
The constant is named "π" because it is the first letter of the Greek words περιφέρεια 'periphery' and περίμετρος 'perimeter', i.e. 'circumference'.
π is Unicode character U+03C0 ("GREEK SMALL LETTER PI").
## Definition
Area of the circle = π × area of the shaded square
In Euclidean plane geometry, π is defined either as the ratio of a circle's circumference to its diameter, or as the ratio of a circle's area to the area of a square whose side is the radius. The constant π may be defined in other ways that avoid the concepts of arc length and area, for example as twice the smallest positive x for which cos(x) = 0. The formulæ below illustrate other (equivalent) definitions.
## Numerical value
The numerical value of π truncated to 50 decimal places is:
3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510
With the 50 digits given here, the circumference of any circle that would fit in the observable universe (ignoring the curvature of space) could be computed with an error less than the size of a proton. Nevertheless, the exact value of π has an infinite decimal expansion: its decimal expansion never ends and does not repeat, since π is an irrational number (and indeed, a transcendental number). This infinite sequence of digits has fascinated mathematicians and laymen alike, and much effort over the last few centuries has been put into computing more digits and investigating the number's properties. Despite much analytical work, and supercomputer calculations that have determined over 1 trillion digits of π, no simple pattern in the digits has ever been found. Digits of π are available on many web pages, and there is software for calculating π to billions of digits on any personal computer. See history of numerical approximations of π.
## Calculating π
Most formulas given for calculating the digits of π have desirable mathematical properties, but may be difficult to understand without a background in trigonometry and calculus. Nevertheless, it is possible to compute π using techniques involving only algebra and geometry.
For example:
$\pi = 4-\frac{4}{3}+\frac{4}{5}-\frac{4}{7}+\frac{4}{9}-\frac{4}{11}...\!$
This series is easy to understand, but is impractical in use as it converges to π very slowly. It requires more than 600 terms just to narrow its value to 3.14 (two places), and billions of terms to achieve accuracy to ten places.
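A minimal Python sketch of the partial sums makes the slow convergence easy to see (the term counts below are chosen arbitrarily):

```python
# Partial sums of 4 - 4/3 + 4/5 - 4/7 + ..., which converge slowly to pi.
def leibniz_partial_sum(terms):
    return sum((-1) ** k * 4.0 / (2 * k + 1) for k in range(terms))

for n in (10, 1_000, 100_000):
    print(n, leibniz_partial_sum(n))
# 10      -> about 3.0418
# 1000    -> about 3.1406
# 100000  -> about 3.14158  (still far from full double precision)
```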
One common classroom activity for experimentally measuring the value of π involves drawing a large circle on graph paper, then measuring its approximate area by counting the number of cells inside the circle. Since the area of the circle is known to be
$a = \pi r^2,\,\!$
π can be derived using algebra:
$\pi = a/r^2.\,\!$
This process works mathematically as well as experimentally. If a circle with radius r is drawn with its centre at the point (0,0), any point whose distance from the origin is less than r will fall inside the circle. The Pythagorean theorem gives the distance from any point (x,y) to the centre:
$d=\sqrt{x^2+y^2}.$
Mathematical "graph paper" is formed by imagining a 1x1 square centered around each point (x,y), where x and y are integers between -r and r. Squares whose centre resides inside the circle can then be counted by testing whether, for each point (x,y),
$\sqrt{x^2+y^2} < r.$
The total number of points satisfying that condition thus approximates the area of the circle, which then can be used to calculate an approximation of π.
Mathematically, this formula can be written:
$\pi \approx \frac{1}{r^2} \sum_{x=-r}^{r} \; \sum_{y=-r}^{r} \Big(1\hbox{ if }\sqrt{x^2+y^2} < r,\; 0\hbox{ otherwise}\Big).$
In other words, begin by choosing a value for r. Consider all points (x,y) in which both x and y are integers between -r and r. Starting at 0, add 1 for each point whose distance to the origin (0,0) is less than r. When finished, divide the sum, representing the area of a circle of radius r, by r² to find the approximation of π. Closer approximations can be produced by using larger values of r.
For example, if r is set to 2, then the points (-2,-2), (-2,-1), (-2,0), (-2,1), (-2,2), (-1,-2), (-1,-1), (-1,0), (-1,1), (-1,2), (0,-2), (0,-1), (0,0), (0,1), (0,2), (1,-2), (1,-1), (1,0), (1,1), (1,2), (2,-2), (2,-1), (2,0), (2,1), (2,2) are considered. The 9 points (-1,-1), (-1,0), (-1,1), (0,-1), (0,0), (0,1), (1,-1), (1,0), (1,1) are found to be inside the circle, so the approximate area is 9, and π is calculated to be approximately 2.25. Results for some values of r are shown in the table below:
| r | area | approximation of π |
|---|---|---|
| 2 | 9 | 2.25 |
| 3 | 25 | 2.777778 |
| 4 | 45 | 2.8125 |
| 5 | 69 | 2.76 |
| 10 | 305 | 3.05 |
| 20 | 1245 | 3.1125 |
| 100 | 31397 | 3.1397 |
| 1000 | 3141521 | 3.141521 |
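A short Python version of the counting procedure described above; it uses the same strict "distance less than r" rule and reproduces the rows of the table.

```python
def lattice_approximation(r):
    # Count integer points (x, y) with x^2 + y^2 < r^2, then divide by r^2.
    inside = sum(1
                 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 if x * x + y * y < r * r)
    return inside, inside / r ** 2

for r in (2, 5, 10, 100):
    print(r, *lattice_approximation(r))
# 2 9 2.25
# 5 69 2.76
# 10 305 3.05
# 100 31397 3.1397
```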
Similarly, the more complex approximations of π given below involve repeated calculations of some sort, yielding closer and closer approximations with increasing numbers of calculations.
## Properties
The constant π is an irrational number; that is, it cannot be written as the ratio of two integers. This was proven in 1761 by Johann Heinrich Lambert.
Furthermore, π is also transcendental, as was proven by Ferdinand von Lindemann in 1882. This means that there is no polynomial with rational coefficients of which π is a root. An important consequence of the transcendence of π is the fact that it is not constructible. Because the coordinates of all points that can be constructed with compass and straightedge are constructible numbers, it is impossible to square the circle: that is, it is impossible to construct, using compass and straightedge alone, a square whose area is equal to the area of a given circle.
## History
### Use of the symbol π
Often William Jones' book A New Introduction to Mathematics from 1706 is cited as the first text where the Greek letter π was used for this constant, but this notation became particularly popular after Leonhard Euler adopted it some years later ( cf History of π).
### Early approximations
Main article: History of numerical approximations of π.
The value of π has been known in some form since antiquity. As early as the 19th century BC, Babylonian mathematicians were using π = 25/8, which is within 0.5% of the true value.
The Egyptian scribe Ahmes wrote the oldest known text to give an approximate value for π, citing a Middle Kingdom papyrus, corresponding to a value of 256 divided by 81 or 3.160.
It is sometimes claimed that the Bible states that π = 3, based on a passage in 1 Kings 7:23 giving measurements for a round basin as having a 10 cubit diameter and a 30 cubit circumference. Rabbi Nehemiah explained this by the diameter being from outside to outside while the circumference was the inner brim, which gives an approximate value of ~3.14; but it may suffice that the measurements are given in round numbers. Also, the basin may not have been exactly circular, though the verse claims that "...it was completely round." (NKJ)
Principle of Archimedes' method to approximate π.
Archimedes of Syracuse discovered, by considering the perimeters of 96-sided polygons inscribed in a circle and circumscribed about it, that π is between 223/71 and 22/7. The average of these two values is roughly 3.1419.
The Chinese mathematician Liu Hui computed π to 3.141014 (good to three decimal places) in AD 263 and suggested that 3.14 was a good approximation.
The Indian mathematician and astronomer Aryabhata in the 5th century gave the approximation π = 62832/20000 = 3.1416, correct when rounded off to four decimal places. He also acknowledged the fact that this was an approximation, which is quite advanced for the time period.
The Chinese mathematician and astronomer Zu Chongzhi computed π to be between 3.1415926 and 3.1415927 and gave two approximations of π, 355/113 and 22/7, in the 5th century.
The Indian mathematician and astronomer Madhava of Sangamagrama in the 14th century computed the value of π after transforming the power series expansion of π/4 into the form
$\pi = \sqrt{12}\left(1-{1\over 3\cdot3}+{1\over5\cdot 3^2}-{1\over7\cdot 3^3}+\cdots\right)$
and using the first 21 terms of this series to compute a rational approximation of π correct to 11 decimal places as 3.14159265359. By adding a remainder term to the original power series of π/4, he was able to compute π to an accuracy of 13 decimal places.
The Persian astronomer Ghyath ad-din Jamshid Kashani (1350-1439) correctly computed π to 9 digits in the base of 60, which is equivalent to 16 decimal digits as:
2π = 6.2831853071795865
By 1610, the German mathematician Ludolph van Ceulen had finished computing the first 35 decimal places of π. It is said that he was so proud of this accomplishment that he had them inscribed on his tombstone.
In 1789, the Slovene mathematician Jurij Vega improved John Machin's formula from 1706 and calculated the first 140 decimal places for π of which the first 126 were correct and held the world record for 52 years until 1841, when William Rutherford calculated 208 decimal places of which the first 152 were correct.
The English amateur mathematician William Shanks, a man of independent means, spent over 20 years calculating π to 707 decimal places (accomplished in 1873). In 1944, D. F. Ferguson found that Shanks had made a mistake in the 528th decimal place, and that all succeeding digits were fallacious. By 1947, Ferguson had recalculated pi to 808 decimal places (with the aid of a mechanical desk calculator).
## Numerical approximations
Due to the transcendental nature of π, there are no closed form expressions for the number in terms of algebraic numbers and functions. Formulæ for calculating π using elementary arithmetic invariably include notation such as "...", which indicates that the formula is really a formula for an infinite sequence of approximations to π. The more terms included in a calculation, the closer to π the result will get, but none of the results will be π exactly.
Consequently, numerical calculations must use approximations of π. For many purposes, 3.14 or 22/7 is close enough, although engineers often use 3.1416 (5 significant figures) or 3.14159 (6 significant figures) for more precision. The approximations 22/7 and 355/113, with 3 and 7 significant figures respectively, are obtained from the simple continued fraction expansion of π. The approximation 355/113 (3.1415929…) is the best one that may be expressed with a three-digit or four-digit numerator and denominator.
The earliest numerical approximation of π is almost certainly the value 3. In cases where little precision is required, it may be an acceptable substitute. That 3 is an underestimate follows from the fact that it is the ratio of the perimeter of an inscribed regular hexagon to the diameter of the circle.
All further improvements to the above mentioned "historical" approximations were done with the help of computers.
## Formulæ
### Geometry
The constant π appears in many formulæ in geometry involving circles and spheres.
| Geometrical shape | Formula |
|---|---|
| Circumference of circle of radius r and diameter d | $C = 2 \pi r = \pi d \,\!$ |
| Area of circle of radius r | $A = \pi r^2 = \frac{1}{4} \pi d^2 \,\!$ |
| Area of ellipse with semiaxes a and b | $A = \pi a b \,\!$ |
| Volume of sphere of radius r and diameter d | $V = \frac{4}{3} \pi r^3 = \frac{1}{6} \pi d^3 \,\!$ |
| Surface area of sphere of radius r and diameter d | $A = 4 \pi r^2 = \pi d^2 \,\!$ |
| Volume of cylinder of height h and radius r | $V = \pi r^2 h \,\!$ |
| Surface area of cylinder of height h and radius r | $A = 2 (\pi r^2) + ( 2 \pi r)h = 2 \pi r (r+h) \,\!$ |
| Volume of cone of height h and radius r | $V = \frac{1}{3} \pi r^2 h \,\!$ |
| Surface area of cone of height h and radius r | $A = \pi r \sqrt{r^2 + h^2} + \pi r^2 = \pi r (r + \sqrt{r^2 + h^2}) \,\!$ |
All of these formulae are a consequence of the formula for circumference. For example, the area of a circle of radius R can be accumulated by summing annuli of infinitesimal width using the integral $A = \int_0^R 2\pi r\,dr = \pi R^2$. The others concern a surface or solid of revolution.
Also, the angle measure of 180° ( degrees) is equal to π radians.
### Analysis
Many formulas in analysis contain π, including infinite series (and infinite product) representations, integrals, and so-called special functions.
• The area of the unit disc:
$2\int_{-1}^1 \sqrt{1-x^2}\,dx = \pi$
• Half the circumference of the unit circle:
$\int_{-1}^1\frac{dx}{\sqrt{1-x^2}} = \pi$
• François Viète, 1593 ( proof):
$\frac{\sqrt2}2 \cdot \frac{\sqrt{2+\sqrt2}}2 \cdot \frac{\sqrt{2+\sqrt{2+\sqrt2}}}2 \cdot \cdots = \frac2\pi$
• Leibniz formula ( proof):
$\sum_{n=0}^{\infty} \frac{(-1)^{n}}{2n+1} = \frac{1}{1} - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \frac{\pi}{4}$
• Wallis product, 1655 ( proof):
$\prod_{n=1}^{\infty} \left ( \frac{n+1}{n} \right )^{(-1)^{n-1}} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdot \frac{8}{7} \cdot \frac{8}{9} \cdots = \frac{\pi}{2}$
• Faster product (see Sondow, 2005 and Sondow web page)
$\left ( \frac{2}{1} \right )^{1/2} \left (\frac{2^2}{1 \cdot 3} \right )^{1/4} \left (\frac{2^3 \cdot 4}{1 \cdot 3^3} \right )^{1/8} \left (\frac{2^4 \cdot 4^4}{1 \cdot 3^6 \cdot 5} \right )^{1/16} \cdots = \frac{\pi}{2}$
where the nth factor is the 2nth root of the product
$\prod_{k=0}^n (k+1)^{(-1)^{k+1}{n \choose k}}.$
• Symmetric formula (see Sondow, 1997)
$\frac {\displaystyle \prod_{n=1}^{\infty} \left (1 + \frac{1}{4n^2-1} \right )}{\displaystyle\sum_{n=1}^{\infty} \frac {1}{4n^2-1}} = \frac {\displaystyle\left (1 + \frac{1}{3} \right ) \left (1 + \frac{1}{15} \right ) \left (1 + \frac{1}{35} \right ) \cdots} {\displaystyle \frac{1}{3} + \frac{1}{15} + \frac{1}{35} + \cdots} = \pi$
• Bailey-Borwein-Plouffe algorithm (See Bailey, 1997 and Bailey web page)
$\sum_{k=0}^\infty\frac{1}{16^k}\left(\frac {4}{8k+1} - \frac {2}{8k+4} - \frac {1}{8k+5} - \frac {1}{8k+6}\right) = \pi$
• Chebyshev series Y. Luke, Math. Tabl. Aids Comp. 11 (1957) 16
$\sum_{k=0}^\infty\frac{(-1)^k(\sqrt{2}-1)^{2k+1}}{2k+1} = \frac{\pi}{8}.$
$\sum_{k=0}^\infty\frac{(-1)^k(2-\sqrt{3})^{2k+1}}{2k+1}=\frac{\pi}{12}.$
• An integral formula from calculus (see also Error function and Normal distribution):
$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$
• Basel problem, first solved by Euler (see also Riemann zeta function):
$\zeta(2)= \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6}$
$\zeta(4)= \frac{1}{1^4} + \frac{1}{2^4} + \frac{1}{3^4} + \frac{1}{4^4} + \cdots = \frac{\pi^4}{90}$
and generally, ζ(2n) is a rational multiple of $\pi^{2n}$ for positive integer n
• Gamma function evaluated at 1/2:
$\Gamma\left({1 \over 2}\right)=\sqrt{\pi}$
• Stirling's approximation:
$n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n$
• Euler's identity (called by Richard Feynman "the most remarkable formula in mathematics"):
$e^{i \pi} + 1 = 0\;$
• The asymptotic growth of the summatory function of Euler's totient φ:
$\sum_{k=1}^{n} \phi (k) \sim \frac{3n^2}{\pi^2}$
• An application of the residue theorem
$\oint\frac{dz}{z}=2\pi i ,$
where the path of integration is a closed curve around the origin, traversed in the standard anticlockwise direction.
### Continued fractions
Besides its simple continued-fraction representation [3; 7, 15, 1, 292, 1, 1, …], which displays no discernible pattern, π has many generalized continued-fraction representations generated by a simple rule, including these two.
$\frac{4}{\pi} = 1 + \cfrac{1}{3 + \cfrac{4}{5 + \cfrac{9}{7 + \cfrac{16}{9 + \cfrac{25}{11 + \cfrac{36}{13 + \cfrac{49}{\ddots}}}}}}}$
$\pi=3 + \cfrac{1}{6 + \cfrac{9}{6 + \cfrac{25}{6 + \cfrac{49}{6 + \cfrac{81}{6 + \cfrac{121}{\ddots\,}}}}}}$
(Other representations are available at The Wolfram Functions Site.)
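The second of these fractions can be checked numerically by truncating it at some depth and evaluating from the bottom up; here is a minimal Python sketch (the depth of 200 is an arbitrary choice):

```python
from fractions import Fraction

def pi_from_continued_fraction(depth=200):
    # pi = 3 + 1^2/(6 + 3^2/(6 + 5^2/(6 + ...))), truncated after `depth` levels
    # and evaluated from the innermost term outward.
    acc = Fraction(0)
    for k in range(depth, 0, -1):
        acc = Fraction((2 * k - 1) ** 2) / (6 + acc)
    return 3 + acc

print(float(pi_from_continued_fraction()))  # agrees with pi to roughly seven decimal places at this depth
```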
### Number theory
Some results from number theory:
• The probability that two randomly chosen integers are coprime is 6/π2.
• The probability that a randomly chosen integer is square-free is 6/π2.
• The average number of ways to write a positive integer as the sum of two perfect squares (order matters but not sign) is π/4.
Here, "probability", "average", and "random" are taken in a limiting sense, e.g. we consider the probability for the set of integers {1, 2, 3,…, N}, and then take the limit as N approaches infinity.
• The product of (1 − 1/p2) over the primes, p, is 6/π2.
The theory of elliptic curves and complex multiplication derives the approximation
$\pi \approx {\ln(640320^3+744)\over\sqrt{163}}$
which is valid to about 30 digits.
### Dynamical systems and ergodic theory
Consider the recurrence relation
$x_{i+1} = 4 x_i (1 - x_i) \,$
Then for almost every initial value x0 in the unit interval [0,1],
$\lim_{n \to \infty} \frac{1}{n} \sum_{i = 1}^{n} \sqrt{x_i} = \frac{2}{\pi}$
This recurrence relation is the logistic map with parameter r = 4, known from dynamical systems theory. See also: ergodic theory.
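A quick numerical check of this limit in Python; the starting value and iteration count below are arbitrary, floating-point round-off perturbs the chaotic trajectory, and the convergence is only statistical, so expect just a few correct digits.

```python
def pi_from_logistic_map(iterations=1_000_000, x0=0.3):
    # Iterate x_{i+1} = 4 x_i (1 - x_i) and average sqrt(x_i);
    # that average tends to 2/pi for almost every x0 in (0, 1).
    x, total = x0, 0.0
    for _ in range(iterations):
        x = 4.0 * x * (1.0 - x)
        total += x ** 0.5
    return 2.0 * iterations / total    # mean(sqrt(x_i)) ~ 2/pi, so pi ~ 2 / mean

print(pi_from_logistic_map())  # roughly 3.14
```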
### Physics
The number π appears routinely in equations describing fundamental principles of the Universe, due in no small part to its relationship to the nature of the circle and, correspondingly, spherical coordinate systems.
• The cosmological constant:
$\Lambda = {{8\pi G} \over {3c^2}} \rho$
• Heisenberg's uncertainty principle:
$\Delta x \Delta p \ge \frac{h}{4\pi}$
• Einstein's field equation of general relativity:
$R_{ik} - {g_{ik} R \over 2} + \Lambda g_{ik} = {8 \pi G \over c^4} T_{ik}$
• Coulomb's law for the electric force:
$F = \frac{\left|q_1q_2\right|}{4 \pi \epsilon_0 r^2}$
• Magnetic permeability of free space:
$\mu_0 = 4 \pi \cdot 10^{-7}\,\mathrm{N/A^2}\,$
### Probability and statistics
In probability and statistics, there are many distributions whose formulæ contain π, including:
• probability density function (pdf) for the normal distribution with mean μ and standard deviation σ:
$f(x) = {1 \over \sigma\sqrt{2\pi} }\,e^{-(x-\mu )^2/(2\sigma^2)}$
• pdf for the (standard) Cauchy distribution:
$f(x) = \frac{1}{\pi (1 + x^2)}$
Note that since $\int_{-\infty}^{\infty} f(x)\,dx = 1$, for any pdf f(x), the above formulæ can be used to produce other integral formulae for π.
A semi-interesting empirical approximation of π is based on Buffon's needle problem. Consider dropping a needle of length L repeatedly on a surface containing parallel lines drawn S units apart (with S > L). If the needle is dropped n times and x of those times it comes to rest crossing a line (x > 0), then one may approximate π using:
$\pi \approx \frac{2nL}{xS}$
[As a practical matter, this approximation is poor and converges very slowly.]
Another approximation of π is to throw points randomly into a quarter of a circle with radius 1 that is inscribed in a square of length 1. π, the area of a unit circle, is then approximated as 4*(points in the quarter circle) / (total points).
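A minimal Python sketch of this dart-throwing estimate (the sample size and seed are arbitrary):

```python
import random

def pi_by_random_points(samples=1_000_000, seed=42):
    # The fraction of uniform points in the unit square that land inside the
    # quarter circle x^2 + y^2 <= 1 approximates pi/4.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

print(pi_by_random_points())  # about 3.14; the error shrinks only like 1/sqrt(samples)
```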
### Efficient methods
In the early years of the computer, the first expansion of π to 100,000 decimal places was computed by Maryland mathematician Dr. Daniel Shanks and his team at the United States Naval Research Laboratory (N.R.L.) in 1961. Dr. Shanks's son Oliver Shanks, also a mathematician, states that there is no family connection to William Shanks, and in fact, his family's roots are in Central Europe.
Daniel Shanks and his team used two different power series for calculating the digits of π. For one it was known that any error would produce a value slightly high, and for the other, it was known that any error would produce a value slightly low. And hence, as long as the two series produced the same digits, there was a very high confidence that they were correct. The first 100,000 digits of π were published by the US Naval Research Laboratory.
None of the formulæ given above can serve as an efficient way of approximating π. For fast calculations, one may use a formula such as Machin's:
$\frac{\pi}{4} = 4 \arctan\frac{1}{5} - \arctan\frac{1}{239}$
together with the Taylor series expansion of the function arctan(x). This formula is most easily verified using polar coordinates of complex numbers, starting with
$(5+i)^4\cdot(-239+i)=-114244-114244i.$
Formulæ of this kind are known as Machin-like formulae.
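A short sketch of this route in Python, summing the arctan Taylor series with the decimal module so that round-off does not dominate; the working precision of 60 digits is an arbitrary choice.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60   # working precision; keep a few guard digits beyond what is trusted

def arctan_recip(n, tol=Decimal(10) ** -58):
    # arctan(1/n) = 1/n - 1/(3 n^3) + 1/(5 n^5) - ..., summed until terms fall below tol.
    x = Decimal(1) / n
    x_squared, term, total, k, sign = x * x, x, Decimal(0), 1, 1
    while term > tol:
        total += sign * term / k
        term *= x_squared
        k += 2
        sign = -sign
    return total

pi = 4 * (4 * arctan_recip(5) - arctan_recip(239))
print(pi)   # 3.14159265358979323846..., with roughly the first 55 digits reliable
```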
Many other expressions for π were developed and published by the incredibly intuitive Indian mathematician Srinivasa Ramanujan. He worked with mathematician Godfrey Harold Hardy in England for a number of years.
Extremely long decimal expansions of π are typically computed with the Gauss-Legendre algorithm and Borwein's algorithm; the Salamin-Brent algorithm which was invented in 1976 has also been used.
The first one million digits of π and 1/π are available from Project Gutenberg (see external links below). The record as at December 2002 by Yasumasa Kanada of Tokyo University stands at 1,241,100,000,000 digits, which were computed in September 2002 on a 64-node Hitachi supercomputer with 1 terabyte of main memory, which carries out 2 trillion operations per second, nearly twice as many as the computer used for the previous record (206 billion digits). The following Machin-like formulæ were used for this:
$\frac{\pi}{4} = 12 \arctan\frac{1}{49} + 32 \arctan\frac{1}{57} - 5 \arctan\frac{1}{239} + 12 \arctan\frac{1}{110443}$
K. Takano ( 1982).
$\frac{\pi}{4} = 44 \arctan\frac{1}{57} + 7 \arctan\frac{1}{239} - 12 \arctan\frac{1}{682} + 24 \arctan\frac{1}{12943}$
F. C. W. Störmer ( 1896).
These approximations have so many digits that they are no longer of any practical use, except for testing new supercomputers. (Normality of π will always depend on the infinite string of digits on the end, not on any finite computation.)
In 1997, David H. Bailey, Peter Borwein and Simon Plouffe published a paper (Bailey, 1997) on a new formula for π as an infinite series:
$\pi = \sum_{k = 0}^{\infty} \frac{1}{16^k} \left( \frac{4}{8k + 1} - \frac{2}{8k + 4} - \frac{1}{8k + 5} - \frac{1}{8k + 6}\right)$
This formula permits one to fairly readily compute the kth binary or hexadecimal digit of π, without having to compute the preceding k − 1 digits. Bailey's website contains the derivation as well as implementations in various programming languages. The PiHex project computed 64-bits around the quadrillionth bit of π (which turns out to be 0).
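A compact Python sketch of the digit-extraction idea behind this formula; it uses ordinary floating point, so it is only reliable for modest positions and is an illustration rather than an efficient implementation.

```python
def pi_hex_digits(d, count=8):
    # Return `count` hexadecimal digits of pi starting d+1 places after the
    # hexadecimal point, using the Bailey-Borwein-Plouffe formula.
    def partial(j):
        # Fractional part of the sum over k of 16^(d-k) / (8k + j).
        s = 0.0
        for k in range(d + 1):                       # k <= d: modular exponentiation
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = d + 1, 1.0
        while term > 1e-17:                          # small tail for k > d
            term = 16.0 ** (d - k) / (8 * k + j)
            s += term
            k += 1
        return s

    x = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        digits += "0123456789abcdef"[int(x)]
        x -= int(x)
    return digits

print(pi_hex_digits(0))   # '243f6a88' -- pi is 3.243f6a88... in hexadecimal
```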
Fabrice Bellard claims to have beaten the efficiency record set by Bailey, Borwein, and Plouffe with his formula to calculate binary digits of π :
$\pi = \frac{1}{2^6} \sum_{n=0}^{\infty} \frac{{(-1)}^n}{2^{10n}} \left( - \frac{2^5}{4n+1} - \frac{1}{4n+3} + \frac{2^8}{10n+1} - \frac{2^6}{10n+3} - \frac{2^2}{10n+5} - \frac{2^2}{10n+7} + \frac{1}{10n+9} \right)$
Other formulæ that have been used to compute estimates of π include:
$\frac{\pi}{2}= \sum_{k=0}^\infty\frac{k!}{(2k+1)!!}= 1+\frac{1}{3}\left(1+\frac{2}{5}\left(1+\frac{3}{7}\left(1+\frac{4}{9}(1+\cdots)\right)\right)\right)$
Newton.
$\frac{1}{\pi} = \frac{2\sqrt{2}}{9801} \sum^\infty_{k=0} \frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}}$
Ramanujan.
This converges extraordinarily rapidly. Ramanujan's work is the basis for the fastest algorithms used, as of the turn of the millennium, to calculate π.
$\frac{1}{\pi} = 12 \sum^\infty_{k=0} \frac{(-1)^k (6k)! (13591409 + 545140134k)}{(3k)!(k!)^3 640320^{3k + 3/2}}$
David Chudnovsky and Gregory Chudnovsky.
## Memorizing digits
Recent decades have seen a surge in the record number of digits memorized.
Even long before computers had calculated π, memorizing a record number of digits became an obsession for some people. The current world record is 100,000 decimal places, set on October 3, 2006 by Akira Haraguchi. The previous record (83,431) was set by the same person on July 2, 2005, and the record previous to that (42,195) was held by Hiroyuki Goto.
There are many ways to memorize π, including the use of piems, which are poems that represent π in a way such that the length of each word (in letters) represents a digit. Here is an example of a piem: How I need a drink, alcoholic in nature (or: of course), after the heavy lectures involving quantum mechanics. Notice how the first word has 3 letters, the second word has 1, the third has 4, the fourth has 1, the fifth has 5, and so on. The Cadaeic Cadenza contains the first 3834 digits of π in this manner. Piems are related to the entire field of humorous yet serious study that involves the use of mnemonic techniques to remember the digits of π, known as piphilology. See Pi mnemonics for examples. In other languages there are similar methods of memorization. However, this method proves inefficient for large memorizations of pi. Other methods include remembering "patterns" in the numbers (for instance, the "year" 1971 appears in the first fifty digits of pi).
## Open questions
The most pressing open question about π is whether it is a normal number -- whether any digit block occurs in the expansion of π just as often as one would statistically expect if the digits had been produced completely "randomly", and that this is true in every base, not just base 10. Current knowledge on this point is very weak; e.g., it is not even known which of the digits 0,…,9 occur infinitely often in the decimal expansion of π.
Bailey and Crandall showed in 2000 that the existence of the above mentioned Bailey-Borwein-Plouffe formula and similar formulæ imply that the normality in base 2 of π and various other constants can be reduced to a plausible conjecture of chaos theory. See Bailey's above mentioned web site for details.
It is also unknown whether π and e are algebraically independent. However it is known that at least one of πe and π + e is transcendental (see Lindemann–Weierstrass theorem).
## Naturality
In non-Euclidean geometry the sum of the angles of a triangle may be more or less than π radians, and the ratio of a circle's circumference to its diameter may also differ from π. This does not change the definition of π, but it does affect many formulæ in which π appears. So, in particular, π is not affected by the shape of the universe; it is not a physical constant but a mathematical constant defined independently of any physical measurements. Nonetheless, it occurs often in physics.
For example, consider Coulomb's law (SI units)
$F = \frac{1}{ 4 \pi \epsilon_0} \frac{\left|q_1 q_2\right|}{r^2}$.
Here, 4πr² is just the surface area of a sphere of radius r. In this form, it is a convenient way of describing the inverse-square relationship of the force at a distance r from a point source. It would of course be possible to describe this law in other ways, which are usually less convenient, though in some cases more so. If the Planck charge is used, it can be written as
$F = \frac{q_1 q_2}{r^2}$
which eliminates the explicit factor of π.
## Trivia
• March 14 (3/14 in U.S. date format) marks Pi Day which is celebrated by many lovers of π. Incidentally, it is also Einstein's birthday.
• On July 22, Pi Approximation Day is celebrated (22/7 - in European date format - is a popular approximation of π). Note some other days are celebrated as Pi Approximation Day.
• 355/113 (≈ 3.1415929) is sometimes jokingly referred to as "not π, but an incredible simulation!"
• Singer Kate Bush's 2005 album "Aerial" contains a song titled "π", in which she sings π to its 137th decimal place; however, for an unknown reason, she omits the 79th to 100th decimal places. She was preceded in this achievement by several years by a Swedish indie math lyrics artist under the moniker Matthew Matics, who loses track of the decimals at about the same point in the series.
• Swedish jazz musician Karl Sjölin once wrote, recorded and performed a song based on and called Pi. The song followed the decimals of Pi, with every number representing a certain note. For example 1=C, 2=D, 3=E etc. The song was then performed as a jazz song, thus making the harmony more liberal.
• John Harrison (1693–1776) (famed for winning the longitude prize), devised a meantone temperament musical tuning system derived from π, now called Lucy Tuning.
• Users of the A9.com search engine were eligible for an amazon.com program offering discounts of (π/2)% on purchases.
• The Heywood Banks song "Eighteen Wheels on a Big Rig" has the singer(s) count pi in the final verse; they reach "eight hundred billion" before going into the chorus.
• In 1932, Stanisław Gołąb proved that, if a unit disk is defined using a non-standard norm as "distance", the ratio of circumference to diameter will always be between 3 and 4; these values are attained if and only if the unit "circle" has the shape of a regular hexagon or a parallelogram respectively. See unit disc for details.
• John Squire (of The Stone Roses) mentions π in a song written for his second band The Seahorses called "Something Tells Me". The song was recorded in full by the full band, and appears on the bootleg of the never released second-album recordings. The song ends with the lyrics, "What's the secret of life? It's 3.14159265, yeah yeah!!"
• Hard 'n Phirm's fourth track on Horses and Grasses is "Pi" (and is preceded by "An Intro", which discusses the topic like an educational television program). Many digits are recited through it, and a video appeared online inspired by it.
• There is a building in the Googleplex numbered 3.14159...
• In 1897, the Indiana General Assembly passed a bill from which it could be deduced that pi was equal to 3.2 or other incorrect values. The Indiana Senate postponed the bill indefinitely, preventing it from becoming law.
• The Bloodhound Gang have a song called "Three Point One Four"
• Daniel Tammet holds the European record for remembering and recounting pi, recounting it to its 22,514th digit in just over 5 hours on 14 March 2004.
• In the Weebl and Bob episode "riot", Weebl states that he only fights "for 3.14 pies".
## What’s new in chemmacros v5.6?
Version 5.6 (2016/05/02) of chemmacros comes with a whole bunch of improvements and changes – no breaking changes, though. Here's an overview of the changes:
## New chemmacros module “polymers”
As of version 5.5 the chemmacros package includes a new module named polymers. Currently it mostly defines a few macros for usage in the \iupac command (provided by the mandatory nomenclature module) such as
## a new chemmacros – but how?
I have written earlier about a new concept for chemmacros and about the current development of chemmacros. I really want to go along with the idea of modularity; so much so, in fact, that I already have a draft completed (without a manual, and probably with a number of bugs, but still…). The big question is, though: how do I proceed without annoying all the users of chemmacros? If I am consistent about the modularity, then there need to be breaking changes.
## The Template Story
In the LaTeX community templates for documents are a recurring topic of discussion. I have written about it:
It all boils down to this: templates often contain bad code or don't follow good LaTeX practice, but at the same time many not-so-experienced users like to use templates for their documents. This is the source of many, many problems and questions in LaTeX forums and Q&A sites.
## Name change and other changes
Yesterday I’ve changed the name of this blog. The reason is that I’d like to change the focus of this blog away from chemistry in LaTeX.
More precisely I want LaTeX in general to be the topic. That doesn’t mean I won’t be writing about chemistry in LaTeX any more – of course I still will. But there are so much more topics worth writing about! Certainly one main focus will be news about the development of my packages (one reason for not focussing on the chemistry part: I have a number of packages not related to chemistry at all!).
As you know I have always had a long list of links on the right-hand or left-hand side of this blog. I have moved these to a dedicated page: the Link Library. I also added the possibility for you to submit new links which you think are missing from the list. Have a look at the new page and at the links and submit any LaTeX related link which is missing from the list. Enjoy!
## ideas, ideas, ideas
Lately I am full of ideas – not only concerning chemmacros and new packages as you might have gathered from recent posts – but also regarding this website.
## new package: elements
While working on chemmacros I found I wanted to use some functionality which is currently provided by the package bohr. But since I only wanted part of the functionality, I realised that the same was true for bohr: it needed the functionality but wasn't originally designed to provide it. It made sense to extract said functionality into a package of its own, extended with further functionality. Long story short: there's a new package called elements which will be sent to CTAN once bohr is updated to use the new package.
June 24th, 2015: elements has been sent to CTAN, bohr has been updated to v1.0.
## modern times
Like it if you like it :)
## modular chemmacros
OK, it’s been a while since I’ve posted anything. But now it’s time to discuss a few ideas I have about chemmacros. As you are probably aware if you are a user of the package: it provides lots of different stuff. At least in my experience it is rarely the case that you need all of its features.
## Carbohydrates
Since a few weeks I’m working on a package using chemfig as a backend that allows a simple yet flexible input syntax for typesetting carbohydrates. My draft at this point allows the following:
\documentclass{scrartcl}
\usepackage{carbohydrates}
\begin{document}
\glucose[model=chair,ring]
\end{document}
which gives:
## Old new friend
A few weeks ago Shinsaku Fujita submitted a new update of XyMTeX to CTAN. Why is this worth mentioning? Because of two (or three) reasons:
## Documents With Style
Last edited: 2015/07/10
People have asked me how I create the manuals of my packages, one has even asked me if I would write a document class similar to classicthesis. I am not going to do this but can provide my basic preamble settings. It is not very complicated, actually.
## chemmacros v4.0
My chemmacros bundle has reached version 4.0. The step to a new major version has been made for two reasons: 1) the bundle has been extended with a new package: chemgreek. 2) every sub package can now be loaded and used independently. In all versions 3.* the ghsystem package, the chemformula package and the chemmacros package have loaded each other which made them one single package, really. This is no longer true. While chemmacros still loads ghsystem and chemformula (and also the new package chemgreek) the same is not true for ghsystem, chemformula or chemgreek. If they’re loaded alone they won’t load any other package of the bundle.
# All Questions
45 views
### Statistical tests for PRNG that generates a sequence which is not binary
Is there a pratical application to PRNG that generates a sequence which is not a binary one? A ternary, quaternary sequence, for instance. If so, how can we test this? Is there any alternative test ...
65 views
### ECFP harder than ECDLP ?
Given two points $P$ and $Q = \sum_{i=1}^{n} x_i.P$ over $E_p(a, b)$ for $x_1,x_2,...,x_n \in \mathbb F_p$. The Elliptic Curve Factorization Problem (ECFP) is to find the points ...
10 views
### Why does HS1-SIV use the universal hash function the second time?
HS1-SIV uses HS1-Hash twice: once with the message as input, and the second time with the authentication tag as input. The second use seems somewhat strange to me: assuming that the core is a strong ...
128 views
### How to prove the security of block ciphers
I see very often proofs of security for asymmetric crypto algorithms, for instance, using reductions to known hard problems, or game based proofs... In the field of protocols (like authentication) it ...
55 views
### Can one extend the nonce of ChaCha/Salsa20 by XORing the extra bits with the key?
Can one extend the ChaCha and Salsa20 nonces by XORing the extra nonce bits with the key? My reasoning is as follows: the Rumba compression function allows attackers to supply any 48-byte input to ...
37 views
### Weil Pairing - Miller's Algorithm
I'm trying to implement Weil Pairing using Miller's algorithm. I have got couple of questions. How to select $m$? As stated in this link page 13, I interated from $1$ to order of point $P$ such that ...
19 views
### PHP Apple News HASH HHMAC
I am having a problem generating the signature key in PHP as described in the Apple News API documentation found here: ...
68 views
### Here I am like a [on hold]
I don't know if this is the place to ask, but I really don't know where to look for this information. I heard someone say "Here I am like a..." And that sentence reminded me of something I have ...
17 views
### TLS 1.2 PRF Cipher Suite Specification
I am upgrading our assembler SSL/TLS implementation to support TLS 1.2 and am reviewing RFC5246 specification for TLS 1.2. It states that the PRF is now part of the cipher suite specification but ...
1k views
### Does composing multiple substitution ciphers improve security?
Will using two substitution ciphers one after the another be more secure than using single substitution cipher?
49 views
### Is ChaCha12 considered 256-bit secure?
ChaCha20 is considered 256-bit secure (no attack faster than brute force). However, the best known cryptanalysis that I know of is on ChaCha7. That gives ChaCha20 a rather large security margin ...
71 views
### Is a PRF applied to a secure MAC also a secure MAC?
Suppose I apply a PRF to a secure MAC. Do I still have a secure MAC?
34 views
### Converting a 5-bit s-box to its bit-sliced format
I'm currently trying to convert a 5-bit sbox (the one from this cipher: http://primates.ae/wp-content/uploads/primatesv1.02.pdf) to its bit-sliced format (i.e. to a boolean network). Most papers ...
27 views
### Fully or partially Decrypting Data with Multiple keys
Is there any good paper or research carried out till now that a data is encrypted by a single key or onwer(user) and their are 2 or more decryption keys. One key can decrypt the entire data. Other ...
42 views
### Why are the bit lengths of keys and digests equal in Lamport signatures?
In Lamport's one time signature scheme: One way function to convert a pseudo random number private key to a public key takes $\{0,1\}^n$ and returns $\{0,1\}^n$. Cryptographic hash function to ...
24 views
### Algorithm to rotate values in a predefined manner
I will start with an example. Some beacon like estimote Beacons use something called secure ID. The ID transmitted by the beacon change every 10 minutes autonomously without contacting the server. ...
77 views
### Is it possible to combine two hash functions in such a way that cracking the constructed hash would require cracking the constituent hashes? [duplicate]
Suppose A is some arbitrary hash function, for example BCrypt or MD5. And B be some other arbitrary hash function, maybe SHA256 or SCrypt. Let ...
67 views
### How would an attacker perform an exhaustive key search on a block cipher using ECB mode?
Do you always assume from Kerchoff's principle, that the attacker has access to everything but the decryption key? That is, am I to assume that to perform an exhaustive key search, the attacker has a ...
68 views
### Is every point on an elliptic curve of a prime order group a generator?
If the order of elliptic group is prime then every point is a generator of that group. I tested the above statement on some elliptic curves and found it true. Does that really work on all curves? Is ...
75 views
### Is double AES-CBC encryption with the same key and IV unsafe?
If some plaintext encrypted twice with the same AES function (CBC-mode), same IV and same key, is it possible to retrieve either the key or the plaintext?
37 views
### Triple DES with 2 keys
Suppose triple DES is performed by choosing two keys $K_1$ and $K_2$ and computing $C = T (T (T (L, K_1), K_2), K_2)$. How to attack this modified version with a meet-in-the-middle attack, in which ...
50 views
### How to define order according to domain parameters in elliptic curve pairing groups
According to domain parameters, as an example Type 1 pairing domain parameters are ...
195 views
I was researching about hash, and I thought, If sites store passwords with hash algorithms, then can't this happen: User A has the password 'hello' User B finds out the hash code of the password of ...
28 views
If I want to generate a few one-time pads, is it OK to just read required number of bytes from /dev/urandom without weakening information-theoretical security?
90 views
### Probabalistic Polynomial-time Algorithms & One-way functions
I've been reading up on probabilistic polynomial-time algorithms and one-way functions, and I was hoping to get some guidance on the topic. A textbook I'm reading states the following for one of the ...
103 views
### If we should not reuse primes in DH, shouldn't we not reuse ECDH elliptic curve properties?
An article How is NSA breaking so much crypto? describes NSA's methods for breaking encryption. If a client and server are speaking Diffie-Hellman, they first need to agree on a large prime number ...
30 views
### How are the IVs of SHA512/256 and SHA512/224 calculated?
This seems odd to me, because there are a lot of resources out there saying that the initializer values of SHA512 and SHA384 are the first 64 bits of the fractional parts of the square roots of prime ...
102 views
### Is multiplicative blinding less secure than additive?
It's easy to see that additive blinding (e.g., $x+r$ for secret x and random r) is perfectly secure in a finite field (this is a one-time-pad) and statistically secure for $r$ uniformly distributed in ...
29 views
### How much does using the same $H$ for all messages weaken GCM?
How much is GCM weakened by using the same MAC key $H = E_K(0^*)$ for all messages that use the same key (which is what GCM actually does) instead of using $E_K(N||0_{32})$ (which is different for ...
50 views
### Which AES encryption do banks use? [on hold]
I'm trying to find what size keys banks use for symmetric and asymmetric encryption. I know that the symmetric one is smaller because they want it to be fast but I can't find any information on their ...
37 views
### Is it safe to use fast/small hash for key identity?
I have to roll database access keys often, but it's important to know which key each system is using in order to avoid unavailability. I want the systems to report that without exposing the key ...
79 views
### Difference left-or-right CPA security, IND-CPA security
I am trying to understand the notion of left-or-right-CPA (LOR-CPA) security for private-key encryption schemes introduced in my lecture. If I understood it correctly so far, the only difference to ...
127 views
### CP-ABE for threshold cryptography
I would like to know if there is any research work using CP-ABE for threshold cryptography? I want to know also if applying CP-ABE for threshold cryptography would overcome other techniques.
277 views
### To what extent is WhatsApp's statement on secure messaging realistic?
As those of you who use WhatsApp Messenger probably know, with their recent update chats and calls are now encrypted. Here's what WhatsApp claims: Messages you send to this chat and calls are now ...
329 views
### Blinding twice in RSA
I understand that if you have a message $m$, you can blind it by selecting a random $r$ and then multiplying $r^e\times m \pmod{n}$ Someone else then signs it with $d$, raising to the power of $d$: ...
17 views
### Create composite group for pairing in JPBC
We use bilinear map $S = (G, G_T , e(\cdot, \cdot))$ of composite order $n = s \times n'$ with two subgroups $G_s$ and $G_n'$ of $G$. Random generators $w \in G$, $g \in G_s$, and $φ \in G_n'$ are ...
49 views
### Is it safe to encrypt data using XOR along with CSPRNG seeded with a truly random number?
I was wondering a couple of days ago whether I could reverse the normal encryption order to produce a good AES based cipher variant. The method is as follows: Two users, Alice and Bob, each have a ...
51 views
### How distinct are the meanings of the terms “CSPRNG,” “DRBG” and “stream cipher”?
Are there universal, consensus definitions for the following terms? Cryptographically secure pseudo-random number generator ("CSPRNG") Deterministic random bit generator ("DRBG") Stream cipher ...
60 views
### Strength of $H(k\|H(m))$ as a MAC algorithm
What is the strength of $H(k \| H(m))$ compared to HMAC? Compared to $H(m \| k)$? What is the strength in bits of a given key/output size?
11 views
### Can anyone tell whether combinational has higher frequency of operation than sequential or vice-versa?
I have designed the SHA3 algorithm in 2 ways - combinational and sequential. The sequential design gives a frequency of operation of 700MHz while the combinational one gives 0.5MHz, so I want to ask ...
166 views
### Sorting over encrypted data with different symmetric keys
I'm working on a security project. I need to perform a sorting on the lists of encrypted integers and strings. The encryption used is symmetric. The clients send the encrypted data in a list to the ...
26 views
### Partial-Message Collisions on Iterated Hash Functions
In Cryptography Engineering by Ferguson et al it says the following is a problem with iterated hash functions: Because hashing $m$ and $m'$ leads to the same value, $h(m||X) = h(m'||X)$... for all ...
• The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear e+e- collider under development. For an optimal exploitation of its physics potential, CLIC is foreseen to be built and operated in a staged approach with three centre-of-mass energy stages ranging from a few hundred GeV up to 3 TeV. The first stage will focus on precision Standard Model physics, in particular Higgs and top-quark measurements. Subsequent stages will focus on measurements of rare Higgs processes, as well as searches for new physics processes and precision measurements of new states, e.g. states previously discovered at LHC or at CLIC itself. In the 2012 CLIC Conceptual Design Report, a fully optimised 3 TeV collider was presented, while the proposed lower energy stages were not studied to the same level of detail. This report presents an updated baseline staging scenario for CLIC. The scenario is the result of a comprehensive study addressing the performance, cost and power of the CLIC accelerator complex as a function of centre-of-mass energy and it targets optimal physics output based on the current physics landscape. The optimised staging scenario foresees three main centre-of-mass energy stages at 380 GeV, 1.5 TeV and 3 TeV for a full CLIC programme spanning 22 years. For the first stage, an alternative to the CLIC drive beam scheme is presented in which the main linac power is produced using X-band klystrons.
• ### CLIC Muon Sweeper Design (1603.00005)
Feb. 27, 2016 hep-ex, physics.acc-ph
There are several background sources which may affect the analysis of data and the detector performance at the CLIC project. One important background source for the detector performance is halo muons, which are generated along the beam delivery system (BDS). In order to reduce the muon background, magnetized muon sweepers have been used as shielding, as already described in a previous study for CLIC [1]. A realistic muon sweeper has been designed with OPERA. The design parameters of the muon sweeper have also been used to estimate the muon background reduction with the BDSIM Monte Carlo simulation code [2, 3].
The NA48/2 Collaboration at CERN has accumulated unprecedented statistics of rare kaon decays in the Ke4 modes: Ke4(+-) ($K^\pm \to \pi^+ \pi^- e^\pm \nu$) and Ke4(00) ($K^\pm \to \pi^0 \pi^0 e^\pm \nu$) with nearly one percent background contamination. The detailed study of form factors and branching ratios, based on these data, has been completed recently. The results bring new input to the description of low-energy strong interactions and to tests of Chiral Perturbation Theory (ChPT) and lattice QCD calculations. In particular, the new data support the ChPT prediction for a cusp in the $\pi^0\pi^0$ invariant mass spectrum at the two-charged-pion threshold for the Ke4(00) decay. New final results from an analysis of about 400 $K^\pm \to \pi^\pm \gamma \gamma$ rare decay candidates collected by the NA48/2 and NA62 experiments at CERN during low intensity runs with minimum bias trigger configurations are presented. The results include a model-independent decay rate measurement and fits to the ChPT description.
The NA62 experiment will begin taking data in 2015. Its primary purpose is a 10% measurement of the branching ratio of the ultra-rare kaon decay $K^+ \to \pi^+ \nu \bar{\nu}$, using the decay in flight of kaons in an unseparated beam with momentum 75 GeV/c. The detector and analysis technique are described here.
• The main characteristics of the COMPASS experimental setup for physics with hadron beams are described. This setup was designed to perform exclusive measurements of processes with several charged and/or neutral particles in the final state. Making use of a large part of the apparatus that was previously built for spin structure studies with a muon beam, it also features a new target system as well as new or upgraded detectors. The hadron setup is able to operate at the high incident hadron flux available at CERN. It is characterised by large angular and momentum coverages, large and nearly flat acceptances, and good two and three-particle mass resolutions. In 2008 and 2009 it was successfully used with positive and negative hadron beams and with liquid hydrogen and solid nuclear targets. This article describes the new and upgraded detectors and auxiliary equipment, outlines the reconstruction procedures used, and summarises the general performance of the setup.
• The NA48/2 Collaboration at CERN has accumulated and analysed unprecedented statistics of rare kaon decays in the $K_{e4}$ modes: $K_{e4}(+-)$ ($K^\pm \to \pi^+ \pi^- e^\pm \nu$) and $K_{e4}(00)$ ($K^\pm \to \pi^0 \pi^0 e^\pm \nu$) with nearly one percent background contamination. It leads to the improved measurement of branching fractions and detailed form factor studies. New final results from the analysis of 381 $K^\pm \to \pi^\pm \gamma \gamma$ rare decay candidates collected by the NA48/2 and NA62 experiments at CERN are presented. The results include a decay rate measurement and fits to Chiral Perturbation Theory (ChPT) description.
• ### Measurement of the branching ratio of the decay $\Xi^{0}\rightarrow \Sigma^{+} \mu^{-} \bar{\nu}_{\mu}$ (1212.3131)
Jan. 14, 2013 hep-ex
From the 2002 data taking with a neutral kaon beam extracted from the CERN-SPS, the NA48/1 experiment observed 97 $\Xi^{0}\rightarrow \Sigma^{+} \mu^{-} \bar{\nu}_{\mu}$ candidates with a background contamination of $30.8 \pm 4.2$ events. From this sample, the BR($\Xi^{0}\rightarrow \Sigma^{+} \mu^{-} \bar{\nu}_{\mu}$) is measured to be $(2.17 \pm 0.32_{\mathrm{stat}}\pm 0.17_{\mathrm{syst}})\times10^{-6}$.
• ### Empirical parameterization of the K+- -> pi+- pi0 pi0 decay Dalitz plot (1004.1005)
April 7, 2010 hep-ex
As first observed by the NA48/2 experiment at the CERN SPS, the $\pi^0\pi^0$ invariant mass ($M_{00}$) distribution from $K^\pm \to \pi^\pm \pi^0 \pi^0$ decay shows a cusp-like anomaly at $M_{00} = 2m_+$, where $m_+$ is the charged pion mass. An analysis to extract the $\pi\pi$ scattering lengths in the isospin $I=0$ and $I=2$ states, $a_0$ and $a_2$ respectively, has been reported recently. In the present work the Dalitz plot of this decay is fitted to a new empirical parameterization suitable for practical purposes, such as Monte Carlo simulations of $K^\pm \to \pi^\pm \pi^0 \pi^0$ decays.
• ### Summary of experimental studies at CERN on a positron source using crystal effects (physics/0506016)
June 2, 2005 hep-ex, physics.ins-det
A new kind of positron source for future linear colliders, in which the converter is a tungsten crystal oriented along the <111> axis, has been studied at CERN in the WA103 experiment. In such sources the photons which create the $e^+ e^-$ pairs result from channeling radiation and coherent bremsstrahlung. In this experiment electron beams of 6 and 10 GeV were sent on different kinds of targets: a 4 mm thick crystal, an 8 mm thick crystal and a compound target (4 mm crystal + 4 mm amorphous disk). An amorphous tungsten target 20 mm thick was also used for comparison with the 8 mm crystal. Tracks of outgoing charged particles were detected and analyzed by a drift chamber in a magnetic field. The energy and angle spectra of the positrons were obtained for energies up to 150 MeV and angles up to 30 degrees. The measured positron distribution in momentum space (longitudinal versus transverse) is also presented, giving a full momentum-space description of the source. Results on outgoing photons are also presented. A significant enhancement of both photon and positron production is clearly observed. At 10 GeV incident energy, the positron enhancement factor is 4 for the 4 mm crystal and about 2 for the 8 mm crystal. In addition, the simulation code for the crystal processes is validated by quite good agreement between the simulated and experimental spectra, both for positrons and photons.
• ### Experimental Determination of the Characteristics of a Positron Source Using Channeling (physics/0008036)
Aug. 11, 2000 physics.acc-ph
Numerical simulations and `proof of principle' experiments showed clearly the interest of using crystals as photon generators dedicated to intense positron sources for linear colliders. An experimental investigation, using a 10 GeV secondary electron beam of the SPS-CERN impinging on an axially oriented thick tungsten crystal, was prepared and operated between May and August 2000. After a short recall of the main features of positron sources using channeling in oriented crystals, the experimental set-up is described. A particular emphasis is put on the positron detector, made of a drift chamber partially immersed in a magnetic field. The enhancement in photon and positron production in the aligned crystal has been observed in the energy range 5 to 40 GeV for the incident electrons, in crystals of 4 and 8 mm as well as in a hybrid target. The first results concerning this experiment are presented hereafter.
In 2006 the world lost Arif Mardin, a classic (and classy) record producer and arranger who'd originally worked at Atlantic Records for over 30 years, producing hits for artists like The Bee Gees, Aretha Franklin, Bette Midler, Hall & Oates, and later working with singers like Norah Jones and Jewel. Recently the Grammy nominated documentary The Greatest Ears in Town: The Arif Mardin Story was released on DVD, and it's a loving tribute to the man and the producer. While I was watching this film I began to notice the presence of Joe Mardin, Arif's son, who acted as co-director (with Doug Biro), producer, and sometimes interviewer for the film. He even co-mixed the soundtrack with Arif Mardin's longtime engineer, Michael O'Reilly. I was curious what Joe's life was like growing up in the Mardin family, and how he'd also followed a career of music production, engineering, writing, arranging, conducting, and even drumming. I visited Joe at his Manhattan-based NuNoise Studio for a journey into remembering his father's career and how it has affected his own life.
Would you ever make another documentary on people you've met along the way?
It's funny. I love film, and I really kind of consider myself a little bit of a student of film. I read film theory, I watch silent films, and I really have a fascination for French new wave and the Italian neo-realists. I love film, and I'd love to make another film, but not the way we made my father's film, because I basically made it with my family's money, without any infrastructure. I had wonderful editors and my co-director and all that, but I didn't really have any business help. Just clearing the music alone, between myself, the one person who works with me, and my lawyer, was a lot of work. Then all the other stuff that followed, and promoting and dealing with it... even though I have a distributor, it took years from my life. It was an honor, and I'm very glad to be able to leave this document not only for my family, but for music and posterity. I couldn't do it the same way again. But making films is fascinating. My father always looked at making records as being a bit like a director in a way. In a film, it's the director's movie, and with a record, it's the artist's record, but you really are sort of directing musicians. Dealing with the script is when you're dealing with the songs. When you're dealing with the artists and musicians, you're dealing with the interpretation and the reading. There are really many analogies. The engineer is kind of like the cinematographer, or maybe you're engineering yourself and working with colors and framing and all those things. I think that making a film and a record are very analogous, at least conceptually. You really try to manage creative contributions from many different people and how their contribution touches the final product and vision. I think Walter Murch talked about that a lot in making films. He's one of my heroes, Coppola's sound editor and a brilliant guy. He talked about how the director is almost the immune system of the film. All of these artisans and craftspeople come in: editors, musicians, actors, writers, and they all put their fingerprint on the film, but the director is the final sort of arbiter or immune system, so to speak, as to whether all of those things will work or not work, and what the body sort of rejects as far as what that organ or film is. I think a producer functions that way, and obviously you have the artist in that situation. With an artist, you obviously have to look at their instincts, and that's very important. I'm going off in all sorts of directions, because I don't know what the future is. I don't know if anybody knows. There's certainly a lot of music going on. I think there's a lot of wonderful music and creativity going on everywhere. I just don't think there's a lot of it in the mainstream. There are definitely some good things in the mainstream. I don't think you can say that it's totally void, even with all of the horrible stuff that's going on.
Songs that are poppy and feel kind of fluffy and throwaway might still hold value to people a few years on.
Absolutely. I've never had a problem with great pop music, like what was on WABC AM radio when I was growing up. "Goodbye Yellow Brick Road," or "Crocodile Rock," if you want to be more bubblegum about it. I never had any problem with that, and I still don't. But it has to come from some kind of a passion. There has to be some kind of a love. If you love bubblegum pop music, make the best, most delicious confection of pop music that you possibly can that's really enjoyable, because at the end of the day, we're doing something for people's enjoyment as much as we are for the sake of making art. It's an interesting balance, isn't it?
One of the things I really want to talk about is arrangement. Gravitating into the mixing and the recording process, so much of what really works is arrangement still, even if you're just doing slight things with mixing.
You're absolutely right. It's so important. You can certainly put things out of proportion with all this technology, but you kind of have to know that's what you're doing. I remember Quincy [Jones] once said that he loves to cook, but if you put a little bit too much lemon in something it overwhelms the dish, and that's what the piccolo does. You have to know how to use it so that it doesn't cover up the whole texture. I think things like the triangle and certain brass instruments will do those kinds of things. Now can you EQ the hell out of it, compress it, and pan it off somewhere so it doesn't? Yeah, but you have to know that's what you're doing and why. Arrangement is so key, and it's become key to the point where some producers who aren't nuts-and-bolts orchestrators become really good at muting. Arrangement is super-important. Getting the form right is super-important too, even in this day and age of pop records being pretty much the same energy from beginning to end. It's not like it was 20 or 30 years ago when even a single had an arc in a certain way; sort of a beginning, middle, and end. I think the energy on most records today is the same if you happen to queue anywhere along its timeline. I still think getting the form is important, getting that feeling of how it ebbs and flows. That's part of arrangement for sure.
Yeah. I'm trying to get all my interviews to bring this up a little.
It's so important. I was watching some videos this morning as I was getting ready, and the '80s thing was on on VH1 Classic. "If it Isn't Love" comes on by New Edition. I had the sound off, but I turned the sound on for that one. I remembered liking it 25 years ago. It's so musical. It was musical then, but compared to most R&B records you hear today, there are these beautiful arpeggios and filigrees that take you into the chorus and stuff. Most people making records these days wouldn't even understand what scale that was. That was a cutting-edge record for that day, and that level of musicality is now almost like Mozart. Those beautiful musical arrangement things that end up having great production values, being great hooks, and having value in the marketplace come from the place of someone who has musical values. It's so important.
I find DAW-type recordings make it very rigid about what comes in on the chorus. It's always in and always out, a very 4/4, boxed-in deal.
Don't you think that people have been much too connected to their eyes, since the screen is there all the time?
I flick it off a lot during playbacks and mixing.
I've noticed that people say, "Oh, it's not really on the grid." I tell them not to look. Just listen. It's an interesting place that we've come to, because everything is so hyper-real now. There's an element of seeing producers or other peoples' session or work, and they feel like it's just good hygiene to tune everything. I think that people develop a fear that it's not going to sound radio-friendly if it's not perfect, but I don't know if that's what the public really wants. I'm not sure. There's a lot of loose tuning on those Adele songs, and the public reacted. They certainly didn't go crazy Auto-Tuning her vocals, and maybe that's a lesson in a certain way. She's selling humongous amounts of records.
I think it feels more human to people, compared to other pop stars at the moment.
Yes. You wonder... these are the questions I ask myself too. Should I change that? Will it work? What's really important to most listeners? Where is that place where it's going to make a difference to most people or not? I don't know. Our ears become hypersensitive and hyper professional with all this work. Is most of what we hear even important for what we're coming up with in the final product? There's no answer, of course.
This is a cool studio space.
Thank you. It works out really well. It's worked out quite well here, and other mix engineers have come in and used the studio too and have been very happy with what they've taken to mastering. The room has worked out very well, and it's quite small as you can see. Not the office area, but this is only about 600 square feet. It does raise an issue in terms of how many more years I can have a studio in midtown Manhattan. I don't know if that's going to be totally possible five years from now, but for the time being I'm here. I have projects and things I have to do, and there's always stuff coming up.
Is there any typical day for you?
No, and in a way I'm kind of lamenting that. I sort of wish there was. I'm so busy with so many different things, that I'm not here as much as I'd really like to be.
We've got a bunch of the same equipment. I like all the Chandler stuff, the Germanium pre and the TG2, the 610. Yeah, the TG-2's quite good.
These [UREI] 1176s are from Atlantic Studios. I'm missing an Altec, another one's in the shop, but this one, the Omnipressor and the other one are from the old Greene Street Studio in SoHo where I used to do a lot of work.
How did you end up with equipment from Atlantic's studios?
When they shut down in '89 they sold a bunch of things. Those were crazy times. I remember Arif bought the Pultecs and I bought the 1176s, and I think they sold them for $200 or $400 each or something, because at that time, nobody cared about those things. It was the height of the SSL thing, but there were microphones and things that we could have bought that we didn't even think about! I could have bought [Neumann] U 67s and M 49s for nothing. And Fairchild [limiters], because nobody cared about that stuff yet. Maybe in L.A. they did, but in New York, that was not important. Everybody was cutting on SSLs. I was one of the few people trying to make an SSL not sound like an SSL those days, by inserting a Pultec on the mix buss and stuff like that.
We both certainly have this experience of seeing how far you can push a singer to get takes and to see what they might be able to pull off.
That's such a good point. Unless you work with somebody a lot, you're not exactly sure. Then there's that Kubrick thing. Especially from what I understand in later years, he'd do scores of takes, but really to drive the actor crazy. Nicholson gives that performance in The Shining because Kubrick kept asking for more, but he wasn't really saying what he needed. He got this sort of insanity out of him. I'm not saying to do that with a singer. Or take Elia Kazan. He would say things to people on the set or manipulate people. They were fascinating methods. Maybe not the kindest methods sometimes, but I'm sure those actors look at their performances, like Rod Steiger in On the Waterfront, and think that they wouldn't have been able to do that if the director hadn't said something before the camera rolled. It's fascinating to look at directors for those kinds of things.
It's always something to consider. Many engineers have had to become producers. You have to graduate into that in a way, where you're really able to bring something to a project that maybe the next guy who's $10 less can't do.
That's totally true. It's so interesting, though. I'm sure you've had a lot of experience with this: dealing with bands as opposed to single artists. Bands are ecosystems. It's kind of like you might have great ideas for them, or a great idea for one particular thing, but you have to kind of filter it and see whether it's a great idea in the absolute, and whether a great idea in the absolute is great for this band, or whether it's just taking them away from who they are. It's always a question in a certain way. Maybe it is a great idea, and maybe it's the thing that takes them to the next place. Maybe it's who they should be. It's such an interesting place that you hold when you're a producer with a band.
You have to be so careful and examine the weak links.
At the end of the day, what makes this mass of moving molecules of air or whatever that we capture that comes into the room and satisfies either us or our target listener in terms of being a complete sonic, musical statement? What makes a record?
That's a big question.
Yeah. I think Arif would have all these ideas and end up with things that make you think, "Wow, I wouldn't have thought of that." It ends up being this unique thing. Like when he asked Barry Gibb if he wanted to take it up an octave, and he said he couldn't sing it full-voice, so he'd try it falsetto. Then you end up with this sound that nobody was expecting and that became the focus of their records for ten years after that.
That's some brilliant thinking.
Yeah. It's two in the morning, it's the fade of the record, and, "I need some ad-libs." It wasn't like, "How can I completely re-invent The Bee Gees and pop music?" No, they were in the trenches trying to make things work.
Yeah, that's when things happen, if you're ready for it.
Exactly. If you're ready for it.
# Graph partition
In mathematics, the graph partition problem is defined on data represented in the form of a graph G = (V,E), with V vertices and E edges, such that it is possible to partition G into smaller components with specific properties. For instance, a k-way partition divides the vertex set into k smaller components. A good partition is defined as one in which the number of edges running between separated components is small. Uniform graph partition is a type of graph partitioning problem that consists of dividing a graph into components, such that the components are of about the same size and there are few connections between the components. Important applications of graph partitioning include scientific computing, partitioning various stages of a VLSI design circuit and task scheduling in multi-processor systems.[1] Recently, the graph partition problem has gained importance due to its application for clustering and detection of cliques in social, pathological and biological networks. For a survey on recent trends in computational methods and applications see Buluc et al. (2013).[2]
## Problem complexity
Typically, graph partition problems fall under the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms.[3] However, uniform graph partitioning or a balanced graph partition problem can be shown to be NP-complete to approximate within any finite factor.[1] Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist,[4] unless P=NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.[4]
## Problem
Consider a graph G = (V, E), where V denotes the set of n vertices and E the set of edges. For a (k,v) balanced partition problem, the objective is to partition G into k components of at most size v·(n/k), while minimizing the capacity of the edges between separate components.[1] Also, given G and an integer k > 1, partition V into k parts (subsets) V1, V2, ..., Vk such that the parts are disjoint and have equal size, and the number of edges with endpoints in different parts is minimized. Such partition problems have been discussed in literature as bicriteria-approximation or resource augmentation approaches. A common extension is to hypergraphs, where an edge can connect more than two vertices. A hyperedge is not cut if all vertices are in one partition, and cut exactly once otherwise, no matter how many vertices are on each side. This usage is common in electronic design automation.
### Analysis
For a specific (k, 1 + ε) balanced partition problem, we seek a minimum-cost partition of G into k components, with each component containing at most (1 + ε)·(n/k) nodes. We compare the cost of this approximation algorithm to the cost of a (k,1) cut, wherein each of the k components must have exactly the same size of (n/k) nodes, thus being a more restricted problem. Thus,
$\max_i |V_i| \leq (1+\varepsilon)\left\lceil \frac{|V|}{k} \right\rceil.$
We already know that the (2,1) cut is the minimum bisection problem, which is NP-complete.[5] Next we consider the 3-partition problem with n = 3k, which is NP-complete even when the integers involved are bounded by a polynomial in n.[1] Now, suppose we had a finite-factor approximation algorithm for (k, 1)-balanced partition. Then either the 3-partition instance can be solved using the balanced (k,1) partition in G, or it cannot. If the 3-partition instance can be solved, then the (k, 1)-balanced partitioning problem in G can be solved without cutting any edge. Otherwise, if the 3-partition instance cannot be solved, the optimum (k, 1)-balanced partitioning in G will cut at least one edge. An approximation algorithm with a finite approximation factor has to differentiate between these two cases, and hence it would solve the 3-partition problem in polynomial time, which is impossible unless P = NP. Thus, the (k,1)-balanced partitioning problem has no polynomial-time approximation algorithm with a finite approximation factor unless P = NP.[1]
The planar separator theorem states that any n-vertex planar graph can be partitioned into roughly equal parts by the removal of O(√n) vertices. This is not a partition in the sense described above, because the partition set consists of vertices rather than edges. However, the same result also implies that every planar graph of bounded degree has a balanced cut with O(√n) edges.
## Graph partition methods
Since graph partitioning is a hard problem, practical solutions are based on heuristics. There are two broad categories of methods, local and global. Well-known local methods are the Kernighan–Lin algorithm and the Fiduccia–Mattheyses algorithm, which were the first effective local-search heuristics for 2-way cuts. Their major drawback is the arbitrary initial partitioning of the vertex set, which can affect the final solution quality. Global approaches rely on properties of the entire graph and do not depend on an arbitrary initial partition. The most common example is spectral partitioning, where a partition is derived from the spectrum of the adjacency matrix.
## Multi-level methods
A multi-level graph partitioning algorithm works by applying one or more stages. Each stage reduces the size of the graph by collapsing vertices and edges, partitions the smaller graph, then maps back and refines this partition of the original graph.[6] A wide variety of partitioning and refinement methods can be applied within the overall multi-level scheme. In many cases, this approach can give both fast execution times and very high quality results. One widely used example of such an approach is METIS,[7] a graph partitioner, and hMETIS, the corresponding partitioner for hypergraphs.[8]
## Spectral partitioning and spectral bisection
Consider a graph with adjacency matrix A, where an entry Aij indicates an edge between nodes i and j, and degree matrix D, a diagonal matrix in which each diagonal entry dii is the degree of node i. The Laplacian matrix is defined as L = D − A. Now, a ratio-cut partition for a graph G = (V, E) is defined as a partition of V into disjoint sets U and W such that the cost cut(U,W)/(|U|·|W|) is minimized.
In such a scenario, the second smallest eigenvalue (λ) of L, yields a lower bound on the optimal cost (c) of ratio-cut partition with c ≥ λ/n. The eigenvector (V) corresponding to λ, called the Fiedler vector, bisects the graph into only two communities based on the sign of the corresponding vector entry. Division into a larger number of communities can be achieved by repeated bisection or by using multiple eigenvectors corresponding to the smallest eigenvalues.[9] The examples in Figures 1,2 illustrate the spectral bisection approach.
Figure 1: The graph G = (5,4) is analysed for spectral bisection. The linear combination of the smallest two eigenvectors leads to [1 1 1 1 1]' having an eigenvalue = 0.
Figure 2: The graph G = (5,5) illustrates that the Fiedler vector in red bisects the graph into two communities, one with vertices {1,2,3} with positive entries in the vector space, and the other community has vertices {4,5} with negative vector space entries.
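To make the bisection step concrete, here is a minimal NumPy sketch; the toy graph (two triangles joined by a single edge) and the sign-based split are illustrative choices, not taken from the article.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1

D = np.diag(A.sum(axis=1))            # degree matrix
L = D - A                             # graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues returned in ascending order
fiedler = eigvecs[:, 1]               # eigenvector of the second smallest eigenvalue

part_a = [v for v in range(6) if fiedler[v] >= 0]
part_b = [v for v in range(6) if fiedler[v] < 0]
print(part_a, part_b)                 # the two triangles separate (sides may be swapped)
```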
Minimum cut partitioning, however, fails when the number of communities to be partitioned or the partition sizes are unknown. For instance, optimizing the cut size for free group sizes puts all vertices in the same community. Additionally, cut size may be the wrong thing to minimize, since a good division is not just one with a small number of edges between communities. This motivated the use of modularity (Q)[10] as a metric to optimize a balanced graph partition. The example in Figure 3 illustrates two instances of the same graph such that in (a) modularity (Q) is the partitioning metric and in (b) ratio-cut is the partitioning metric.
Figure 3: Weighted graph G may be partitioned to maximize Q in (a) or to minimize the ratio-cut in (b). We see that (a) is a better balanced partition, thus motivating the importance of modularity in graph partitioning problems.
Another objective function used for graph partitioning is conductance, which is the ratio between the number of cut edges and the volume of the smallest part. Conductance is related to electrical flows and random walks. The Cheeger bound guarantees that spectral bisection provides partitions with nearly optimal conductance; the quality of this approximation depends on the second smallest eigenvalue of the Laplacian, λ2.
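That definition translates directly into code; the sketch below assumes an unweighted, symmetric adjacency matrix such as the A from the NumPy example above, and the function name is an illustrative choice.

```python
def conductance(A, S):
    """Cut edges between S and its complement, divided by the smaller volume
    (volume = sum of vertex degrees on that side)."""
    n = A.shape[0]
    S = set(S)
    T = set(range(n)) - S
    cut = sum(A[i, j] for i in S for j in T)
    vol_S = sum(A[i].sum() for i in S)
    vol_T = sum(A[i].sum() for i in T)
    return cut / min(vol_S, vol_T)

print(conductance(A, [0, 1, 2]))      # 1 cut edge / volume 7 for the toy graph above
```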
## Other graph partition methods
Spin models have been used for clustering of multivariate data wherein similarities are translated into coupling strengths.[11] The properties of ground state spin configuration can be directly interpreted as communities. Thus, a graph is partitioned to minimize the Hamiltonian of the partitioned graph. The Hamiltonian (H) is derived by assigning the following partition rewards and penalties.
• Reward internal edges between nodes of same group (same spin)
• Penalize missing edges in same group
• Penalize existing edges between different groups
• Reward non-links between different groups.
Additionally, kernel-PCA-based spectral clustering can be cast in the form of a least-squares support vector machine framework, which makes it possible to project the data entries onto a kernel-induced feature space of maximal variance, implying a high separation between the projected communities.[12]
Some methods express graph partitioning as a multi-criteria optimization problem which can be solved using local methods expressed in a game theoretic framework where each node makes a decision on the partition it chooses.[13]
## Software tools
Chaco,[14] due to Hendrickson and Leland, implements the multilevel approach outlined above and basic local search algorithms. It also implements spectral partitioning techniques.
METIS[7] is a family of graph partitioning programs by Karypis and Kumar. Within this family, kMetis aims at greater partitioning speed, hMetis[8] applies to hypergraphs and aims at partition quality, and ParMetis[7] is a parallel implementation of the Metis graph partitioning algorithm.
PaToH[15] is another hypergraph partitioner.
Scotch[16] is a graph partitioning framework by Pellegrini. It uses recursive multilevel bisection and includes sequential as well as parallel partitioning techniques.
Jostle[17] is a sequential and parallel graph partitioning solver developed by Chris Walshaw. The commercialized version of this partitioner is known as NetWorks.
Party[18] implements the Bubble/shape-optimized framework and the Helpful Sets algorithm.
The software packages DibaP[19] and its MPI-parallel variant PDibaP[20] by Meyerhenke implement the Bubble framework using diffusion; DibaP also uses AMG-based techniques for coarsening and solving linear systems arising in the diffusive approach.
Sanders and Schulz released a graph partitioning package KaHIP[21] (Karlsruhe High Quality Partitioning) that implements for example flow-based methods, more-localized local searches and several parallel and sequential meta-heuristics.
The tools Parkway[22] by Trifunovic and Knottenbelt as well as Zoltan[23] by Devine et al. focus on hypergraph partitioning.
List of free open-source frameworks:

| Framework | License | Description |
| --- | --- | --- |
| Chaco | GPL | software package implementing spectral techniques and the multilevel approach |
| DiBaP | * | graph partitioning based on multilevel techniques, algebraic multigrid as well as graph-based diffusion |
| Jostle | * | multilevel partitioning techniques and diffusive load-balancing, sequential and parallel |
| KaHIP | GPL | several parallel and sequential meta-heuristics, guarantees the balance constraint |
| kMetis | Apache 2.0 | graph partitioning package based on multilevel techniques and k-way local search |
| Mondriaan | LGPL | matrix partitioner to partition rectangular sparse matrices |
| PaToH | BSD | multilevel hypergraph partitioning |
| Parkway | * | parallel multilevel hypergraph partitioning |
| Scotch | CeCILL-C | implements multilevel recursive bisection as well as diffusion techniques, sequential and parallel |
| Zoltan | BSD | hypergraph partitioning |
## References
1. Andreev, Konstantin; Räcke, Harald, (2004). "Balanced Graph Partitioning". Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures (Barcelona, Spain): 120–124. doi:10.1145/1007912.1007931. ISBN 1-58113-840-7.
2. ^ Buluc, Aydin; Meyerhenke, Henning; Safro, Ilya; Sanders, Peter; Schulz, Christian (2013). Recent Advances in Graph Partitioning. arXiv:1311.3144.
3. ^ Feldmann, Andreas Emil; Foschini, Luca (2012). "Balanced Partitions of Trees and Applications". Proceedings of the 29th International Symposium on Theoretical Aspects of Computer Science (Paris, France): 100–111.
4. ^ a b Feldmann, Andreas Emil (2012). "Fast Balanced Partitioning is Hard, Even on Grids and Trees". Proceedings of the 37th International Symposium on Mathematical Foundations of Computer Science (Bratislava, Slovakia).
5. ^ Garey, Michael R.; Johnson, David S. (1979). Computers and intractability: A guide to the theory of NP-completeness. W. H. Freeman & Co. ISBN 0-7167-1044-7.
6. ^ Hendrickson, B.; Leland, R. (1995). A multilevel algorithm for partitioning graphs. Proceedings of the 1995 ACM/IEEE conference on Supercomputing. ACM. p. 28.
7. ^ a b c Karypis, G.; Kumar, V. (1999). "A fast and high quality multilevel scheme for partitioning irregular graphs". SIAM Journal on Scientific Computing 20 (1): 359. doi:10.1137/S1064827595287997.
8. ^ a b Karypis, G. and Aggarwal, R. and Kumar, V. and Shekhar, S. (1997). Multilevel hypergraph partitioning: application in VLSI domain. Proceedings of the 34th annual Design Automation Conference. pp. 526–529.
9. ^ M. Naumov; T. Moon (2016). "Parallel Spectral Graph Partitioning". NVIDIA Technical Report. nvr-2016-001.
10. ^ Newman, M. E. J. (2006). "Modularity and community structure in networks". PNAS 103 (23): 8577–8696. doi:10.1073/pnas.0601602103. PMC 1482622. PMID 16723398.
11. ^ Reichardt, Jörg; Bornholdt, Stefan (Jul 2006). "Statistical mechanics of community detection". Phys. Rev. E 74 (1): 016110. doi:10.1103/PhysRevE.74.016110.
12. ^ Carlos Alzate; Johan A.K. Suykens (2010). "Multiway Spectral Clustering with Out-of-Sample Extensions through Weighted Kernel PCA". IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE Computer Society) 32 (2): 335–347. doi:10.1109/TPAMI.2008.292. ISSN 0162-8828. PMID 20075462.
13. ^ Kurve, Griffin, Kesidis (2011) "A graph partitioning game for distributed simulation of networks" Proceedings of the 2011 International Workshop on Modeling, Analysis, and Control of Complex Networks: 9 -16
14. ^ Bruce Hendrickson. "Chaco: Software for Partitioning Graphs".
15. ^ Ü. Catalyürek; C. Aykanat (2011). PaToH: Partitioning Tool for Hypergraphs.
16. ^ C. Chevalier; F. Pellegrini (2008). "PT-Scotch: A Tool for Efficient Parallel Graph Ordering". Parallel Computing 34 (6): 318–331. doi:10.1016/j.parco.2007.12.001.
17. ^ C. Walshaw; M. Cross (2000). "Mesh Partitioning: A Multilevel Balancing and Refinement Algorithm". Journal on Scientific Computing 22 (1): 63–80. doi:10.1137/s1064827598337373.
18. ^ R. Diekmann; R. Preis; F. Schlimbach; C. Walshaw (2000). "Shape-optimized Mesh Partitioning and Load Balancing for Parallel Adaptive FEM". Parallel Computing 26 (12): 1555–1581. doi:10.1016/s0167-8191(00)00043-0.
19. ^ H. Meyerhenke; B. Monien; T. Sauerwald (2008). "A New Diffusion-Based Multilevel Algorithm for Computing Graph Partitions". Journal of Parallel Computing and Distributed Computing 69 (9): 750–761. doi:10.1016/j.jpdc.2009.04.005.
20. ^ H. Meyerhenke (2013). Shape Optimizing Load Balancing for MPI-Parallel Adaptive Numerical Simulations. 10th DIMACS Implementation Challenge on Graph Partitioning and Graph Clustering. pp. 67–82.
21. ^ P. Sanders and C. Schulz (2011). Engineering Multilevel Graph Partitioning Algorithms. Proceedings of the 19th European Symposium on Algorithms (ESA). pp. 469–480.
22. ^ A. Trifunovic; W. J. Knottenbelt (2008). "Parallel Multilevel Algorithms for Hypergraph Partitioning". Journal of Parallel and Distributed Computing 68 (5): 563–581. doi:10.1016/j.jpdc.2007.11.002.
23. ^ K. Devine; E. Boman; R. Heaphy; R. Bisseling; Ü. Catalyurek (2006). Parallel Hypergraph Partitioning for Scientific Computing. Proceedings of the 20th International Conference on Parallel and Distributed Processing. pp. 124–124.
https://artofproblemsolving.com/wiki/index.php?title=Modular_arithmetic/Introduction&diff=next&oldid=5180
# Modular arithmetic/Introduction
Modular arithmetic is a special type of arithmetic that involves only integers.
## The Basic Idea
Let's use a clock as an example, except let's replace the $\displaystyle 12$ at the top of the clock with a $\displaystyle 0$. Starting at noon, the hour hand points in order to the following:
$\displaystyle 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, \ldots$
This is the way in which we count in modulo 12. When we add $\displaystyle 1$ to $\displaystyle 11$, we arrive back at $\displaystyle 0$. The same is true in any other modulus (modular arithmetic system). In modulo $\displaystyle 5$, we count
$\displaystyle 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, \ldots$
We can also count backwards in modulo 5. Any time we subtract 1 from 0, we get 4. So, the integers from $\displaystyle -12$ to $\displaystyle 0$, when written in modulo 5, are
$\displaystyle 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0,$
where $\displaystyle -12$ is the same as $\displaystyle 3$ in modulo 5. Because all integers can be expressed as $\displaystyle 0$, $\displaystyle 1$, $\displaystyle 2$, $\displaystyle 3$, or $\displaystyle 4$ in modulo 5, we give these five integers their own name: the modulo 5 residues.
While this may not seem all that useful at first, counting in this way can help us solve an enormous array of number theory problems much more easily!
## Congruence
While modulo 5 uses only 5 integers (0 through 4 inclusive), all other integers are said to be congruent to (or the same as) one of those 5 integers in modulo 5. We see how that works by writing the integers going both up and down from 0, then writing the integers modulo 5 below them, making sure that we follow the same counting procedure as we would with the hour hand ticking around the clock.
Given integers $a$, $b$, and $n$, with $n > 0$, we say that $a$ is congruent to $b$ modulo $n$, or $a \equiv b$ (mod $n$), if the difference ${a - b}$ is divisible by $n$.
For a given positive integer $n$, the relation $a \equiv b$ (mod $n$) is an equivalence relation on the set of integers. This relation gives rise to an algebraic structure called the integers modulo $n$ (usually known as "the integers mod $n$," or $\mathbb{Z}_n$ for short). This structure gives us a useful tool for solving a wide range of number-theoretic problems, including finding solutions to Diophantine equations, testing whether certain large numbers are prime, and even some problems in cryptology.
## Arithmetic Modulo n
### Useful Facts
Consider four integers ${a},{b},{c},{d}$ and a positive integer ${m}$ such that $a\equiv b\pmod {m}$ and $c\equiv d\pmod {m}$. In modular arithmetic, the following identities hold:
• Addition: $a+c\equiv b+d\pmod {m}$.
• Subtraction: $a-c\equiv b-d\pmod {m}$.
• Multiplication: $ac\equiv bd\pmod {m}$.
• Division: $\frac{a}{e}\equiv \frac{b}{e}\pmod {\frac{m}{\gcd(m,e)}}$, where $e$ is a positive integer that divides ${a}$ and $b$.
• Exponentiation: $a^e\equiv b^e\pmod {m}$ where $e$ is a positive integer.
#### Examples
• ${7}\equiv {1} \pmod {2}$
• $49^2\equiv 7^4\equiv (1)^4\equiv 1 \pmod {6}$
• $7a\equiv 14\pmod {49}\implies a\equiv 2\pmod {7}$
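A quick numerical check of these identities in Python, with arbitrarily chosen values of $a$, $b$, $c$, $d$, and $m$ (the choices below are ours, purely for illustration):

```python
# Numerical check of the identities above, with arbitrarily chosen values
# satisfying a = b (mod m) and c = d (mod m).
a, b, c, d, m = 17, 2, 23, 3, 5
assert (a - b) % m == 0 and (c - d) % m == 0

assert (a + c) % m == (b + d) % m        # addition
assert (a - c) % m == (b - d) % m        # subtraction
assert (a * c) % m == (b * d) % m        # multiplication
assert pow(a, 4, m) == pow(b, 4, m)      # exponentiation with e = 4
print("all identities hold for this example")
```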
### The Integers Modulo n
The relation $a \equiv b$ (mod $n$) allows us to divide the set of integers into sets of equivalent elements. For example, if $n = 3$, then the integers are divided into the following sets:
$\{ \ldots, -6, -3, 0, 3, 6, \ldots \}$
$\{ \ldots, -5, -2, 1, 4, 7, \ldots \}$
$\{ \ldots, -4, -1, 2, 5, 8, \ldots \}$
Notice that if we pick two numbers $a$ and $b$ from the same set, then $a$ and $b$ differ by a multiple of $3$, and therefore $a \equiv b$ (mod $3$).
We sometimes refer to one of the sets above by choosing an element from the set, and putting a bar over it. For example, the symbol $\overline{0}$ refers to the set containing $0$; that is, the set of all integer multiples of $3$. The symbol $\overline{1}$ refers to the second set listed above, and $\overline{2}$ the third. The symbol $\overline{3}$ refers to the same set as $\overline{0}$, and so on.
Instead of thinking of the objects $\overline{0}$, $\overline{1}$, and $\overline{2}$ as sets, we can treat them as algebraic objects -- like numbers -- with their own operations of addition and multiplication. Together, these objects form the integers modulo $3$, or $\mathbb{Z}_3$. More generally, if $n$ is a positive integer, then we can define
$\mathbb{Z}_n = \{\overline{0}, \overline{1}, \overline{2}, \ldots, \overline{n-1} \}$,
where for each $k$, $\overline{k}$ is defined by
$\overline{k} = \{ m \in \mathbb{Z} \mbox{ such that } m \equiv k \pmod{n} \}.$
### Addition, Subtraction, and Multiplication Mod n
We define addition, subtraction, and multiplication in $\mathbb{Z}_n$ according to the following rules:
$\overline{a} + \overline{b} = \overline{a+b}$ for all $a, b \in \mathbb{Z}$. (Addition)
$\overline{a} - \overline{b} = \overline{a-b}$ for all $a, b \in \mathbb{Z}$. (Subtraction)
$\overline{a} \cdot \overline{b} = \overline{ab}$ for all $a, b \in \mathbb{Z}$. (Multiplication)
So for example, if $n = 7$, then we have
$\overline{3} + \overline{2} = \overline{3+2} = \overline{5}$
$\overline{4} + \overline{4} = \overline{4+4} = \overline{8} = \overline{1}$
$\overline{4} \cdot \overline{3} = \overline{4 \cdot 3} = \overline{12} = \overline{5}$
$\overline{6} \cdot \overline{6} = \overline{6 \cdot 6} = \overline{36} = \overline{1}$
Notice that, in each case, we reduce to an answer of the form $\overline{k}$, where $0 \leq k < 7$. We do this for two reasons: to keep possible future calculations as manageable as possible, and to emphasize the point that each expression takes one of only seven (or in general, $n$) possible values. (Some people find it useful to reduce an answer such as $\overline{5}$ to $\overline{-2}$, which is negative but has a smaller absolute value.)
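The same reductions can be reproduced in Python, where the `%` operator picks the representative in the range $0 \leq k < n$:

```python
n = 7
print((3 + 2) % n)    # 5
print((4 + 4) % n)    # 1
print((4 * 3) % n)    # 5
print((6 * 6) % n)    # 1
```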
#### The Natural Appeal of Modular Arithmetic
Observe that we use modular arithmetic even when solving some of the most basic, everyday problems. For example:
Cody is cramming for an exam that will be held at 2 PM. It is the morning of the day of the exam, and Cody did not get any sleep during the night. He knows that it will take him exactly one hour to get to school from the time he wakes up, and he insists upon getting at least five hours of sleep. At what time in the morning should Cody stop studying and go to sleep?
We know that the hours of the day are numbered from $1$ to $12$, with hours having the same number if and only if they are a multiple of $12$ hours apart. So we can use subtraction mod $12$ to answer this question.
We know that since Cody needs five hours of sleep plus one hour to get to school, he must stop studying six hours before the exam. We can find out what time this is by performing the subtraction
$\overline{2} - \overline{6} = \overline{-4} = \overline{8}.$
So Cody must quit studying at 8 AM.
Of course, we are able to perform calculations like this routinely without a formal understanding of modular arithmetic. One reason for this is that the way we keep time gives us a natural model for addition and subtraction in $\mathbb{Z}_n$: a "number circle." Just as we model addition and subtraction by moving along a number line, we can model addition and subtraction mod $n$ by moving along the circumference of a circle. Even though most of us never learn about modular arithmetic in school, we master this computational model at a very early age.
#### A Word of Caution
Because of the way we define operations in $\mathbb{Z}_n$, it is important to check that these operations are well-defined. This is because each of the sets that make up $\mathbb{Z}_n$ contains many different numbers, and therefore has many different names. For example, observe that in $\mathbb{Z}_7$, we have $\overline{1} = \overline{8}$ and $\overline{2} = \overline{9}$. It is reasonable to expect that if we perform the addition $\overline{8} + \overline{9}$, we should get the same answer as if we compute $\overline{1} + \overline{2}$, since we are simply using different names for the same objects. Indeed, the first addition yields the sum $\overline{17} = \overline{3}$, which is the same as the result of the second addition.
The "Useful Facts" above are the key to understanding why our operations yield the same results even when we use different names for the same sets. The task of checking that an operation or function is well-defined, is one of the most important basic techniques in abstract algebra.
### Computation of Powers Mod n
The "exponentiation" property given above allows us to perform rapid calculations modulo $n$. Consider, for example, the problem
What are the tens and units digits of $7^{1942}$?
We could (in theory) solve this problem by trying to compute $7^{1942}$, but this would be extremely time-consuming. Moreover, it would give us much more information than we need. Since we want only the tens and units digits of the number in question, it suffices to find the remainder when the number is divided by $100$. In other words, all of the information we need can be found using arithmetic mod $100$.
We begin by writing down the first few powers of $\overline{7}$:
$\overline{7}, \overline{49}, \overline{43}, \overline{1}, \overline{7}, \overline{49}, \overline{43}, \overline{1}, \ldots$
A pattern emerges! We see that $7^4 = 2401 \equiv 1$ (mod $100$). So for any positive integer $k$, we have $7^{4k} = (7^4)^k \equiv 1^k \equiv 1$ (mod $100$). In particular, we can write
$7^{1940} = 7^{4 \cdot 485} \equiv 1$ (mod $100$).
By the "multiplication" property above, then, it follows that
$7^{1942} = 7^{1940} \cdot 7^2 \equiv 1 \cdot 7^2 \equiv 49$ (mod $100$).
Therefore, by the definition of congruence, $7^{1942}$ differs from $49$ by a multiple of $100$. Since both integers are positive, this means that they share the same tens and units digits. Those digits are $4$ and $9$, respectively.
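As a sanity check (not the method used above), Python's built-in three-argument pow performs exactly this kind of modular exponentiation:

```python
# Sanity check with Python's built-in three-argument pow (modular exponentiation).
print(pow(7, 1942, 100))    # 49
```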
#### A General Algorithm
In the example above, we were fortunate to find a power of $7$ -- namely, $7^4$ -- that is congruent to $1$ mod $100$. What if we aren't this lucky? Suppose we want to solve the following problem:
What are the tens and units digits of $13^{404}$?
Again, we will solve this problem by computing $\overline{13}^{404}$ modulo $100$. The first few powers of $\overline{13}$ are
$\overline{13}, \overline{69}, \overline{97}, \overline{61}, \overline{93}, \ldots$
This time, no pattern jumps out at us. Is there a way we can find the $404^{th}$ power of $\overline{13}$ without taking this list all the way out to the $404^{th}$ term -- or even without patiently waiting for the list to yield a pattern?
Suppose we condense the list we started above; and instead of writing down all powers of $\overline{13}$, we write only the powers $\overline{13}^k$, where $k$ is a power of $2$. We have
$\overline{13}^1 = \overline{13}$
$\overline{13}^2 = \overline{69}$
$\overline{13}^4 = \overline{69}^2 = \overline{61}$
$\overline{13}^8 = \overline{61}^2 = \overline{21}$
$\overline{13}^{16} = \overline{21}^2 = \overline{41}$
$\overline{13}^{32} = \overline{41}^2 = \overline{81}$
$\overline{13}^{64} = \overline{81}^2 = \overline{61}$
$\overline{13}^{128} = \overline{61}^2 = \overline{21}$
$\overline{13}^{256} = \overline{21}^2 = \overline{41}$
(Observe that this process yields a pattern of its own, if we carry it out far enough!)
Now, observe that, like any positive integer, $404$ can be written as a sum of powers of two:
$404 = 256 + 128 + 16 + 4$
We can now use this powers-of-two expansion to compute $\overline{13}^{404}$:
$\overline{13}^{404} = \overline{13}^{256} \cdot \overline{13}^{128} \cdot \overline{13}^{16} \cdot \overline{13}^4 = \overline{41} \cdot \overline{21} \cdot \overline{41} \cdot \overline{61} = \overline{61}.$
So the tens and units digits of $13^{404}$ are $6$ and $1$, respectively.
We can use this method to compute $M^e$ modulo $n$, for any integers $M$ and $e$, with $e > 0$. The beauty of this algorithm is that the process takes, at most, approximately $2 \log_2 e$ steps -- at most $\log_2 e$ steps to compute the values $\overline{M}^k$ for $k$ a power of two less than $e$, and at most $\log_2 e$ steps to multiply the appropriate powers of $\overline{M}$ according to the binary representation of $e$.
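A minimal Python sketch of this square-and-multiply idea (the function name `power_mod` is ours): it walks through the binary digits of the exponent from least significant to most significant rather than writing out the powers-of-two expansion explicitly, but it performs the same squarings and multiplications.

```python
def power_mod(M, e, n):
    """Compute M**e mod n by repeated squaring: walk through the binary
    digits of e, squaring the base at each step and multiplying it in
    whenever the corresponding digit is 1."""
    result = 1
    base = M % n
    while e > 0:
        if e & 1:                       # current binary digit of e is 1
            result = (result * base) % n
        base = (base * base) % n        # M^(2^k) -> M^(2^(k+1)), reduced mod n
        e >>= 1
    return result

print(power_mod(13, 404, 100))   # 61, matching the worked example above
print(power_mod(7, 1942, 100))   # 49
```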
This method can be further refined using Euler's Totient Theorem.
## Applications of Modular Arithmetic
Modular arithmetic is an extremely flexible problem solving tool. The following topics are just a few applications and extensions of its use:
https://socratic.org/questions/how-do-you-factor-2y-4-3y-3-9y-2
# How do you factor 2y^4 + 3y^3 - 9y^2?
May 8, 2015
You can factor $2 {y}^{4} + 3 {y}^{3} - 9 {y}^{2}$ by grouping out ${y}^{2}$.
The ${y}^{2}$ is present in all terms, so you can pull it out like so:
${y}^{2} \left(2 {y}^{2} + 3 y - 9\right)$
May 8, 2015
$2 {y}^{4} + 3 {y}^{3} - 9 {y}^{2}$
Extract the obvious common factor from each term
${y}^{2} \left(2 {y}^{2} + 3 y - 9\right)$
The quadratic factors further: $2 {y}^{2} + 3 y - 9 = \left(2 y - 3\right) \left(y + 3\right)$, so the complete factorisation is ${y}^{2} \left(2 y - 3\right) \left(y + 3\right)$
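For readers who want to double-check the factorisation, a one-line verification with sympy (assuming sympy is available):

```python
from sympy import symbols, factor

y = symbols('y')
print(factor(2*y**4 + 3*y**3 - 9*y**2))   # y**2*(y + 3)*(2*y - 3)
```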
https://www.shaalaa.com/question-bank-solutions/find-lcm-hcf-following-integers-applying-prime-factorisation-method-40-36-126-fundamental-theorem-arithmetic_61724
# Find the LCM and HCF of the Following Integers by Applying the Prime Factorisation Method: 40, 36 and 126 - Mathematics
Find the LCM and HCF of the following integers by applying the prime factorisation method:
40, 36 and 126
#### Solution
40, 36 and 126
Let us first find the factors of 40, 36 and 126
$40 = 2^3 \times 5$
$36 = 2^2 \times 3^2$
$126 = 2 \times 3^2 \times 7$
L.C.M. of 40, 36 and 126 $= 2^3 \times 3^2 \times 5 \times 7 = 2520$
H.C.F. of 40, 36 and 126 $= 2$
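A short Python sketch of the same procedure (a generic helper of our own, not part of the original solution), computing the HCF and LCM from the prime factorisations by taking the minimum and maximum exponent of each prime:

```python
from collections import Counter

def prime_factors(n):
    """Prime factorisation of n as a Counter {prime: exponent}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def hcf_lcm(numbers):
    facts = [prime_factors(n) for n in numbers]
    primes = set().union(*facts)
    hcf = lcm = 1
    for p in primes:
        exps = [f[p] for f in facts]    # exponent of p in each number (0 if absent)
        hcf *= p ** min(exps)
        lcm *= p ** max(exps)
    return hcf, lcm

print(hcf_lcm([40, 36, 126]))   # (2, 2520)
```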
Concept: Fundamental Theorem of Arithmetic
#### APPEARS IN
RD Sharma Class 10 Maths
Chapter 1 Real Numbers
Exercise 1.4 | Q 2.4 | Page 39
https://okpanico.wordpress.com/2016/10/04/octave-grafici-xii-76/
## Octave – Plots – XII – 76
I copy here, continuing from here.
Using the interpreter property
All text objects—such as titles, labels, legends, and text—include the property "interpreter" that determines the manner in which special control sequences in the text are rendered.
The interpreter property can take three values: "none", "tex", "latex". If the interpreter is set to "none" then no special rendering occurs—the displayed text is a verbatim copy of the specified text. Currently, the "latex" interpreter is not implemented and is equivalent to "none".
The "tex" option implements a subset of TeX functionality when rendering text. This allows the insertion of special glyphs such as Greek characters or mathematical symbols. Special characters are inserted by using a backslash (\) character followed by a code, as shown in Table 15.1 [qui, non la copio].
Note that for on-screen display the interpreter property is honored by all graphics toolkits. However for printing, only the "gnuplot" toolkit renders TeX instructions.
Besides special glyphs, the formatting of the text can be changed within the string by using the codes
• \bf Bold font
• \it Italic font
• \sl Oblique Font
• \rm Normal font
These codes may be used in conjunction with the { and } characters to limit the change to a part of the string. For example, xlabel ('{\bf H} = a {\bf V}') where the character ‘a‘ will not appear in bold font. Note that to avoid having Octave interpret the backslash character in the strings, the strings themselves should be in single quotes.
It is also possible to change the fontname and size within the text
• \fontname{fontname} Specify the font to use
• \fontsize{size} Specify the size of the font to use
The color of the text may also be changed inline using either a string (e.g., "red") or numerically with a Red-Green-Blue (RGB) specification (e.g., [1 0 0], also red).
• \color{color} Specify the color as a string
• \color[rgb]{R G B} Specify the color numerically
Finally, superscripting and subscripting can be controlled with the ‘^‘ and ‘_‘ characters. If the ‘^‘ or ‘_‘ is followed by a { character, then all of the block surrounded by the { } pair is superscripted or subscripted. Without the { } pair, only the character immediately following the ‘^‘ or ‘_‘ is changed.
And here comes the famous Table, which I do not reproduce; it is here.
A complete example showing the capabilities of the extended text
x = 0:0.01:3;
plot (x, erf (x));
hold on;
plot (x,x,"r");
axis ([0, 3, 0, 1]);
text (0.65, 0.6175, strcat ('\leftarrow x = {2/\surd\pi', ' {\fontsize{16}\int_{\fontsize{8}0}^{\fontsize{8}x}}', ' e^{-t^2} dt} = 0.6175'))
I continue, moving on to here.
Printing and saving plots
The print command allows you to send plots to you printer and to save plots in a variety of formats. For example, print -dpsc prints the current figure to a color PostScript printer. And, print -deps foo.eps saves the current figure to an encapsulated PostScript file called foo.eps.
Despite the warning, the file foo.eps is produced (13.4 MB), this one
It is also produced with the wrong extension, obviously still PostScript. To obtain the .png the command is print -dpng foo.png (572 kB).
The different graphic toolkits have different print capabilities. In particular, the OpenGL based toolkits such as fltk do not support the "interpreter" property of text objects. This means special symbols drawn with the "tex" interpreter will appear correctly on-screen but will be rendered with interpreter "none" when printing. Switch graphics toolkits for printing if this is a concern.
Function File: print ()
Function File: print (options)
Function File: print (filename, options)
Function File: print (h, filename, options)
Print a plot, or save it to a file.
Both output formatted for printing (PDF and PostScript), and many bitmapped and vector image formats are supported.
filename defines the name of the output file. If the file name has no suffix, one is inferred from the specified device and appended to the file name. If no filename is specified, the output is sent to the printer.
h specifies the handle of the figure to print. If no handle is specified the current figure is used.
For output to a printer, PostScript file, or PDF file, the paper size is specified by the figure’s papersize property. The location and size of the image on the page are specified by the figure’s paperposition property. The orientation of the page is specified by the figure’s paperorientation property.
The width and height of images are specified by the figure’s paperposition(3:4) property values.
The print command supports many options (segue tbella kilometrica che non riporto).
Examples
Print to a file using the pdf device.
figure (1);
clf ();
surf (peaks);
print figure1.pdf
Print to a file using jpg device.
clf ();
surf (peaks);
print -djpg figure2.jpg
Print to a file using png device.
clf ();
surf (peaks);
print -dpng ping.png
Print to printer named PS_printer using ps format.
clf ();
surf (peaks);
print -dpswrite -PPS_printer
Obviously PS_printer must exist; I get this:
Function File: saveas (h, filename)
Function File: saveas (h, filename, fmt)
Save graphic object h to the file filename in graphic format fmt.
fmt should be one of the following formats: ps, eps, jpg, png, emf, pdf.
All device formats specified in print may also be used. If fmt is omitted it is extracted from the extension of filename. The default format is "pdf".
Function File: orient (orientation)
Function File: orient (hfig, orientation)
Function File: orientation = orient ()
Function File: orientation = orient (hfig)
Query or set the print orientation for figure hfig.
Valid values for orientation are "portrait", "landscape", and "tall".
The "landscape" option changes the orientation so the plot width is larger than the plot height. The "paperposition" is also modified so that the plot fills the page, while leaving a 0.25 inch border.
The "tall" option sets the orientation to "portrait" and fills the page with the plot, while leaving a 0.25 inch border.
The "portrait" option (default) changes the orientation so the plot height is larger than the plot width. It also restores the default "paperposition" property.
When called with no arguments, return the current print orientation.
If the argument hfig is omitted, then operate on the current figure returned by gcf.
clf ();
h = surf (peaks);
saveas (h, "p-ls.png", "png")
clf ();
orient ("landscape")
h = surf (peaks);
saveas (h, "ls.png", "png")
clf ();
orient ("portrait")
h = surf (peaks);
saveas (h, "po.png", "png")
Yes, they look inverted to me, but that is how it is (apparently) 😳
print and saveas are used when work on a plot has finished and the output must be in a publication-ready format. During intermediate stages it is often better to save the graphics object and all of its associated information so that changes—to colors, axis limits, marker styles, etc.—can be made easily from within Octave. The hgsave/hgload commands can be used to save and re-create a graphics object.
Here I disagree, but maybe it is just me…
Function File: hgsave (filename)
Function File: hgsave (h, filename)
Function File: hgsave (h, filename, fmt)
Save the graphics handle h to the file filename in the format fmt.
If unspecified, h is the current figure as returned by gcf.
When filename does not have an extension the default filename extension .ofig will be appended.
If present, fmt should be one of the following:
• -binary, -float-binary
• -hdf5, -float-hdf5
• -V7, -v7, -7, -mat7-binary
• -V6, -v6, -6, -mat6-binary
• -text
• -zip, -z
Function File: h = hgload (filename)
Load the graphics object in filename into the graphics handle h.
If filename has no extension, Octave will try to find the file with and without the standard extension of .ofig.
and then
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-7-exponents-and-exponential-functions-7-5-division-properties-of-exponents-practice-and-problem-solving-exercises-page-444/34
## Algebra 1
$\dfrac{1}{a^3}$
RECALL: $\left(\dfrac{x}{y}\right)^n=\dfrac{x^n}{y^n}, y \ne 0$ Use the rule above to have: $=\dfrac{1^3}{a^3} \\=\dfrac{1}{a^3}$
http://mathoverflow.net/feeds/question/106408
# Weak admissibility in algebraic families

Asked by Daniel Larsson (MathOverflow, 2012-09-05).

Let $M$ be an algebraic family of isocrystals over a base scheme $R/\mathbb{Q}_p$ (**not** a rigid analytic space).

The question is: is the set of weakly admissible points (i.e., the points $r\in R$ over which $M$ is weakly admissible) Zariski closed or open (or neither)?

The answer might be very simple and/or well-known, but I haven't been able to figure it out.

Thanks for any help!
https://web2.0calc.com/questions/bimdas-help
# bimdas help
-(-3) squared root - squared root, to the root of 27
Guest Mar 30, 2017
#1
-(-3) squared root - squared root, to the root of 27
I do not know what you mean ://
Maybe......
$$\sqrt{-(-3)}-\sqrt{27}\\ =\sqrt{3}-3\sqrt{3}\\ =-2\sqrt3$$
Melody Mar 30, 2017
https://www.physicsforums.com/threads/eigenstates-of-hamiltonian.671332/
# Homework Help: Eigenstates of Hamiltonian
1. Feb 12, 2013
### opaka
1. The problem statement, all variables and given/known data
The hamiltonian of a simple anti-ferromagnetic dimer is given by
$H = J\,\vec{S}_1 \cdot \vec{S}_2 - \mu B\,(S_{1z} + S_{2z})$
find the eigenvalues and eigenvectors of H.
2. Relevant equations
3. The attempt at a solution
The professor gave the hint that the eigenstates are those of $\vec{S}^2 = (\vec{S}_1 + \vec{S}_2)^2$, $\vec{S}_1^2$, $\vec{S}_2^2$, and $S_z$. So I know I should have four eigenvalues, but I still have no idea how to get this into a form that I recognize as being able to get eigenvalues from (a matrix, a DiffEQ, etc.).
Last edited: Feb 12, 2013
2. Feb 12, 2013
### vela
Staff Emeritus
The hint is to help you to deal with the $\vec{S}_1\cdot\vec{S}_2$ term. Rewrite that term in terms of $\vec{S}^2$, $\vec{S}_1^2$, and $\vec{S}_2^2$.
3. Feb 12, 2013
### opaka
When I do that, and apply the spin operators $\vec{S}^2\,|s,s_z\rangle = s(s+1)\,|s,s_z\rangle$ and $S_z\,|s,s_z\rangle = s_z\,|s,s_z\rangle$,
I get
$H = \frac{J}{2}\left(s(s+1) - s_1(s_1+1) - s_2(s_2+1)\right) - \mu B\,(s_{1z} + s_{2z})$
Is this correct?
4. Feb 12, 2013
### vela
Staff Emeritus
Yes. Now you can calculate what H does to simultaneous eigenstates of $\vec{S}^2$, $S_z$, $\vec{S}_1^2$, and $\vec{S}_2^2$. Recall that these are exactly the states that you got from adding angular momenta.
5. Feb 12, 2013
### opaka
I get four answers : J/4 + μB, J/4 -μB, J/4 and - 3J/4. Is this right? These look like the singlet and triplet state energies, but with an added B term.
6. Feb 12, 2013
### vela
Staff Emeritus
Yeah, that looks right.
7. Feb 12, 2013
### opaka
Thanks so much Vela! you've been a wonderful help.
http://mathhelpforum.com/differential-geometry/175943-discrete-metric-print.html
# Discrete metric
• March 26th 2011, 05:31 PM
Connected
Discrete metric
Let $(E,d)$ be the discrete space.
Compute $S(a,r)$ for $r>1,$ and $S(a,r)$ for $r\le1.$
I know that $d(x,y)$ is defined to be $1$ for $x\ne y$ and $0$ for $x=y,$ but I don't know exactly how to use that to solve the problem.
• March 26th 2011, 05:40 PM
Drexel28
Quote:
Originally Posted by Connected
Let $(E,d)$ be the discrete space.
Compute $S(a,r)$ for $r\ge1,$ and $S(a,r)$ for $r\le1.$
I know that $d(x,y)$ is defined to be $1$ for $x\ne y$ and $0$ for $x=y,$ but I don't know exactly how to use that to solve the problem.
I assume that $S(a,r)$ is the 'sphere' (that is an uncommon notation) of radius $r$ centered at $a$. I think you're overthinking the problem. $S(a,r)$ has a simple formulation. Think about fixing this one point $a$; then one can think (purely heuristically) of the situation as being analogous to $a$ being the origin in $\mathbb{R}^2$ and $E-\{a\}$ being the unit circle. With this in mind, is the solution clear?
• March 26th 2011, 05:43 PM
Connected
No, I don't get it well.
I understand a bit your reasoning, but is there a way to make it analytically?
If $S(a,r)=\{x\in E\mid d(x,a)<r\}$, then what happens in the two cases $r<1$ and $r\ge1$? In particular, what do you know if $d(x,y)<1$ in this metric?
http://planetcalc.com/4406/
# The waves and the wind. Calculation of wave characteristics
A user sent us a request asking to create a calculator for the "calculation of the wave height and intervals between waves (frequency)".
Intuition suggests that there is some relationship between the force of the wind and the waves. Since I did not know much about wave theory, I had to study it.
The result of my study is the calculator below, along with my thoughts on the subject. The calculator does not calculate, or more precisely does not predict, the height of the waves; that is a separate issue, which is reviewed here: The waves and the wind. Wave height statistical forecasting.
### Wave performance calculation
[Interactive calculator; its outputs are: relative depth, wavelength (meters), wavenumber, phase velocity (m/s), and group velocity (m/s), shown with 2 digits after the decimal point.]
## Theory
It is obvious enough that the waves on the sea cannot be described by a single sine wave, as they are formed by the superposition of many waves with different periods and phases. For example, look at the picture below, which shows the wave resulting from the superposition of three different sine waves.
Source: "Wave disp" by Kraaiennest - Own work. Licensed under GFDL via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Wave_disp.gif#mediaviewer/File:Wave_disp.gif
Therefore, to analyse the sea state an energy spectrum is usually built: energy units are plotted on the Y-axis and frequency on the X-axis, giving an energy density, the amount of energy carried by waves in the corresponding frequency range. As it turns out, under the influence of the wind the shape of the energy spectrum changes: the stronger the wind, the more pronounced the peak in the spectrum, i.e. waves of certain frequencies carry the most energy. In the picture below I have drawn its approximate look as best I could.
The frequencies where a peak is observed are called dominant. Accordingly, you can make your life easier and calculate the characteristics of the waves only for the dominant frequency. Practice has shown that this is enough to give a good approximation to reality.
As for the characteristics of the waves, linear wave theory comes to the rescue, namely the treatment of gravity waves in the linearized approximation. To make clearer what I mean, here are some definitions from Wikipedia:
Capillary waves: the name for various waves generated at the interface between a liquid and a gas, or between two liquids. The lower part of a wave is called the trough, the higher part the crest.
Gravity waves on water: waves on the surface of a liquid for which the force returning the deformed surface to equilibrium is simply gravity, related to the height difference between crests and troughs in the gravitational field.
Wave dispersion: in wave theory, the dependence of the phase velocity of linear waves on their frequency. That is, waves of different wavelengths (and hence different frequencies) travel at different speeds in a medium, as clearly demonstrated by the refraction of light in a prism. This is important for the discussion that follows.
Wavenumber: the ratio of 2π radians to the wavelength, $k \equiv \frac{2\pi}{\lambda}$. The wavenumber can be interpreted as the difference in wave phase (in radians), at the same instant, between spatial points one meter apart, or as the number of spatial periods (crests) of the wave arriving per 2π meters.
Using the definition of the wave number we can write the following formula:
Wavelength
$\lambda=\frac{2\pi}{k}$
Phase velocity (crest velocity )
$c=\frac{\omega}{k}$
Wave period(expressed in terms of angular frequency)
$T=\frac{2\pi}{\omega}$
Picture for illustration: the red dot shows the phase velocity, the green dot the group velocity (the velocity of the wave packet)
Source: "Wave group" by Kraaiennest - Own work. Licensed under GFDL via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Wave_group.gif#mediaviewer/File:Wave_group.gif
### The dispersion law
The key element in the calculation of the wave characteristics is the concept of the dispersion law, or dispersion relation.
The dispersion law, or dispersion equation (relation), in wave theory is the relationship between the frequency and the wave vector (wave number).
In general terms, this relationship is written as
$\omega=f(k)$.
For water, this relation is derived in linear wave theory for the so-called free surface, i.e. the surface of the liquid not bounded by the walls of a vessel or by the bed, and looks as follows:
$\omega^2=gk \tanh(kh)$,
where
g - acceleration of free fall,
k - wave number,
tanh - hyperbolic tangent,
h - distance from the liquid surface to the bottom.
It is possible to simplify the formula further, based on the graph of the hyperbolic tangent. Note that as kh tends to zero, the hyperbolic tangent can be approximated by its argument, i.e. by the value kh, and as kh tends to infinity, tanh(kh) tends to one. The latter case obviously corresponds to very great depth. Can we evaluate how big the depth needs to be? If you take the hyperbolic tangent of π, its value is approximately 0.9964, which is already quite close to one (the number π is taken for the convenience of the formula). Then
$kh\geq\pi \Rightarrow \frac{2 \pi h}{\lambda}\geq\pi \Rightarrow h\geq\frac{\lambda}{2}$.
That is, for calculating the wave characteristics, water can be considered deep if the depth is greater than half the wavelength; in most places of the world ocean this condition is met.
In general, based on the graph of a hyperbolic tangent, the following classification of waves for the relative depth is used (ratio of depth to wave length).
1. Waves in deep water
The depth is more than half of wavelength, the hyperbolic tangent approximates by one
$h\geq\frac{\lambda}{2}, \tanh(kh)\approx1$
2. Waves on the transitional depths
The depth from one-twentieth to one-half wavelength, the hyperbolic tangent can not be approximated
$\frac{\lambda}{20} \leq h \leq \frac{\lambda}{2}, \tanh(kh)=\tanh(kh)$
3. Waves in shallow water
The depth less than one-twentieth of the wavelength, the hyperbolic tangent approximated by its argument
$h\leq\frac{\lambda}{20}, \tanh(kh)=kh$
Consider the ratio for these cases
### The case of shallow water
The equation takes the form
$\omega^2=gk(kh)=ghk^2$,
whence
$c=\sqrt{gh}$, $\lambda=T\sqrt{gh}$
The group velocity for the case of shallow water
$c_g=c=\sqrt{gh}$
That is, according to the theory, in shallow water the waves should have no dispersion, because the phase velocity is independent of frequency. However, we must remember that in shallow water nonlinear effects associated with the increase in wave amplitude begin to act. Nonlinear effects matter when the amplitude of the wave is comparable to its length. One of the characteristic effects in this regime is the appearance of sharpening at the wave crests. In addition, there is the possibility of wave breaking, the well-known surf. These effects are not yet amenable to precise analytic calculation.
### Transitional depths case
The equation is not simplified, and then
$c=\frac{gT}{2\pi}\tanh\left(\frac{2\pi h}{\lambda}\right)$, $\lambda=\frac{gT^2}{2\pi}\tanh\left(\frac{2\pi h}{\lambda}\right)$
The group velocity for transitional depths
$c_g=\frac{1}{2}\left(1+\frac{4 \pi h/\lambda}{\sinh(4 \pi h/\lambda)}\right)c$
Note that the equation for the wavelength is transcendental and its solution must be found numerically, for example using the fixed-point iteration method.
### Deep water case
The equation takes the form
$\omega^2=gk$,
whence
$c=\frac{gT}{2\pi}$, $\lambda=\frac{gT^2}{2\pi}$
The group velocity in the deep water case
$c_g=\frac{1}{2}c=\frac{gT}{4\pi}$
So, by measuring the period of a wave with sufficient accuracy, we can calculate the phase velocity, group velocity, and wavelength. The wave period can be measured, for example, by timing the passage of wave crests with a stopwatch; the period is the most reasonable thing that can be measured without special instruments. If you are somewhere near the coast, you need to estimate the depth; if the depth is obviously large, you can use the deep-water formulas, in which depth does not appear as a parameter. Since we have computer processing power at hand, the calculator does not use the simplified formula, finding the wavelength by the iteration method (the method converges, as the derivative of the function is less than one).
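For the curious, here is a minimal Python sketch of such an iteration for the transitional-depth formula $\lambda=\frac{gT^2}{2\pi}\tanh\left(\frac{2\pi h}{\lambda}\right)$. The function name and the example period and depth values are assumptions for illustration, not values taken from the calculator.

```python
import math

def wavelength(T, h, g=9.81, tol=1e-6, max_iter=100):
    """Fixed-point iteration for lambda = (g*T^2 / (2*pi)) * tanh(2*pi*h / lambda),
    starting from the deep-water wavelength."""
    lam = g * T**2 / (2 * math.pi)               # deep-water initial guess
    for _ in range(max_iter):
        new_lam = g * T**2 / (2 * math.pi) * math.tanh(2 * math.pi * h / lam)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return lam

T, h = 8.0, 10.0                                  # assumed example: 8 s period, 10 m depth
lam = wavelength(T, h)
print(f"wavelength ~ {lam:.1f} m, phase velocity ~ {lam / T:.2f} m/s")
```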
Now, back to the wind. A wind blowing constantly in one direction is what generates the waves, transferring energy to them.
It is quite obvious that, in order to transfer energy to the waves, the wind must blow faster than, or at least as fast as, the phase velocity of the waves.
Here the notion of a fully developed sea comes in. A fully developed sea is one whose waves have reached the maximum values possible for the given wind. That is, the waves are in energy equilibrium: as much energy is supplied as goes into the motion. Not every sea reaches such a state, since that requires the wind to blow constantly over the entire surface that the waves cross for some time, and the stronger the wind, the more time and distance are required for such waves to form. But once formed, their phase velocity catches up with the wind speed.
http://mathhelpforum.com/advanced-statistics/979-permutation-problem.html
# Math Help - permutation problem
1. ## permutation problem
The problem is from DeGroot 3rd edition, section 1.7, problem 10: n = 100 balls, r red balls. The balls are chosen one at a time without replacement. a) What is the probability that a red ball is chosen on the first selection? b) What is the probability that a red ball is chosen on the 50th selection? c) What is the probability that a red ball is chosen on the 100th selection?

a) Pr(red ball 1st) = r/n, easy.

c) Pr(red ball 100th)? There are 100! arrangements of the n = 100 balls because sampling is without replacement. Of those arrangements, the only way to guarantee a red ball is chosen last is to arrange the rest of the r-1 red balls in the first 99 positions, which may be done in P(99,r-1) ways, where P(n,k) is the permutation function. Therefore, Pr(red ball 100th) = P(99,r-1) / 100!

b) Pr(red ball 50th)? Not so sure of this one. I don't think I can use the above reasoning because r might be > 50. There are a total of 100! permutations in 100 selections, 50! by the 50th selection. If r <= 50 then there are P(50,r) permutations of the red balls. To guarantee the 50th is a red ball, there are P(49,r-1) ways to arrange the red balls. If r = 50, then Pr(50th red) = 1. If r > 50, then Pr(50th red) = 1. For r < 50, Pr(50th red) should be P(49,r-1)/50! But how to combine the three cases for r to get one probability for all r? I'm fairly sure I'm missing something here.
2. ## Red Ball Selection
You might want to consider using a PR value from a Hypergeometric Distribution.
3. I feel really dumb posting the solution because of all the hoops I jumped through. My professor said that Pr(Red 1st) = Pr(Red 50th) = Pr(Red 100th) = r/100. Given no other information, the probability of a red ball on each selection is the same, even when sampling without replacement. Go figure. Trick question!
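A quick simulation backs up the symmetry argument. This is an illustrative sketch (Python), not part of the original thread; r = 30 is an arbitrary choice, and all three estimates cluster around r/n = 0.30.

```python
import random

def prob_red_at(position, n=100, r=30, trials=100_000):
    """Estimate the probability that the draw at `position` is red when drawing
    one at a time, without replacement, from r red and n - r other balls."""
    balls = [1] * r + [0] * (n - r)
    hits = 0
    for _ in range(trials):
        random.shuffle(balls)          # a fresh uniform ordering of the n balls
        hits += balls[position - 1]
    return hits / trials

for pos in (1, 50, 100):
    print(pos, prob_red_at(pos))       # all approximately r/n = 0.30
```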
# Lorentz force
Figure (Lorentz force.svg): Trajectory of a particle with charge q, under the influence of a magnetic field B (directed perpendicularly out of the screen), for different values of q.
In physics, the Lorentz force is the force on a point charge due to electromagnetic fields. It is given by the following equation in terms of the electric and magnetic fields:[1]
${\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} ),}$
where
F is the force (in newtons)
E is the electric field (in volts per meter)
B is the magnetic field (in teslas)
q is the electric charge of the particle (in coulombs)
v is the instantaneous velocity of the particle (in meters per second)
× is the vector cross product
∇ and ∇ × are gradient and curl, respectively
or equivalently the following equation in terms of the vector potential and scalar potential:
${\displaystyle \mathbf {F} =q(-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {v} \times (\nabla \times \mathbf {A} )),}$
where:
A and ɸ are the magnetic vector potential and electrostatic potential, respectively, which are related to E and B by[2]
${\displaystyle \mathbf {E} =-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}}$
${\displaystyle \mathbf {B} =\nabla \times \mathbf {A} .}$
Note that these are vector equations: All the quantities written in boldface are vectors (in particular, F, E, v, B, A).
The interesting feature of the second form of the Lorentz force law is its clean separation of the irrotational (grad φ) part of the force, which is due to electrical charges, from the solenoidal (A-field) part, which appears as a magnetic or an electric force depending upon the relative velocity of the frame of reference.
The Lorentz force law has a close relationship with Faraday's law of induction.
A positively charged particle will be accelerated in the same linear orientation as the E field, but will curve perpendicularly to both the instantaneous velocity vector v and the B field according to the right-hand rule (in detail, if the thumb of the right hand points along v and the index finger along B, then the middle finger points along F).
The term qE is called the electric force, while the term qv × B is called the magnetic force.[3] According to some definitions, the term "Lorentz force" refers specifically to the formula for the magnetic force:[4]
${\displaystyle \mathbf {F} _{mag}=q\mathbf {v} \times \mathbf {B} }$
with the total electromagnetic force (including the electric force) given some other (nonstandard) name. This article will not follow this nomenclature: In what follows, the term "Lorentz force" will refer only to the expression for the total force.
The magnetic force component of the Lorentz force manifests itself as the force that acts on a current-carrying wire in a magnetic field. In that context, it is also called the Laplace force.
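As a purely illustrative aside (not part of the original article), the force law is easy to explore numerically. The sketch below integrates the nonrelativistic equation of motion m dv/dt = q(E + v × B) with SciPy for a unit charge in a uniform magnetic field; with E = 0 the trajectory is the familiar circular gyration of radius mv/(qB).

```python
import numpy as np
from scipy.integrate import solve_ivp

q, m = 1.0, 1.0                      # illustrative charge and mass
E = np.array([0.0, 0.0, 0.0])        # uniform electric field
B = np.array([0.0, 0.0, 1.0])        # uniform magnetic field along z

def lorentz(t, state):
    r, v = state[:3], state[3:]
    a = (q / m) * (E + np.cross(v, B))   # F = q(E + v x B)
    return np.concatenate([v, a])

state0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])    # at the origin, moving along +x
sol = solve_ivp(lorentz, (0.0, 20.0), state0, max_step=0.01)

# With E = 0 the particle gyrates on a circle of radius m*v/(q*B) = 1
# centred at (0, -1, 0); the distance from that centre stays ~1.
x, y = sol.y[0, -1], sol.y[1, -1]
print(np.hypot(x, y + 1.0))
```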
## History
Lorentz introduced this force in 1892.[5] However, the discovery of the Lorentz force was before Lorentz's time. In particular, it can be seen at equation (77) in Maxwell's 1861 paper On Physical Lines of Force. Later, Maxwell listed it as equation "D" of his 1864 paper, A Dynamical Theory of the Electromagnetic Field, as one of the eight original Maxwell's equations. In this paper the equation was written as follows:
${\displaystyle \mathbf {E} =\mathbf {v} \times (\mu \mathbf {H} )-{\frac {\partial \mathbf {A} }{\partial t}}-\nabla \phi }$
where
A is the magnetic vector potential,
${\displaystyle \phi }$ is the electrostatic potential,
H is the magnetic field H,
${\displaystyle \mu }$ is magnetic permeability.
Although this equation is obviously a direct precursor of the modern Lorentz force equation, it actually differs in two respects:
• It does not contain a factor of q, the charge. Maxwell didn't use the concept of charge. The definition of E used here by Maxwell is unclear. He uses the term electromotive force. He operated from Faraday's electro-tonic state A,[6] which he considered to be a momentum in his vortex sea. The closest term that we can trace to electric charge in Maxwell's papers is the density of free electricity, which appears to refer to the density of the aethereal medium of his molecular vortices and that gives rise to the momentum A. Maxwell believed that A was a fundamental quantity from which electromotive force can be derived.[7]
• The equation here uses what we nowadays call E, which can be expressed in terms of scalar and vector potentials according to
${\displaystyle \mathbf {E} =-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}}$
The fact that E can be expressed this way is equivalent to one of the four modern Maxwell's equations, the Maxwell-Faraday equation.[8]
Despite its historical origins in the original set of eight Maxwell's equations, the Lorentz force is no longer considered to be one of "Maxwell's equations" as the term is currently used (that is, as reformulated by Heaviside). It now sits adjacent to Maxwell's equations as a separate and essential law.[1]
## Significance of the Lorentz force
While the modern Maxwell's equations describe how electrically charged particles and objects give rise to electric and magnetic fields, the Lorentz force law completes that picture by describing the force acting on a moving point charge q in the presence of electromagnetic fields.[1][9] The Lorentz force law describes the effect of E and B upon a point charge, but such electromagnetic forces are not the entire picture. Charged particles are possibly coupled to other forces, notably gravity and nuclear forces. Thus, Maxwell's equations do not stand separate from other physical laws, but are coupled to them via the charge and current densities. The response of a point charge to the Lorentz law is one aspect; the generation of E and B by currents and charges is another.
In real materials the Lorentz force is inadequate to describe the behavior of charged particles, both in principle and as a matter of computation. The charged particles in a material medium both respond to the E and B fields and generate these fields. Complex transport equations must be solved to determine the time and spatial response of charges, for example, the Boltzmann equation or the Fokker–Planck equation or the Navier-Stokes equations. For example, see magnetohydrodynamics, fluid dynamics, electrohydrodynamics, superconductivity, stellar evolution. An entire physical apparatus for dealing with these matters has developed. See for example, Green–Kubo relations and Green's function (many-body theory).
Although one might suggest that these theories are only approximations intended to deal with large ensembles of "point particles", perhaps a deeper perspective is that the charge-bearing particles may respond to forces like gravity, or nuclear forces, or boundary conditions (see for example: boundary layer, boundary condition, Casimir effect, cross section (physics)) that are not electromagnetic interactions, or are approximated in a deus ex machina fashion for tractability.[10]
## Lorentz force law as the definition of E and B
In many textbook treatments of classical electromagnetism, the Lorentz Force Law is used as the definition of the electric and magnetic fields E and B.[11] To be specific, the Lorentz Force is understood to be the following empirical statement:
The electromagnetic force on a test charge at a given point and time is a certain function of its charge and velocity, which can be parameterized by exactly two vectors E and B, in the functional form:
${\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} ).}$
If this empirical statement is valid (and, of course, countless experiments have shown that it is), then two vector fields E and B are thereby defined throughout space and time, and these are called the "electric field" and "magnetic field".
Note that the fields are defined everywhere in space and time, regardless of whether or not a charge is present to experience the force. In particular, the fields are defined with respect to what force a test charge would feel, if it were hypothetically placed there.
Note also that as a definition of E and B, the Lorentz force is only a definition in principle because a real particle (as opposed to the hypothetical "test charge" of infinitesimally-small mass and charge) would generate its own finite E and B fields, which would alter the electromagnetic force that it experiences. In addition, if the charge experiences acceleration, for example, if forced into a curved trajectory by some external agency, it emits radiation that causes braking of its motion. See, for example, Bremsstrahlung and synchrotron light. These effects occur through both a direct effect (called the radiation reaction force) and indirectly (by affecting the motion of nearby charges and currents).
Moreover, the electromagnetic force is not in general the same as the net force, due to gravity, electroweak, and other forces, and any extra forces would have to be taken into account in a real measurement.
## Lorentz force and Faraday's law of induction
Given a loop of wire in a magnetic field, Faraday's law of induction states:
${\displaystyle {\mathcal {E}}=-{\frac {d\Phi _{B}}{dt}}}$
where:
${\displaystyle \Phi _{B}\ }$ is the magnetic flux through the loop,
${\displaystyle {\mathcal {E}}}$ is the electromotive force (EMF) experienced,
t is time
The sign of the EMF is determined by Lenz's Law.
Using the Lorentz force law, the EMF around a closed path ∂Σ is given by:[12][13]
${\displaystyle {\mathcal {E}}=\oint _{\partial \Sigma (t)}d{\boldsymbol {\ell }}\cdot \mathbf {F} /q=\oint _{\partial \Sigma (t)}d{\boldsymbol {\ell }}\cdot \left(\mathbf {E} +\mathbf {v\times B} \right)\ ,}$
where dℓ is a differential element of the curve ∂Σ(t), imagined to be moving in time. The flux ΦB in Faraday's law of induction can be expressed explicitly as:
${\displaystyle {\frac {d\Phi _{B}}{dt}}={\frac {d}{dt}}\iint _{\Sigma (t)}d{\boldsymbol {A}}\cdot \mathbf {B} (\mathbf {r} ,\ t)\ ,}$
where
Σ(t) is a surface bounded by the closed contour ∂Σ(t)
E is the electric field,
dℓ is an infinitesimal vector element of the contour ∂Σ,
v is the velocity of the infinitesimal contour element dℓ,
B is the magnetic field.
dA is an infinitesimal vector element of surface Σ , whose magnitude is the area of an infinitesimal patch of surface, and whose direction is orthogonal to that surface patch.
Both dℓ and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin-Stokes theorem.
The surface integral at the right-hand side of this equation is the explicit expression for the magnetic flux ΦB through Σ. Thus, incorporating the Lorentz law in Faraday's equation, we find:[14] [15]
${\displaystyle \oint _{\partial \Sigma (t)}d{\boldsymbol {\ell }}\cdot \left(\mathbf {E} (\mathbf {r} ,\ t)+\mathbf {v\times B} (\mathbf {r} ,\ t)\right)=-{\frac {d}{dt}}\iint _{\Sigma (t)}d{\boldsymbol {A}}\cdot \mathbf {B} (\mathbf {r} ,\ t)\ .}$
Notice that the ordinary time derivative appearing before the integral sign implies that time differentiation must include differentiation of the limits of integration, which vary with time whenever Σ(t) is a moving surface.
The above result can be compared with the version of Faraday's law of induction that appears in the modern Maxwell's equations, called here the Maxwell-Faraday equation:
${\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}\ .}$
The Maxwell-Faraday equation also can be written in an integral form using the Kelvin-Stokes theorem:[16]
${\displaystyle \oint _{\partial \Sigma (t)}d{\boldsymbol {\ell }}\cdot \mathbf {E} (\mathbf {r} ,\ t)=-\ \iint _{\Sigma (t)}d{\boldsymbol {A}}\cdot {{\partial \mathbf {B} (\mathbf {r} ,\ t)} \over \partial t}}$
Comparison of the Faraday flux law with the integral form of the Maxwell-Faraday relation suggests:
${\displaystyle {\frac {d}{dt}}\iint _{\Sigma (t)}d{\boldsymbol {A}}\cdot \mathbf {B} (\mathbf {r} ,\ t)=\iint _{\Sigma (t)}d{\boldsymbol {A}}\cdot {{\partial \mathbf {B} (\mathbf {r} ,\ t)} \over \partial t}-\oint _{\partial \Sigma (t)}d{\boldsymbol {\ell }}\cdot \left(\mathbf {v\times B} (\mathbf {r} ,\ t)\right)\ .}$
which is a form of the Leibniz integral rule valid because div B = 0.[17] The term in v × B accounts for motional EMF, that is, the EMF due to the movement of the surface Σ, at least in the case of a rigidly translating body. In contrast, the integral form of the Maxwell-Faraday equation includes only the effect of the E-field generated by ∂B/∂t.
Often the integral form of the Maxwell-Faraday equation is used alone, and is written with the partial derivative outside the integral sign as:
${\displaystyle \oint _{\partial \Sigma }d{\boldsymbol {\ell }}\cdot \mathbf {E} (\mathbf {r} ,\ t)=-{\partial \over \partial t}\ \iint _{\Sigma }d{\boldsymbol {A}}\cdot {\mathbf {B} (\mathbf {r} ,\ t)}\ .}$
Notice that the limits ∂Σ and Σ have no time dependence. In the context of the Maxwell-Faraday equation, the usual interpretation of the partial time derivative is extended to imply a stationary boundary. On the other hand, Faraday's law of induction holds whether the loop of wire is rigid and stationary, or in motion or in process of deformation, and it holds whether the magnetic field is constant in time or changing. However, there are cases where Faraday's law is either inadequate or difficult to use, and application of the underlying Lorentz force law is necessary. See inapplicability of Faraday's law.
If the magnetic field is fixed in time and the conducting loop moves through the field, the magnetic flux ΦB linking the loop can change in several ways. For example, if the B-field varies with position, and the loop moves to a location with different B-field, ΦB will change. Alternatively, if the loop changes orientation with respect to the B-field, the B•dA differential element will change because of the different angle between B and dA, also changing ΦB. As a third example, if a portion of the circuit is swept through a uniform, time-independent B-field, and another portion of the circuit is held stationary, the flux linking the entire closed circuit can change due to the shift in relative position of the circuit's component parts with time (surface Σ(t) time-dependent). In all three cases, Faraday's law of induction then predicts the EMF generated by the change in ΦB.
In a contrasting circumstance, when the loop is stationary and the B-field varies with time, the Maxwell-Faraday equation shows a nonconservative[18] E-field is generated in the loop, which drives the carriers around the wire via the q E term in the Lorentz force. This situation also changes ΦB, producing an EMF predicted by Faraday's law of induction.
Naturally, in both cases, the precise value of current that flows in response to the Lorentz force depends on the conductivity of the loop.
## Lorentz force in terms of potentials
If the scalar potential and vector potential replace E and B (see Helmholtz decomposition), the force becomes:
${\displaystyle \mathbf {F} =q(-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial \mathbf {t} }}+\mathbf {v} \times (\nabla \times \mathbf {A} ))}$
or, equivalently (making use of the fact that v is a constant; see triple product),
${\displaystyle \mathbf {F} =q(-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial \mathbf {t} }}+\nabla (\mathbf {v} \cdot \mathbf {A} )-(\mathbf {v} \cdot \nabla )\mathbf {A} )}$
where
A is the magnetic vector potential
${\displaystyle \phi }$ is the electrostatic potential
The symbols ${\displaystyle \nabla ,(\nabla \times ),(\nabla \cdot )}$ denote gradient, curl, and divergence, respectively.
The potentials are related to E and B by
${\displaystyle \mathbf {E} =-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}}$
${\displaystyle \mathbf {B} =\nabla \times \mathbf {A} }$
## Lorentz force in cgs units
The above-mentioned formulae use SI units which are the most common among experimentalists, technicians, and engineers. In cgs units, which are somewhat more common among theoretical physicists, one has instead
${\displaystyle \mathbf {F} =q_{cgs}\cdot (\mathbf {E} _{cgs}+{\frac {\mathbf {v} }{c}}\times \mathbf {B} _{cgs}).}$
where c is the speed of light. Although this equation looks slightly different, it is completely equivalent, since one has the following relations:
${\displaystyle q_{cgs}={\frac {q_{SI}}{\sqrt {4\pi \epsilon _{0}}}}}$, ${\displaystyle \mathbf {E} _{cgs}={\sqrt {4\pi \epsilon _{0}}}\,\mathbf {E} _{SI}}$, and ${\displaystyle \mathbf {B} _{cgs}={\sqrt {4\pi /\mu _{0}}}\,{\mathbf {B} _{SI}}}$
where ε0 and μ0 are the vacuum permittivity and vacuum permeability, respectively. In practice, unfortunately, the subscripts "cgs" and "SI" are always omitted, and the unit system has to be assessed from context.
## Covariant form of the Lorentz force
Newton's law of motion can be written in covariant form in terms of the field strength tensor.
${\displaystyle {\frac {dp^{\alpha }}{d\tau }}=qu_{\beta }F^{\alpha \beta }}$
where
${\displaystyle \tau }$ is the proper time of the particle,
q is the charge,
u is the 4-velocity of the particle, defined as:
${\displaystyle u_{\beta }=\left(u_{0},u_{1},u_{2},u_{3}\right)=\gamma \left(c,v_{x},v_{y},v_{z}\right)\,}$
with γ the Lorentz factor, and F is the field strength tensor (or electromagnetic tensor), written in terms of the fields as:
${\displaystyle F^{\alpha \beta }={\begin{bmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{bmatrix}}}$.
The fields are transformed to a frame moving with constant relative velocity by:
${\displaystyle {\acute {F}}^{\mu \nu }={\Lambda ^{\mu }}_{\alpha }{\Lambda ^{\nu }}_{\beta }F^{\alpha \beta },}$
where ${\displaystyle {\Lambda ^{\mu }}_{\alpha }}$ is a Lorentz transformation. Alternatively, using the four vector:
${\displaystyle A^{\alpha }=\left(\phi /c,\ A_{x},\ A_{y},\ A_{z}\right)\ ,}$
related to the electric and magnetic fields by:
${\displaystyle \mathbf {E=-\nabla } \phi -\partial _{t}\mathbf {A} }$ ${\displaystyle \mathbf {B=\nabla \times A} \ ,}$
the field tensor becomes:[19]
${\displaystyle F^{\alpha \beta }={\frac {\partial A^{\beta }}{\partial x_{\alpha }}}-{\frac {\partial A^{\alpha }}{\partial x_{\beta }}}\ ,}$
where:
${\displaystyle x_{\alpha }=\left(-ct,\ x,\ y,\ z\right)\ .}$
### Translation to vector notation
The ${\displaystyle \mu =1}$ component (x-component) of the force is
${\displaystyle \gamma {\frac {dp^{1}}{dt}}={\frac {dp^{1}}{d\tau }}=qu_{\beta }F^{1\beta }=q\left(-u^{0}F^{10}+u^{1}F^{11}+u^{2}F^{12}+u^{3}F^{13}\right).\,}$
Here, ${\displaystyle \tau }$ is the proper time of the particle. Substituting the components of the electromagnetic tensor F yields
${\displaystyle \gamma {\frac {dp^{1}}{dt}}=q\left(-u^{0}\left({\frac {-E_{x}}{c}}\right)+u^{2}(B_{z})+u^{3}(-B_{y})\right)\,}$
Writing the four-velocity in terms of the ordinary velocity yields
${\displaystyle \gamma {\frac {dp^{1}}{dt}}=q\gamma \left(c\left({\frac {E_{x}}{c}}\right)+v_{y}B_{z}-v_{z}B_{y}\right)\,}$
${\displaystyle \gamma {\frac {dp^{1}}{dt}}=q\gamma \left(E_{x}+\left(\mathbf {v} \times \mathbf {B} \right)_{x}\right).\,}$
The calculation of the ${\displaystyle \mu =2}$ or ${\displaystyle \mu =3}$ components is similar, yielding
${\displaystyle \gamma {\frac {d\mathbf {p} }{dt}}={\frac {d\mathbf {p} }{d\tau }}=q\gamma \left(\mathbf {E} +(\mathbf {v} \times \mathbf {B} )\right)\ ,}$
or, in terms of the vector and scalar potentials A and φ,
${\displaystyle {\frac {d\mathbf {p} }{d\tau }}=q\gamma (-\nabla \phi -{\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {v} \times (\nabla \times \mathbf {A} ))\ ,}$
which are the relativistic forms of Newton's law of motion when the Lorentz force is the only force present.
## Force on a current-carrying wire
When a wire carrying an electrical current is placed in a magnetic field, each of the moving charges, which comprise the current, experiences the Lorentz force, and together they can create a macroscopic force on the wire (sometimes called the Laplace force). By combining the Lorentz force law above with the definition of electrical current, the following equation results, in the case of a straight, stationary wire:
${\displaystyle \mathbf {F} =I\mathbf {L} \times \mathbf {B} \,}$
where
F = Force, measured in newtons
I = current in wire, measured in amperes
B = magnetic field vector, measured in teslas
${\displaystyle \times }$ = vector cross product
L = a vector, whose magnitude is the length of wire (measured in metres), and whose direction is along the wire, aligned with the direction of conventional current flow.
Alternatively, some authors write
${\displaystyle \mathbf {F} =L\mathbf {I} \times \mathbf {B} }$
where the vector direction is now associated with the current variable, instead of the length variable. The two forms are equivalent.
If the wire is not straight but curved, the force on it can be computed by applying this formula to each infinitesimal segment of wire dℓ, then adding up all these forces via integration. Formally, the net force on a stationary, rigid wire carrying a current I is
${\displaystyle \mathbf {F} =I\oint d{\boldsymbol {\ell }}\times \mathbf {B} ({\boldsymbol {\ell }}\ )}$
(This is the net force. In addition, there will usually be torque, plus other effects if the wire is not perfectly rigid.)
One application of this is Ampère's force law, which describes how two current-carrying wires can attract or repel each other, since each experiences a Lorentz force from the other's magnetic field. For more information, see the article: Ampère's force law.
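The line integral above is also easy to evaluate numerically for a curved wire. The sketch below (illustrative values, not from the article) discretizes a semicircular wire in a uniform field and checks the result against the fact that, for uniform B, the integral collapses to I times the chord vector crossed with B.

```python
import numpy as np

I = 2.0                               # current in amperes
B = np.array([0.0, 0.0, 0.5])         # uniform magnetic field in teslas
R = 0.1                               # radius of a semicircular wire, in metres

# Discretize the semicircle from (R, 0, 0) to (-R, 0, 0) in the xy-plane.
theta = np.linspace(0.0, np.pi, 2001)
points = np.stack([R * np.cos(theta), R * np.sin(theta), np.zeros_like(theta)], axis=1)
dl = np.diff(points, axis=0)                       # segment vectors d(ell)
F = I * np.sum(np.cross(dl, B), axis=0)            # F = I * sum of dl x B

chord = points[-1] - points[0]                     # for uniform B: F = I * chord x B
print(F, I * np.cross(chord, B))
```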
## EMF
The magnetic force (q v × B) component of the Lorentz force is responsible for motional electromotive force (or motional EMF), the phenomenon underlying many electrical generators. When a conductor is moved through a magnetic field, the magnetic force tries to push electrons through the wire, and this creates the EMF. The term "motional EMF" is applied to this phenomenon, since the EMF is due to the motion of the wire.
In other electrical generators, the magnets move, while the conductors do not. In this case, the EMF is due to the electric force (qE) term in the Lorentz Force equation. The electric field in question is created by the changing magnetic field, resulting in an induced EMF, as described by the Maxwell-Faraday equation (one of the four modern Maxwell's equations).[20]
The two effects are not however symmetric. As one demonstration of this, a charge rotating around the magnetic axis of a stationary, cylindrically-symmetric bar magnet will experience a magnetic force, whereas if the charge is stationary and the magnet is rotating about its axis, there will be no force. This asymmetric effect is called Faraday's paradox.
Both of these EMFs, despite their different origins, can be described by the same equation, namely, the EMF is the rate of change of magnetic flux through the wire. (This is Faraday's law of induction; see above.) Einstein's theory of special relativity was partially motivated by the desire to better understand this link between the two effects.[20] In fact, the electric and magnetic fields are different faces of the same electromagnetic field, and in moving from one inertial frame to another, the solenoidal vector field portion of the E-field can change in whole or in part to a B-field or vice versa.[21]
## General references
The numbered references refer in part to the list immediately below.
• Griffiths, David J. (1999), Introduction to electrodynamics (3rd ed.), Upper Saddle River, [NJ.]: Prentice-Hall, ISBN 0-13-805326-X
• Jackson, John David (1999), Classical electrodynamics (3rd ed.), New York, [NY.]: Wiley, ISBN 0-471-30932-X
• Serway, Raymond A.; Jewett, John W., Jr. (2004), Physics for scientists and engineers, with modern physics, Belmont, [CA.]: Thomson Brooks/Cole, ISBN 0-534-40846-X
## Numbered footnotes and references
1. See Jackson page 2. The book lists the four modern Maxwell's equations, and then states, "Also essential for consideration of charged particle motion is the Lorentz force equation, F = q ( E+ v × B ), which gives the force acting on a point charge q in the presence of electromagnetic fields."
2. These definitions use Helmholtz's theorem. Because divB = 0 (Gauss's law for magnetism), Helmholtz's theorem proves that we can define a vector field A (called the magnetic potential) such that B = ∇ × A. From the Maxwell-Faraday equation, ∇ × E = −∂t B so ∇ × [ E + ∂t A ] = 0. Applying Helmholtz's theorem again to E + ∂t A, which has zero curl, we find that we can define a scalar field ɸ (called the electric potential) with E + ∂t A = −∇ɸ. The equation for B automatically satisfies ∇•B = 0, that is, demonstrates that B is a solenoidal vector field. Also, the equation for E shows that it can have two different components: a conservative or irrotational vector field component (which originates in electric charges) and a nonconservative or curl component (which originates in the Maxwell-Faraday equation). For more details, see magnetic potential and electric potential.
3. See Griffiths page 204.
4. For example, see the website of the "Lorentz Institute": [1], or Griffiths.
5. Darrigol, Olivier (2000), Electrodynamics from Ampère to Einstein, Oxford: Oxford University Press, p. 327, ISBN 0-198-50593-0.
6. "While the wire is subject to either volta-electric or magneto-electric induction it appears to be in a peculiar state, for it resists the formation of an electrical current in it. … I have … ventured to designate it as the electro-tonic state." Quoted by Maxwell from Faraday, Trans. Cam. Phil. Soc., p. 51, v. 10 (1864)
7. At the experimental level in classical electromagnetism, E and B are the fundamental, measurable, physical fields. See, for example, Griffiths page 417, or Jackson page 239. However, in quantum field theory, the potentials A and ${\displaystyle \phi }$ play a fundamental role. See, for example, Srednicki, Chapter 58, p. 351 ff. and R Littlejohn on quantization of the electromagnetic field; Physics 221B notes–quantizationPhysics 221B notes–interaction However, the fields themselves can be related to electromotive force (in the modern definition) only by addition of the Lorentz force. Maxwell did not formulate the equations with a separate Lorentz force equation.
8. See Griffiths page 417, or Jackson page 239.
9. See Griffiths page 326, which states that Maxwell's equations, "together with the [Lorentz] force law...summarize the entire theoretical content of classical electrodynamics".
10. That is, a first-principles approach might be approximated to make calculation possible without complications that are not very significant to the results. For example, a metallic boundary might be approximated as having infinite conductivity. A statistical mechanical model of a plasma might approximate the treatment of collisions with boundaries and between particles.
11. See, for example, Jackson p777-8.
12. Landau, L. D., Lifshitz, E. M., & Pitaevskii, L. P. (1984). Electrodynamics of Continuous Media; Volume 8, Course of Theoretical Physics (2nd ed.). Oxford: Butterworth-Heinemann. §63 (§49, pp. 205–207 in the 1960 edition). ISBN 0750626348.
13. M. N. O. Sadiku (2007). Elements of Electromagnetics (4th ed.). NY/Oxford: Oxford University Press. p. 391. ISBN 0-19-530048-3.
14. If the boundary deforms, so velocity varies with location, the velocity v is the velocity at the location of dℓ. See Edward J. Rothwell, Michael J. Cloud (2001). Electromagnetics. Boca Raton, Fla: CRC Press. p. 56. ISBN 084931397X.
15. Jackson JD. Eqs. 5.141 & 5.142, p. 211. ISBN 0-471-30932-X.
16. Roger F. Harrington (2003). Introduction to Electromagnetic Engineering. Mineola, NY: Dover Publications. p. 56. ISBN 0486432416.
17. If the surface deforms, the Leibniz integral rule is more complicated. A mathematical demonstration of this result for deformable surfaces has not been located.
18. That is, a field that is not conservative, not expressible as the gradient of a scalar field, and not subject to the gradient theorem.
19. D. J. Griffiths (1999). Introduction to Electrodynamics. Saddle River NJ: Pearson/Addison-Wesley. p. 541. ISBN 0-13-805326-X.
20. See Griffiths pages 301–3.
21. Tai L. Chow (2006). Electromagnetic Theory. Sudbury MA: Jones and Bartlett. p. 395. ISBN 0-7637-3827-1.
## Applications
The Lorentz force occurs in many devices, including cyclotrons and other circular particle accelerators, mass spectrometers, and velocity filters. In its manifestation as the Laplace force on an electric current in a conductor, it occurs in many devices, including electric motors, loudspeakers, railguns, and electrical generators.
## Bifurcation analysis of a coupled Kuramoto-Sivashinsky- and Ginzburg-Landau-type model. (English) Zbl 1397.35228
Summary: We study the bifurcation and stability of trivial stationary solution $$(0,0)$$ of coupled Kuramoto-Sivashinsky- and Ginzburg-Landau-type equations (KS-GL) on a bounded domain $$(0,L)$$ with Neumann’s boundary conditions. The asymptotic behavior of the trivial solution of the equations is considered. With the length $$L$$ of the domain regarded as bifurcation parameter, branches of nontrivial solutions are shown by using the perturbation method. Moreover, local behavior of these branches is studied, and the stability of the bifurcated solutions is analyzed as well.
### MSC:
- 35Q35 PDEs in connection with fluid mechanics
- 35B32 Bifurcations in context of PDEs
- 35B35 Stability in context of PDEs
- 35Q56 Ginzburg-Landau equations
# If F, G are two formulas and h[F] is the height of the formula F, then h[G a F] is less than or equal to sup(h[F], h[G]) + 1
If F, G are two propositional formulas, h[F] is the height of the formula F, and a is one of the connectives, then h[G a F] is less than or equal to sup(h[F], h[G]) + 1. My question is: what is sup, and how do I compute it?
$\sup$ is most likely the supremum:
In mathematics, the supremum (sup) of a subset S of a totally or partially ordered set T is the least element of T that is greater than or equal to all elements of S. Consequently, the supremum is also referred to as the least upper bound (lub or LUB). If the supremum exists, it is unique meaning that there will be only one supremum. If S contains a greatest element, then that element is the supremum; otherwise, the supremum does not belong to S (or does not exist). For instance, the negative real numbers do not have a greatest element, and their supremum is 0 (which is not a negative real number).
As André Nicolas already mentioned, there is no difference between the maximum and the supremum given that the formulas whose height you need to compute are of finite length. Infinitary logic, however, admits formulas of infinite height.
Thank you very much :) I use the text "Mathematical Logic: A Course with Exercises, Part 1" by René Cori and Lascar; it would have been better if he had used max instead of sup! – Maths Lover Jan 17 '13 at 18:09
The term $\sup$ stands for supremum. For finite sets, it coincides with $\max$, the maximum. So for example $\sup(3,7)=7$, and $\sup(4,4)=4$. It is surprising that $\sup$ was used instead of the more common $\max$.
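Since a formula has only finitely many subformulas, sup here is just max, and the inequality in the title is exactly what a recursive height computation gives for a binary connective. A small illustrative sketch (Python; the convention that a propositional variable has height 0 is assumed here, and texts differ on this):

```python
# A formula is either a variable (a string) or a tuple
# (connective, left subformula, right subformula).
def height(formula):
    if isinstance(formula, str):              # propositional variable
        return 0
    _, left, right = formula
    return 1 + max(height(left), height(right))

F = ("and", "p", ("or", "q", "r"))
G = "p"
# height of (F connective G) is 1 + max(height(F), height(G))
print(height(F), height(G), height(("implies", F, G)))   # 2 0 3
```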
Chapter 14, Problem 14RE
### Mathematical Applications for the Management, Life, and Social Sciences
11th Edition
Ronald J. Harshbarger + 1 other
ISBN: 9781305108042
Textbook Problem
# Find the slope of the tangent in the x-direction to the surface $z = 5x^4 - 3xy^2 + y^2$ at $(1, 2, -3)$.
To determine
To calculate: The slope of the tangent in the x-direction to the surface $z = 5x^4 - 3xy^2 + y^2$ at the point $(1, 2, -3)$.
Explanation
Given Information:
The provided surface is $z = 5x^4 - 3xy^2 + y^2$.
The provided point is $(1, 2, -3)$.
Formula used:
The slope of the tangent of a function $f(x, y)$ in the positive x-direction is given by $f_x$.
For a function $f(x, y)$, the partial derivative of $f$ with respect to $x$ is calculated by taking the derivative of $f(x, y)$ with respect to $x$ and keeping the other variable $y$ constant. The partial derivative of $f$ with respect to $x$ is denoted by $f_x$ and the partial derivative of $f$ with respect to $y$ is denoted by $f_y$.
Power of $x$ rule for a real number $n$: if $f(x) = x^n$ then $f'(x) = n x^{n-1}$.
Constant function rule for a constant $c$: if $f(x) = c$ then $f'(x) = 0$.
Coefficient rule for a constant $c$: if $f(x) = c\,u(x)$, where $u(x)$ is a differentiable function of $x$, then $f'(x) = c\,u'(x)$.
Calculation:
Consider the provided surface, $z = 5x^4 - 3xy^2 + y^2$.
Since $z$ is a function of the variables $x$ and $y$,
the function is $z(x, y) = 5x^4 - 3xy^2 + y^2$.
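The explanation breaks off here; for completeness, the remaining steps, which follow directly from the rules quoted above, are:
$$z_x = \frac{\partial}{\partial x}\left(5x^4 - 3xy^2 + y^2\right) = 20x^3 - 3y^2,$$
$$z_x\big|_{(1,\,2)} = 20(1)^3 - 3(2)^2 = 20 - 12 = 8,$$
so the slope of the tangent in the x-direction at $(1, 2, -3)$ is 8.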
# Solving a pair of nonlinear coupled DEs
1. Apr 23, 2013
### zeroseven
Hi everyone, new member zeroseven here. First, I want to say that it's great to have a forum like this! Looking forward to participating in the discussion.
Anyway, I need to solve a pair of differential equations for an initial value problem, but am not sure if an analytical solution exists. I have been able to solve a special case as I explain below, but remain stumped with the more general form.
The equations are as follows:
dx/dt=-ax-cxy
dy/dt=-bx-cxy
Where a, b, and c are constants (all >0 in the problem I am trying to solve) and x and y the functions I need to solve.
I can solve the special case when a=b by subtracting the 2nd eq. from the 1st. Then I get
d(x-y)/dt=-a(x-y), which is easy to solve for x-y, and the rest is pretty easy too. But this doesn't work for the general form where a and b are different.
Anyone have any ideas? Ultimately, what I really need is x*y, so if there is a way to get that without solving for x and y first, that is fine too.
They look deceptively simple, I hope a solution exists!
Cheers,
zeroseven
Last edited: Apr 23, 2013
2. Apr 24, 2013
### zeroseven
Just wanted to add a bit more detail about what I need to do:
The end result that I want is the integral
$\int^{\infty}_{0} c\,x(t)\,y(t)\,dt$
So again, if it is possible to obtain this without having analytical solutions for x(t) and y(t) separately, that is fine. I don't even need x(t)y(t) like I misleadingly wrote in my original post, the definite integral is enough.
The initial values x(0) and y(0) are positive constants.
This is for some research I am doing, and really the only step that I haven't got figured out. Any suggestions would be much appreciated!
3. Apr 25, 2013
### haruspex
Yes, they look simple, but have you tried eliminating y to obtain a second order DE in x? That doesn't look so nice.
4. Apr 25, 2013
### Staff: Mentor
Divide the second equation by the first equation and see what you get.
5. Apr 26, 2013
### the_wolfman
These equations are very similar to the Lotka-Volterra equations. To my knowledge the L-V equations do not have a simple known solution, but studying them might give you some hints about how to solve your equation.
Also do you know the time asymptotic solutions for x(t) and y(t)?
You can rewrite your equations as
$cxy=-ax- d_t x$
$cxy=-bx- d_t y$
Integrating these two equations over time gives
$\int cxy dt=-a\int x dt-x(\infty)+x(0)$
$\int cxy dt=-b\int x dt-y(\infty)+y(0)$
If you know that your integrals are well behaved and if you know the initial values of x and y and their time asymptotic solutions, then with a little bit of algebra you can solve for $\int cxy dt$
6. Apr 26, 2013
### Staff: Mentor
$$\frac{dy}{dx}=\frac{a+cy}{b+cy}$$
$$dx=\frac{(b+cy)dy}{a+cy}=(\frac{b-a}{a+cy}+1)dy$$
$$x=y+\frac{(b-a)}{c}\ln(a+cy) + C$$
7. Apr 27, 2013
### zeroseven
First, thanks for the replies everyone!
Second, I need to apologize. It seems I made a small typo in my first post; the equations should be
dx/dt=-ax-cxy
dy/dt=-by-cxy
(so bx in the first post should be by)
But the good news is, the replies didn't go to waste anyway. They pointed me in the right direction in that I should aim to eliminate dt completely from the equations. That way even this correct form of the equations becomes separable:
(b+cx)/x dx = (a+cy)/y dy
The solution cannot be expressed in elementary functions (as far as I know). It involves the Lambert w function. Luckily matlab and mathematica can deal with this easily.
Asymptotically, x(t) and y(t) go from a positive value to zero, so the integral $\int^{\infty}_{0} c\,x(t)\,y(t)\,dt$ can be evaluated numerically.
So this is pretty much solved despite the typo!
Very interesting connection to the Lotka-Volterra equations wolfman, I need to look into that...
8. Apr 27, 2013
### zeroseven
Interestingly (and frustratingly) I am unable to obtain the elementary function solution even for the special case a=b if I do it by eliminating dt. If I use the method I described in the first post, I can get a fairly simple elementary function for the integral that I need. But if I start by eliminating dt, then I keep running into equations with Lambert's w in them, and don't know how to get back to elementary functions from there.
This makes me wonder whether there is a solution with elementary functions for the general case (a and b unequal).
9. Apr 28, 2013
### zeroseven
Another update: Lotka-Volterra equations was a great tip. They are almost identical in form to my equations, and cannot be solved with elementary functions, which convinces me that my equations can't be either. Lambert's W works in both cases, though.
Anyone interested, have a look here:
http://www.emis.de/journals/DM/v13-2/art3.pdf
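To make the numerical step concrete, here is a minimal sketch (Python/SciPy, with arbitrary illustrative constants; not the original poster's code). It appends an auxiliary state that accumulates $\int c\,x\,y\,dt$ while the corrected system is integrated out to a time large enough for x and y to have decayed.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 1.0, 2.0, 0.5          # illustrative constants, all > 0
x0, y0 = 3.0, 4.0                # positive initial values

def rhs(t, s):
    x, y, _ = s
    coupling = c * x * y
    return [-a * x - coupling,   # dx/dt = -a*x - c*x*y
            -b * y - coupling,   # dy/dt = -b*y - c*x*y
            coupling]            # accumulates the integral of c*x*y

# x and y decay to zero, so a large final time stands in for infinity.
sol = solve_ivp(rhs, (0.0, 100.0), [x0, y0, 0.0], rtol=1e-10, atol=1e-12)
print("integral of c*x*y dt ≈", sol.y[2, -1])
```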
Sunk by correlation
Equity Derivatives
Equity derivatives businesses have seen their revenues riven by losses during the first half of 2008. The malaise in credit, brought on by excessive US subprime mortgage losses, took hold of major equity markets earlier in the year. Subsequent spikes in volatility and correlation, combined with a drop in dividend expectations, have conspired to cause mark-to-market carnage for dealers' exotic books, which some estimate runs into billions of dollars.
"Everybody's looking at their exotic books. We've seen a whole series of two- and three-, if not more, standard-deviation moves. People never think the extreme moves are going to happen until they do," says Dan Fields, Paris-based global head of trading for equity derivatives and derivatives solutions at Societe Generale Corporate and Investment Banking (SG CIB).
Throughout most of December 2007 and into the beginning of January, 10-day historical volatility on the Dow Jones Eurostoxx 50 index drifted between 10% and 20%. But having been at 13.44% on January 17, volatility rocketed to a high of 62.89% on January 30. It subsequently receded to between 20% and 35% for much of February before spiking again to 39.43% on March 25.
Volatile markets usually make for trickier trading conditions - but crucially, this coincided with a sharp rise in correlation across global markets. Realised and implied correlations between global equity indexes, individual stocks and even between stock markets and some currency pairs have all soared. With most dealers short correlation as a consequence of popular structured products sold to retail and private banking clients, this delivered a painful blow to exotic books.
"It's not the products that are the most complicated from a financial-engineering perspective that would have caused most of the problems," says Neil McCormick, head of global equity exotics and hybrids at JP Morgan in London. "It would be those that have been around for many years, are high volume and have fairly low barriers to entry for a bank. The type of products would have been shorter-dated yield-enhancement products, typically with maturities between one and three years. They would involve people putting their principal at risk by selling options and trying to get coupons above the risk-free rate."
This encompasses products such as auto-callables, reverse convertibles and worst-of baskets. By referencing notes to baskets of stocks, investors are able to enhance coupons by taking a view on correlation. In the case of worst-of products, the payout is referenced to the worst-performing share in the basket. It is therefore beneficial when correlation is high and all stocks move in same direction, increasing the likelihood that the path of the worst-performing share is positive. The dealer, conversely, is short correlation.
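A stylized simulation (not a pricing model; the volatility and correlations below are arbitrary) illustrates the mechanism: the expected performance of the worst of two assets rises with the correlation of their returns, which is the sense in which the dealer on the other side of these products is short correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_worst_of(rho, n_paths=200_000, vol=0.3, t=1.0):
    """Expected terminal value of the worst performer of two lognormal assets
    (both starting at 1, zero drift) for a given return correlation rho."""
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
    s1 = np.exp(-0.5 * vol ** 2 * t + vol * np.sqrt(t) * z1)
    s2 = np.exp(-0.5 * vol ** 2 * t + vol * np.sqrt(t) * z2)
    return np.minimum(s1, s2).mean()

# Higher correlation lifts the expected worst-of performance.
for rho in (0.2, 0.5, 0.8, 0.95):
    print(rho, round(expected_worst_of(rho), 4))
```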
The whipsawing in equity markets followed a widespread bull run in structured products issuance, which saw a flood of new players enter the market - many with little experience of handling unexpected spikes in correlation and volatility. Even for those that have experienced similar conditions in the past, the current dislocation has lasted longer than any previous disruption, says McCormick. "It's not the first time we've seen periods of high volatility and high correlation, but I think it's probably the largest and most prolonged move we've seen in those two parameters, and it's come at a time when the business is very much larger than it has been in the past," he says.
# xarray.plot.contour¶
xarray.plot.contour(x, y, z, ax, **kwargs)
Contour plot of 2d DataArray
Parameters
darray : DataArray
Must be 2 dimensional, unless creating faceted plots
x : string, optional
Coordinate for x axis. If None use darray.dims[1]
y : string, optional
Coordinate for y axis. If None use darray.dims[0]
figsize : tuple, optional
A tuple (width, height) of the figure in inches. Mutually exclusive with size and ax.
aspect : scalar, optional
Aspect ratio of plot, so that aspect * size gives the width in inches. Only used if a size is provided.
size : scalar, optional
If provided, create a new figure for the plot with the given size. Height (in inches) of each plot. See also: aspect.
ax : matplotlib axes object, optional
Axis on which to plot this figure. By default, use the current axis. Mutually exclusive with size and figsize.
row : string, optional
If passed, make row faceted plots on this dimension name
col : string, optional
If passed, make column faceted plots on this dimension name
col_wrap : integer, optional
Use together with col to wrap faceted plots
xscale, yscale : 'linear', 'symlog', 'log', 'logit', optional
Specifies scaling for the x- and y-axes respectively
xticks, yticks : Specify tick locations for x- and y-axes
xlim, ylim : Specify x- and y-axes limits
xincrease : None, True, or False, optional
Should the values on the x axes be increasing from left to right? If None, use the default for the matplotlib function.
yincrease : None, True, or False, optional
Should the values on the y axes be increasing from top to bottom? If None, use the default for the matplotlib function.
add_labels : bool, optional
Use xarray metadata to label axes
norm : matplotlib.colors.Normalize instance, optional
If the norm has vmin or vmax specified, the corresponding kwarg must be None.
vmin, vmax : floats, optional
Values to anchor the colormap, otherwise they are inferred from the data and other keyword arguments. When a diverging dataset is inferred, setting one of these values will fix the other by symmetry around center. Setting both values prevents use of a diverging colormap. If discrete levels are provided as an explicit list, both of these values are ignored.
cmap : matplotlib colormap name or object, optional
The mapping from data values to color space. If not provided, this will either be viridis (if the function infers a sequential dataset) or RdBu_r (if the function infers a diverging dataset). When Seaborn is installed, cmap may also be a seaborn color palette. If cmap is a seaborn color palette and the plot type is not contour or contourf, levels must also be specified.
colors : discrete colors to plot, optional
A single color or a list of colors. If the plot type is not contour or contourf, the levels argument is required.
center : float, optional
The value at which to center the colormap. Passing this value implies use of a diverging colormap. Setting it to False prevents use of a diverging colormap.
robust : bool, optional
If True and vmin or vmax are absent, the colormap range is computed with 2nd and 98th percentiles instead of the extreme values.
extend : {'neither', 'both', 'min', 'max'}, optional
How to draw arrows extending the colorbar beyond its limits. If not provided, extend is inferred from vmin, vmax and the data limits.
levels : int or list-like object, optional
Split the colormap (cmap) into discrete color intervals. If an integer is provided, "nice" levels are chosen based on the data range: this can imply that the final number of levels is not exactly the expected one. Setting vmin and/or vmax with levels=N is equivalent to setting levels=np.linspace(vmin, vmax, N).
infer_intervals : bool, optional
Only applies to pcolormesh. If True, the coordinate intervals are passed to pcolormesh. If False, the original coordinates are used (this can be useful for certain map projections). The default is to always infer intervals, unless the mesh is irregular and plotted on a map projection.
subplot_kws : dict, optional
Dictionary of keyword arguments for matplotlib subplots. Only applies to FacetGrid plotting.
cbar_ax : matplotlib Axes, optional
Axes in which to draw the colorbar.
cbar_kwargs : dict, optional
Dictionary of keyword arguments to pass to the colorbar.
**kwargs : optional
Additional arguments to wrapped matplotlib function
Returns
artist :
The same type of primitive artist that the wrapped matplotlib function returns
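A minimal usage sketch (assuming xarray, NumPy and matplotlib are installed; the data and keyword choices below are arbitrary):

```python
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt

x = np.linspace(-3.0, 3.0, 60)
y = np.linspace(-2.0, 2.0, 40)
z = np.exp(-(x[None, :] ** 2 + y[:, None] ** 2))          # shape (y, x)

darray = xr.DataArray(z, coords={"y": y, "x": x}, dims=("y", "x"), name="intensity")

# Either the accessor method or the module-level function documented above works.
darray.plot.contour(levels=8, cmap="viridis")
# xr.plot.contour(darray, levels=8, cmap="viridis")
plt.show()
```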
# Symmetric α-stable stochastic process and the third initial-boundary-value problem for the corresponding pseudodifferential equation
Abstract
We consider a pseudodifferential equation of parabolic type with an operator of fractional differentiation with respect to a space variable generating a symmetric $\alpha$-stable process in a multidimensional Euclidean space, with an initial condition and a boundary condition imposed on the values of an unknown function at the points of the boundary of a given domain. The last condition is quite similar to the condition of the so-called third (mixed) boundary-value problem in the theory of differential equations, with the difference that the traditional (co)normal derivative is replaced in our problem by a pseudodifferential operator. Another specific feature of the analyzed problem is the two-sided character of the boundary condition, a consequence of the fact that, for $\alpha$ between 1 and 2, the corresponding process reaches the boundary making infinitely many visits to both the interior and exterior regions with respect to the boundary.
Citation Example: Osipchuk M. M., Portenko N. I. Symmetric α-stable stochastic process and the third initial-boundary-value problem for the corresponding pseudodifferential equation // Ukr. Mat. Zh. - 2017. - 69, № 10. - pp. 1406-1421.
https://www.physicsforums.com/threads/vector-identity.159829/
# Vector identity
## Homework Statement
I am to show: $\oint_S (\varphi \, \nabla \varphi) \times \hat{n}\, dS = 0$
## The Attempt at a Solution
Dick
Homework Helper
Is 'X' supposed to mean a cross product between the normal vector and the gradient?
Yes, it means that.
Dick
Try using $$\oint_S \hat{n} \times \mathbf{a}\, dS = \int_V \nabla \times \mathbf{a}\, dV.$$
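For completeness, a sketch of how the hint finishes the problem (this step is not part of the original thread): taking $\mathbf{a} = \varphi \nabla \varphi$ in the quoted identity gives
$$\oint_S (\varphi \nabla \varphi) \times \hat{n}\, dS = -\oint_S \hat{n} \times (\varphi \nabla \varphi)\, dS = -\int_V \nabla \times (\varphi \nabla \varphi)\, dV = -\int_V \nabla \varphi \times \nabla \varphi\, dV = 0,$$
since $\nabla \times (\varphi \nabla \varphi) = \nabla \varphi \times \nabla \varphi + \varphi\, \nabla \times (\nabla \varphi)$ and both terms vanish.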
https://pyqmri.readthedocs.io/en/latest/_modules/solver.html
# Solver¶
Module holding the classes for different numerical Optimizer.
class pyqmri.solver.CGSolver(par, NScan=1, trafo=1, SMS=0)
This class performs a CG reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj), a PyOpenCL queue (queue) and the complex coil sensitivities (C). NScan (int) – Number of scans which should be used internally. Does not need to be the same number as in par[“NScan”]. trafo (bool) – Switch between radial (1) and Cartesian (0) FFT. SMS (bool) – Simultaneous Multi Slice. Switch between normal (0) and slice-accelerated (1) reconstruction.
eval_fwd_kspace_cg(y, x, wait_for=None)
Apply forward operator for image reconstruction.
Parameters: y (PyOpenCL.Array) – The result of the computation. x (PyOpenCL.Array) – The input array. wait_for (list of PyOpenCL.Event, None) – A list of PyOpenCL events to wait for.
Returns: A PyOpenCL.Event to wait for. PyOpenCL.Event
run(data, iters=30, lambd=1e-05, tol=1e-08, guess=None, scan_offset=0)
Start the CG reconstruction.
All attributes after data are considered keyword only.
Parameters: data (numpy.array) – The complex k-space data which serves as the basis for the images. iters (int) – Maximum number of CG iterations. lambd (float) – Weighting parameter for the Tikhonov regularization. tol (float) – Termination criterion. If the energy decreases below this threshold the algorithm is terminated. guess (numpy.array) – An optional initial guess for the images. If None, zeros is used.
Returns: The result of the image reconstruction. numpy.Array
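A schematic usage sketch based only on the signatures documented above; it requires PyOpenCL and a configured compute device, and the loader helpers are hypothetical placeholders for whatever assembles the par dictionary and k-space data in your pipeline:

```python
# Sketch only: 'load_par' and 'load_kspace' are hypothetical helpers, not part
# of PyQMRI. 'par' must provide NSlice, NScan, dimX, dimY, NC, N, NProj, a
# PyOpenCL queue and the coil sensitivities C, as documented above.
from pyqmri.solver import CGSolver

par = load_par("example.h5")        # hypothetical: builds the documented dict
data = load_kspace("example.h5")    # hypothetical: complex64 k-space array

solver = CGSolver(par, NScan=1, trafo=1, SMS=0)
# Tikhonov-regularized CG with early stopping, using the documented defaults.
images = solver.run(data, iters=30, lambd=1e-05, tol=1e-08, guess=None)
```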
class pyqmri.solver.CGSolver_H1(prg, queue, par, irgn_par, coils, linops)
This class performs a CG reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj), a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given Gauss-Newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (PyQMRI Operator) – The operator to traverse from parameter to data space. coils (PyOpenCL Buffer or empty list) – Coil buffer, empty list if image based fitting is used.
power_iteration(x, num_simulations=50)
power_iteration_grad(x, num_simulations=50)
run(guess, data, iters=30)
Start the CG reconstruction.
All attributes after data are considered keyword only.
Parameters: guess (numpy.array) – An optional initial guess for the images. If None, zeros is used. data (numpy.array) – The complex k-space data which serves as the basis for the images. iters (int) – Maximum number of CG iterations.
Returns: The result of the fitting. dict of numpy.Array
setFvalInit(fval)
Set the initial value of the cost function.
Parameters: fval (float) – The initial cost of the optimization problem
updateRegPar(irgn_par)
Update the regularization parameters.
Performs an update of the regularization parameters as these usually vary from one to another Gauss-Newton step.
Parameters: irgn_par (dict) – A python dict containing the updated regularization parameters.
update_box(outp, inp, par, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the x variable in the Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
class pyqmri.solver.PDBaseSolver(par, irgn_par, queue, tau, fval, prg, coil, model, DTYPE=<class 'numpy.complex64'>, DTYPE_real=<class 'numpy.float32'>)
Primal Dual splitting optimization.
This class performs a primal-dual variable splitting based reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj), a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given Gauss-Newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimate of the initial step size based on the operator norm of the linear operator. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. reg_type (string) – String to choose between "TV" and "TGV" optimization. data_operator (PyQMRI Operator) – The operator to traverse from parameter to data space. coil (PyOpenCL Buffer or empty list) – Coil buffer, empty list if image based fitting is used. model (PyQMRI.Model) – Instance of a PyQMRI.Model to perform plotting.
delta
Regularization parameter for L2 penalty on linearization point.
Type: float
omega
Not used. Should be set to 0
Type: float
lambd
Regularization parameter in front of data fidelity term.
Type: float
tol
Relative tolerance to stop iterating.
Type: float
stag
Stagnation detection parameter
Type: float
display_iterations
Switch between plotting (true) of intermediate results
Type: bool
mu
Strong convexity parameter (inverse of delta).
Type: float
tau
Estimated step size based on operator norm of regularization.
Type: float
beta_line
Ratio between dual and primal step size
Type: float
theta_line
Line search parameter
Type: float
unknowns_TGV
Number of T(G)V unknowns
Type: int
unknowns_H1
Number of H1 unknowns (should be 0 for now)
Type: int
unknowns
Total number of unknowns (T(G)V+H1)
Type: int
num_dev
Total number of compute devices
Type: int
dz
Ratio between 3rd dimension and isotropic 1st and 2nd image dimension.
Type: float
model
The model which should be fitted
Type: PyQMRI.Model
modelgrad
The partial derivatives evaluated at the linearization point. This variable is set in the PyQMRI.irgn Class.
Type: PyOpenCL.Array or numpy.Array
min_const
list of minimal values, one for each unknown
Type: list of float
max_const
list of maximal values, one for each unknown
Type: list of float
real_const
List indicating whether an unknown is constrained to real values only (1 True, 0 False).
Type: list of int
static factory(prg, queue, par, irgn_par, init_fval, coils, linops, model, reg_type='TGV', SMS=False, streamed=False, imagespace=False, DTYPE=<class 'numpy.complex64'>, DTYPE_real=<class 'numpy.float32'>)
Generate a PDSolver object.
Parameters: prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. init_fval (float) – Estimate of the initial cost function value to scale the displayed values. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. linops (list of PyQMRI Operator) – The linear operators used for fitting. model (PyQMRI.Model) – The model which should be fitted reg_type (string, "TGV") – String to choose between “TV” and “TGV” optimization. SMS (bool, false) – Switch between standard (false) and SMS (True) fitting. streamed (bool, false) – Switch between streamed (1) and normal (0) reconstruction. imagespace (bool, false) – Switch between k-space (false) and imagespace based fitting (true). DTYPE (numpy.dtype, numpy.complex64) – Complex working precission. DTYPE_real (numpy.dtype, numpy.float32) – Real working precission.
power_iteration(x, data_shape, num_simulations=50)
run(inp, data, iters)
Optimization with 3D T(G)V regularization.
Parameters: x (numpy.array) – Initial guess for the unknown parameters. data (numpy.array) – The complex valued data to fit. iters (int) – Number of primal-dual iterations to run.
Returns: A tuple of all primal variables (x, v in the paper). If no streaming is used, the two entries are of class PyOpenCL.Array, otherwise numpy.Array. tuple
setFvalInit(fval)
Set the initial value of the cost function.
Parameters: fval (float) – The initial cost of the optimization problem
updateRegPar(irgn_par)
Update the regularization parameters.
Performs an update of the regularization parameters as these usually vary from one to another Gauss-Newton step.
Parameters: irgn_par (dict) – A python dict containing the updated regularization parameters.
update_Kyk2(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Precompute the v-part of the Adjoint Linear operator.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_primal(outp, inp, par, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the x variable in the Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_r(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Update the data dual variable r.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_v(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the v variable in Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z1(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z1 variable in Primal-Dual Algorithm for TGV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z1_tv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z1 variable in Primal-Dual Algorithm for TV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z2(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z2 variable in Primal-Dual Algorithm for TGV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
class pyqmri.solver.PDSoftSenseBaseSolver(par, pdsose_par, queue, tau, fval, prg, coils, DTYPE=<class 'numpy.complex64'>, DTYPE_real=<class 'numpy.float32'>)
Primal Dual Soft-SENSE optimization.
This class performs a primal-dual algorithm for solving a Soft-SENSE reconstruction on complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj), a PyOpenCL queue (queue) and the complex coil sensitivities (C). pdsose_par (dict) – A python dict containing the required parameters for the regularized Soft-SENSE reconstruction. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimate of the initial step size based on the operator norm of the linear operator. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. coil (PyOpenCL Buffer or empty list) – The coils used for reconstruction.
lambd
Regularization parameter in front of data fidelity term.
Type: float
tol
Relative tolerance to stop iterating.
Type: float
stag
Stagnation detection parameter
Type: float
adaptive_stepsize
Type: bool
tau
Estimated step size based on operator norm of regularization.
Type: float
unknowns_TGV
Number of T(G)V unknowns
Type: int
unknowns
Total number of unknowns –> Reflects the number of cmaps for Soft-SENSE
Type: int
num_dev
Total number of compute devices
Type: int
dz
Ratio between 3rd dimension and isotropic 1st and 2nd image dimension.
Type: float
extrapolate_v(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Extrapolation step of the v variable in the Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
extrapolate_x(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Extrapolation step of the x variable in the Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
static factory(prg, queue, par, pdsose_par, init_fval, coils, linops, reg_type='TGV', streamed=False, DTYPE=<class 'numpy.complex64'>, DTYPE_real=<class 'numpy.float32'>)
Generate a PDSoftSenseSolver object.
Parameters: prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). pdsose_par (dict) – A python dict containing the parameters for the regularized Soft-SENSE reconstruction init_fval (float) – Estimate of the initial cost function value to scale the displayed values. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. linops (list of PyQMRI Operator) – The linear operators used for fitting. reg_type (string, "TGV") – String to choose between “TV” and “TGV” optimization. streamed (bool, false) – Switch between streamed (1) and normal (0) reconstruction. DTYPE (numpy.dtype, numpy.complex64) – Complex working precission. DTYPE_real (numpy.dtype, numpy.float32) – Real working precission.
run(inp, data, iters)
Optimization with 3D T(G)V regularization.
Parameters: inp (numpy.array) – Initial guess for the reconstruction. data (numpy.array) – The complex valued (undersampled) k-space data. iters (int) – Number of primal-dual iterations to run.
Returns: Primal variable x. If no streaming is used, the entries are of class PyOpenCL.Array, otherwise numpy.Array. tuple
set_fval_init(fval)
Set the initial value of the cost function.
Parameters: fval (float) – The initial cost of the optimization problem
set_sigma(sigma)
Set the step size for the dual update.
Parameters: sigma (float) – Step size for the dual update
set_tau(tau)
Set the step size for the primal update.
Parameters: tau (float) – Step size for the primal update
update_Kyk2(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Precompute the v-part of the Adjoint Linear operator.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_v(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the v variable in Primal-Dual Algorithm for TGV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_x(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the x variable in the Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_x_tgv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the x variable in the Primal-Dual Algorithm for TGV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_y(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Update the data dual variable y.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z1_tgv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z1 variable in Primal-Dual Algorithm for TGV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z2_tgv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z2 variable in Primal-Dual Algorithm for TGV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z_tv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z variable in Primal-Dual Algorithm for TV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
class pyqmri.solver.PDSoftSenseBaseSolverStreamed(par, pdsose_par, queue, tau, fval, prg, coils, **kwargs)
Streamed version of the PD Soft-SENSE Solver.
This class is the base class for the streamed array optimization.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). pdsose_par (dict) – A python dict containing the required parameters for the regularized Soft-SENSE reconstruction. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction.
unknown_shape
Size of the unknown array
Type: tuple of int
grad_shape
Size of the finite difference based gradient
Type: tuple of int
symgrad_shape
Size of the finite difference based symmetrized gradient. Defaults to None in TV based optimization.
Type: tuple of int, None
data_shape
Size of the data to be fitted
Type: tuple of int
data_trans_axes
Order of transpose of data axis, required for streaming
Type: list of int
data_shape_T
Size of transposed data.
Type: tuple of int
class pyqmri.solver.PDSoftSenseSolverStreamedTGV(par, ss_par, queue, tau, fval, prg, linop, coils, **kwargs)
Streamed PD Soft-SENSE TGV version.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). pdsose_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction.
alpha_0
alpha0 parameter for TGV regularization weight
Type: float
alpha_1
alpha1 parameter for TGV regularization weight
Type: float
symgrad_shape
Type: tuple of int
class pyqmri.solver.PDSoftSenseSolverStreamedTV(par, ss_par, queue, tau, fval, prg, linop, coils, **kwargs)
Streamed PD Soft-SENSE TV version.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). pdsose_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction.
class pyqmri.solver.PDSoftSenseSolverTGV(par, pdsose_par, queue, tau, fval, prg, linop, coils, **kwargs)
Primal Dual splitting optimization for TGV.
This class performs a primal-dual variable splitting based reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). pdsose_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction.
alpha
TV regularization weight
Type: float
class pyqmri.solver.PDSoftSenseSolverTV(par, pdsose_par, queue, tau, fval, prg, linop, coils, **kwargs)
Primal Dual splitting optimization for TV.
This class performs a primal-dual variable splitting based reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). pdsose_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction.
class pyqmri.solver.PDSolverICTGV(par, irgn_par, queue, tau, fval, prg, linop, coils, model, **kwargs)
Primal Dual splitting optimization for IC-TGV.
This class performs a primal-dual variable splitting based reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted
alpha
TV regularization weight
Type: float
update_primal(outp, inp, par, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the x variable in the Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z1_ictgv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z1 variable in Primal-Dual Algorithm for TV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z2_ictgv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z2 variable in Primal-Dual Algorithm for ICTGV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z_sympart(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z variable for the symmetrized gradient in Primal-Dual Algorithm for TV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
class pyqmri.solver.PDSolverICTV(par, irgn_par, queue, tau, fval, prg, linop, coils, model, **kwargs)
Primal Dual splitting optimization for IC-TV.
This class performs a primal-dual variable splitting based reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted
alpha
TV regularization weight
Type: float
update_primal(outp, inp, par, idx=0, idxq=0, bound_cond=0, wait_for=None)
Primal update of the x variable in the Primal-Dual Algorithm.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z1_ictv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z1 variable in Primal-Dual Algorithm for TV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
update_z2_ictv(outp, inp, par=None, idx=0, idxq=0, bound_cond=0, wait_for=None)
Dual update of the z2 variable in Primal-Dual Algorithm for ICTV.
Parameters: outp (PyOpenCL.Array) – The result of the update step inp (PyOpenCL.Array) – The previous values of x par (list) – List of necessary parameters for the update idx (int) – Index of the device to use idxq (int) – Index of the queue to use bound_cond (int) – Apply boundary condition (1) or not (0). wait_for (list of PyOpenCL.Events, None) – A optional list for PyOpenCL.Events to wait for A PyOpenCL.Event to wait for. PyOpenCL.Event
class pyqmri.solver.PDSolverStreamed(par, irgn_par, queue, tau, fval, prg, coils, model, imagespace=False, **kwargs)
Streamed version of the PD Solver.
This class is the base class for the streamed array optimization.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted imagespace (bool, false) – Switch between imagespace (True) and k-space (false) based fitting.
unknown_shape
Size of the unknown array
Type: tuple of int
model_deriv_shape
Size of the partial derivative array of the unknowns
Type: tuple of int
grad_shape
Size of the finite difference based gradient
Type: tuple of int
symgrad_shape
Size of the finite difference based symmetrized gradient. Defaults to None in TV based optimization.
Type: tuple of int, None
data_shape
Size of the data to be fitted
Type: tuple of int
data_trans_axes
Order of transpose of data axis, required for streaming
Type: list of int
data_shape_T
Size of transposed data.
Type: tuple of int
class pyqmri.solver.PDSolverStreamedTGV(par, irgn_par, queue, tau, fval, prg, linop, coils, model, imagespace=False, SMS=False, **kwargs)
Streamed TGV optimization.
This class performs streamed TGV optimization.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted imagespace (bool, false) – Switch between imagespace (True) and k-space (false) based fitting. SMS (bool, false) – Switch between SMS (True) and standard (false) reconstruction.
alpha
alpha0 parameter for TGV regularization weight
Type: float
beta
alpha1 parameter for TGV regularization weight
Type: float
symgrad_shape
Type: tuple of int
class pyqmri.solver.PDSolverStreamedTGVSMS(par, irgn_par, queue, tau, fval, prg, linop, coils, model, imagespace=False, **kwargs)
Streamed TGV optimization for SMS data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted imagespace (bool, false) – Switch between imagespace (True) and k-space (false) based fitting. SMS (bool, false) – Switch between SMS (True) and standard (false) reconstruction.
alpha
alpha0 parameter for TGV regularization weight
Type: float
beta
alpha1 parameter for TGV regularization weight
Type: float
symgrad_shape
Type: tuple of int
class pyqmri.solver.PDSolverStreamedTV(par, irgn_par, queue, tau, fval, prg, linop, coils, model, imagespace=False, SMS=False, **kwargs)
Streamed TV optimization.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted imagespace (bool, false) – Switch between imagespace (True) and k-space (false) based fitting. SMS (bool, false) – Switch between SMS (True) and standard (false) reconstruction.
alpha
alpha0 parameter for TGV regularization weight
Type: float
symgrad_shape
Type: tuple of int
class pyqmri.solver.PDSolverStreamedTVSMS(par, irgn_par, queue, tau, fval, prg, linop, coils, model, imagespace=False, **kwargs)
Streamed TV optimization for SMS data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted imagespace (bool, false) – Switch between imagespace (True) and k-space (false) based fitting.
alpha
alpha0 parameter for TGV regularization weight
Type: float
symgrad_shape
Type: tuple of int
class pyqmri.solver.PDSolverTGV(par, irgn_par, queue, tau, fval, prg, linop, coils, model, **kwargs)
TGV Primal Dual splitting optimization.
This class performs a primal-dual variable splitting based reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted
alpha
alpha0 parameter for TGV regularization weight
Type: float
beta
alpha1 parameter for TGV regularization weight
Type: float
class pyqmri.solver.PDSolverTV(par, irgn_par, queue, tau, fval, prg, linop, coils, model, **kwargs)
Primal Dual splitting optimization for TV.
This class performs a primal-dual variable splitting based reconstruction on single precision complex input data.
Parameters: par (dict) – A python dict containing the necessary information to setup the object. Needs to contain the number of slices (NSlice), number of scans (NScan), image dimensions (dimX, dimY), number of coils (NC), sampling points (N) and read outs (NProj) a PyOpenCL queue (queue) and the complex coil sensitivities (C). irgn_par (dict) – A python dict containing the regularization parameters for a given gauss newton step. queue (list of PyOpenCL.Queues) – A list of PyOpenCL queues to perform the optimization. tau (float) – Estimated step size based on operator norm of regularization. fval (float) – Estimate of the initial cost function value to scale the displayed values. prg (PyOpenCL.Program) – A PyOpenCL Program containing the kernels for optimization. linops (list of PyQMRI Operator) – The linear operators used for fitting. coils (PyOpenCL Buffer or empty list) – The coils used for reconstruction. model (PyQMRI.Model) – The model which should be fitted
alpha
TV regularization weight
Type: float
http://math.iisc.ac.in/seminars/2020/2020-03-11-thomas-richard.html
#### Geometry & Topology Seminar
##### Venue: LH-1, Mathematics Department
Let $(M,g)$ be a Riemannian manifold and ‘$c$’ be some homology class of $M$. The systole of $c$ is the minimum of the $k$-volume over all possible representatives of $c$. We will combine recent works of Gromov and Zhu to show an upper bound for the systole of $S^2 \times \{*\}$ under the assumption that $S^2 \times \{*\}$ contains two representatives which are far enough from each other.
http://tex.stackexchange.com/questions/299108/overwriting-fill-patterns-in-tikz
# Overwriting fill patterns in TikZ
A circle lies within a square. The square has a fill pattern, say A, and the circle has a fill pattern, say B. I wish the two patterns not to overlap. In case this is unclear, here is an MWE:
\documentclass[10pt,class=memoir]{standalone}
\usepackage[cmyk,dvipsnames,svgnames]{xcolor}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[pattern=dots, pattern color=green] (0, 0) rectangle (4, 4);
%\fill[white] (2, 2) circle[radius=1];
\draw[pattern=bricks, pattern color=brown] (2, 2) circle[radius=1];
\end{tikzpicture}
\end{document}
If I uncomment the line with \fill[white] I get the result I want. But this seems an inelegant hack. Is there a better way of achieving the same result?
Thanks.
-
If you don't want the white fill then you can punch the rectangle and repeat the path (and use the even odd rule if complex paths are used). If you have a really complicated path then use layers and send them to different layers.
\documentclass[tikz]{standalone}
\usetikzlibrary{patterns}
\begin{document}
\begin{tikzpicture}
\fill[yellow!20] (-1,-1) rectangle (5,5);
\draw[pattern=dots, pattern color=green] (0, 0) rectangle (4, 4) (2, 2) circle[radius=1];
\path[pattern=bricks,pattern color=brown] (2,2) circle (1);
\end{tikzpicture}
\end{document}
-
It's possible to apply the inner pattern as a postaction over a previously white-filled circle.
\documentclass[10pt,class=memoir]{standalone}
\usepackage[cmyk,dvipsnames,svgnames]{xcolor}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[pattern=dots, pattern color=green] (0, 0) rectangle (4, 4);
\draw[fill=white, postaction={pattern color=brown, pattern=bricks}] (2, 2) circle[radius=1];
\end{tikzpicture}
\end{document}
-
If care is taken over the path directions (which is a nuisance in this case) then the different filling rules can be exploited (at least for PDF output) and it can be done in one path with various post actions:
\documentclass[tikz,border=5]{standalone}
\usetikzlibrary{patterns}
\begin{document}
\tikz\draw [pattern=bricks, pattern color=brown, nonzero rule,
postaction={fill=white, even odd rule,
postaction={pattern=dots, pattern color=green}}]
(0, 0) rectangle (4, 4) (3, 2) arc (360:0:1);
\end{document}
-
You can specify a preaction that fills the circle. Pretty much the same hack as you suggested but one line of code less.
Use:
\documentclass[10pt,class=memoir]{standalone}
\usepackage[cmyk,dvipsnames,svgnames]{xcolor}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
\draw[pattern=dots, pattern color=green] (0, 0) rectangle (4, 4);
\draw[preaction={fill=white},pattern=bricks, pattern color=brown] (2, 2) circle[radius=1];
\end{tikzpicture}
\end{document}
Tikz then fills the circle before the brick pattern is drawn. Without preaction, tikz obviously draws the patterns last.
Result: (rendered output image omitted)
Same as Ignasi's answer but the other way around.
-
You can also clip the rectangle so that the circle is excluded:
\documentclass{article}
\usepackage[cmyk,dvipsnames,svgnames]{xcolor}
\usepackage{tikz}
\usetikzlibrary{patterns}
\begin{document}
\begin{tikzpicture}
\begin{scope}
\clip[insert path={(0, 0) rectangle (4, 4)}] (2, 2) circle[radius=1];
\draw[pattern=dots, pattern color=red] (0, 0) rectangle (4, 4);
\end{scope}
\draw[pattern=bricks, pattern color=brown] (2, 2) circle[radius=1];
\end{tikzpicture}
\end{document}
-
http://ibpsexamguide.org/quantitative-aptitude/quantitative-aptitude-course/number-system-simplification/exercise-3-10.html
# Exercise : 3
1. What should come in the place of the question mark(?) in the following equation?
69012 – 20167 + (51246 ÷ 6) = ?
(a) 57385
(b) 57286
(c) 57476
(d) 57368
(e) None of these
#### View Ans & Explanation
Ans.e
? = 48845 + (51246 ÷ 6) = 48845 + 8541 = 57386
2. What approximate value should come in the place of question mark(?) in the following equation?
98.98 ÷ 11.03 + 7.014 × 15.99 = (?)²
(a) 131
(b) 144
(c) 12
(d) 121
(e) 11
#### View Ans & Explanation
Ans.e
98.98 ÷ 11.03 + 7.014 × 15.99 = (?)²
Suppose ? = x
Then x² ≈ 99 ÷ 11 + 7 × 16 = 9 + 112 = 121
(taking approximate value)
∴ x = 11
3. What approximate value should come in place of the question mark (?) in the following equation?
39.05 × 14.95 – 27.99 × 10.12 = (36 + ?) × 5
(a) 22
(b) 29
(c) 34
(d) 32
(e) 25
#### View Ans & Explanation
Ans.e
Solve using approximation: 39.05 × 14.95 – 27.99 × 10.12 ≈ 39 × 15 – 28 × 10 = 585 – 280 = 305. So (36 + ?) × 5 ≈ 305, i.e. 36 + ? ≈ 61, giving ? ≈ 25.
4. What approximate value will come in place of the question mark(?) in the following equation?
2070.50 ÷ 15.004 + 39.001 × (4.999)² = ?
(a) 1005
(b) 997
(c) 1049
(d) 1213
(e) 1113
#### View Ans & Explanation
Ans.e
2070.50 ÷ 15.004 + 39.001 × (4.999)² = ?
or, ? ≈ 2070 ÷ 15 + 39 × 5 × 5
= 138 + 975 = 1113
5. What should come in the place of the question mark(?) in the following equation?
$\frac{45^{2} \times 27^{2}}{135^{2}} = ?$
(a) 81
(b) 1
(c) 243
(d) 9
(e) None of these
#### View Ans & Explanation
Ans.a
? = $\frac{45 \times 45 \times 27 \times 27}{135 \times 135} = 81$
6. What should come in place of the question mark(?) in the following equation?
$4\tfrac{1}{2} \times 4\tfrac{1}{3} - 8\tfrac{1}{3} \div 5\tfrac{2}{3} = ?$
(a) 8
(b) $18\tfrac{1}{34}$
(c) $13\tfrac{3}{34}$
(d) $7\tfrac{1}{7}$
(e) None of these
#### View Ans & Explanation
Ans.b
? = $\frac{9}{2} \times \frac{13}{3} - \frac{25}{3} \times \frac{3}{17}$
= $\frac{39}{2} - \frac{25}{17} = \frac{663 - 50}{34} = \frac{613}{34} = 18\tfrac{1}{34}$
7. What approximate value should come in place of the question mark (?) in the following equation?
85.147 + 34.912 × 6.2 + ? = 802.293
(a) 400
(b) 450
(c) 550
(d) 600
(e) 500
#### View Ans & Explanation
Ans.e
85.147 + 34.912 × 6.2 + ? = 802.293
or, ? = 802.293 – 85.147 – 34.912 × 6.2
≈ 800 – 85 – 35 × 6 ≈ 500
8. What should come in place of the question mark (?) in the following equation?
9548 + 7314 = 8362 + ?
(a) 8230
(b) 8500
(c) 8410
(d) 8600
(e) None of these
#### View Ans & Explanation
Ans.b
9548 + 7314 = 8362 + ?
or, ? = 9548 + 7314 – 8362 = 8500
9. What approximate value should come in place of the question mark (?) in the following equation?
248.251 ÷ 12.62 × 20.52 = ?
(a) 400
(b) 450
(c) 600
(d) 350
(e) 375
#### View Ans & Explanation
Ans.a
248.251 ÷ 12.62 × 20.52 = ?
or, ? ≈ 240 ÷ 12 × 20 = 20 × 20 = 400
10. What approximate value should come in place of the question mark (?) in the following question?
6.595 × 1084 + 2568.34 – 1708.34 = ?
(a) 6,000
(b) 12,000
(c) 10,000
(d) 8,000
(e) 9,000
#### View Ans & Explanation
Ans.d
? ≈ 6.6 × 1080 + 2560 – 1700 ≈ 7128 + 860 ≈ 8000
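Not part of the original exercise set, but readers who want to sanity-check the approximation technique can evaluate the exact expressions and compare them with the approximate answers above. The short Python snippet below is an illustrative addition (the expressions are transcribed by hand from the questions, so treat it as a sketch rather than as source material).
```python
# Compare the exact values of a few questions with the approximate answers above.
exercises = {
    "Q2 (answer 11, i.e. about 121 before the square root)": 98.98 / 11.03 + 7.014 * 15.99,
    "Q4 (approx. answer 1113)": 2070.50 / 15.004 + 39.001 * 4.999 ** 2,
    "Q9 (approx. answer 400)": 248.251 / 12.62 * 20.52,
    "Q10 (approx. answer 8000)": 6.595 * 1084 + 2568.34 - 1708.34,
}

for label, exact in exercises.items():
    print(f"{label}: exact value = {exact:.2f}")
```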
https://www.physicsforums.com/threads/why-do-quantum-fluctuations-need-inflation.847470/
# Why do Quantum Fluctuations need Inflation?
1. Dec 9, 2015
### Dr. Strange
The general logic of Inflation is that some field popped into existence just long enough to flatten out the universe, then disappeared again. Before the field, the universe had tiny fluctuations in the plasma. Inflation blew these up from the size of an atom to the size of a grapefruit (If I understand the scale correctly). What I don't understand is: why was inflation needed? Why wouldn't these fluctuations grow in a normally expanding Big Bang scenario?
2. Dec 9, 2015
### Staff: Mentor
No. The inflaton field does not "pop into existence and disappear". It is always present; all that changes is its state. Most of the inflation models under consideration today, as I understand it, are "eternal inflation" models, in which the inflaton field has a large energy density into the infinite past; our universe is simply a bubble in which the field underwent a phase transition from a "false vacuum" state with a large energy density to a "true vacuum" state with a very small (or zero) energy density; the transition transfers the original large energy density to ordinary matter and radiation, which are left in a hot, dense, rapidly expanding state (the "Big Bang").
3. Dec 9, 2015
### Dr. Strange
Thank you for the clarification. It would be great if you could answer the original question, though.
4. Dec 9, 2015
### Staff: Mentor
No. The fluctuations were in the inflaton field itself. During inflation, there was no "plasma"; the Standard Model quantum fields (the ones associated with ordinary matter and radiation) were all in vacuum states.
They would, but not fast enough to be correlated in all parts of the sky we observe today. Without inflation, we would expect the temperature we observe in, for example, the CMB, to vary much more from one part of sky to another than it actually does, because the fluctuations in different parts of the sky we observe would be uncorrelated. The fact that the temperature varies so little (about 1 part in 100,000 for the CMB) is evidence that every part of the sky we see must have been causally connected in the past. A normally expanding Big Bang can't produce causal connections over that wide a range in only 13.7 billion years.
5. Dec 9, 2015
### Dr. Strange
Let me restate this to see if I understand. Without Inflation, the fluctuations in the field (whatever field we'd have without Inflation) would have grown to cosmic proportions, but because there was no way to exchange information across horizons, we would expect them to grow independently and thus have a much larger ΔT/T range than what we see today in the CMB. Is that the general idea? Inflation explains why all the anisotropies are so uniform across the sky, not how the actual fluctuations grew to cosmic proportions?
6. Dec 9, 2015
### Staff: Mentor
Actually, inflation is needed to an extent for the latter as well--at least, for the fluctuations to grow into what we see today in only 13.7 billion years. Ordinary expansion without inflation would not have magnified them to the same extent in that time; it needs a "jump start" of some period of inflation.
7. Dec 11, 2015
### Dr. Strange
As I understand it, as long as you have dark matter and the overdensities represented by the CMB, you can get the matter distribution we see in the universe today. I'm not sure I understand how inflation affects anything after the CMB. Could you explain in more detail?
8. Dec 11, 2015
### George Jones
Staff Emeritus
What Peter wrote does not contradict this. From where do the anisotropies in the CMB come? According to recent grad/research level texts, inflation, via quantum fluctuations, gives the most plausible mechanism for the generation of perturbations.
9. Dec 11, 2015
### Staff: Mentor
As long as you have dark matter and the overdensities represented by the CMB, with the magnitudes they had at the end of inflation. But a Big Bang model without inflation won't produce those magnitudes in the short time that would be required--at least, that's our current best understanding.
10. Dec 11, 2015
### Dr. Strange
Yes, thank you. I'm trying to zero in on what problems, exactly, are solved by Inflation. That's why I needed the clarification. So let me try it again. You seem to be saying that tiny fluctuations in a normal quantum field would grow into densities that we see in the CMB (with much larger variations because they were not in causal contact), but it would have taken (guessing here) billions of years. Inflation solves the problem in that those fluctuations grow to the proper size (that we see in the CMB) in roughly 300,000 years.
11. Dec 11, 2015
### Staff: Mentor
Not quite. From the end of inflation to the time of CMB production, roughly 300,000 years, we have normal (non-inflationary) expansion, so it is the same in both models (with and without inflation). The problem is the state at the start of that period--i.e., at the time referred to, in inflationary models, as "the end of inflation". At that time, we already have quantum fluctuations magnified to a certain point--enough to produce, after roughly 300,000 years of non-inflationary expansion, the fluctuations in the CMB. But without inflation, it would not be possible to get quantum fluctuations magnified to that point in such a short time. More precisely: in a non-inflationary model, the universe would only take about $10^{-32}$ seconds to get from the Planck scale to the temperature it was at the "end of inflation" time; but a non-inflationary model cannot magnify quantum fluctuations in such a short time to the magnitude they must have been at that point in time in order to produce the necessary fluctuations in the CMB 300,000 years later.
12. Dec 13, 2015
### Dr. Strange
I'm trying to visualize this, but something is giving me trouble. At the end of inflation, were all the points of space in causal contact with each other?
13. Dec 13, 2015
### Staff: Mentor
Not quite, because according to our best current models, the universe is spatially infinite. But what we now see as our observable universe was all in causal contact at the end of inflation--more than that, it was all in thermal equilibrium at the end of inflation, except for the quantum fluctuations that had been magnified by inflation.
14. Dec 14, 2015
### Dr. Strange
Were all parts of the universe in causal contact before inflation?
15. Dec 14, 2015
### Staff: Mentor
There might not have been any "before inflation"; in the "eternal inflation" models (which IIRC are the current front-runners), inflation extends into the infinite past, from the standpoint of our universe.
In models where there is a "before inflation", no, all parts of the universe (by which I assume you mean the observable universe--i.e., the region which ultimately expanded into what is our observable universe today) were not in causal contact before inflation.
https://devzone.nordicsemi.com/blogs/845/segger-embedded-studio-blog-post-rev2-deprecated/
Posted 2016-02-04 12:51:53 +0100
# see new blog post for new version of embedded studio here:
https://devzone.nordicsemi.com/blogs/1032/segger-embedded-studio-a-cross-platform-ide-w-no-c/
# Introduction
This post is an introductory tutorial to SEGGER Embedded Studio. If you haven't already please skim through this post https://devzone.nordicsemi.com/blogs/825/segger-embedded-studio-cross-platform-ide-w-no-cod/ but don't follow along with it. This is the new and improved tutorial using the nRF device pack and assumes no prior knowledge on Embedded Studio.
After following this tutorial you will be able to build, debug and run a BLE project on nRF5x devices! You will also be able to load a softdevice (and any other file of your choice, i.e. a bootloader) to your board upon loading your application.
# Edit - New Release!
We just released V2.16 of Embedded Studio, which includes some improvements based on your / blog readers' feedback:
• Target -> Erase All enabled
• Properties -> Debugger -> Debugger Options -> Start From Entry Point Symbol added to disable setting the PC after reset
• Fixed project property dialog forgetting previously modified properties on cancel.
• Imported projects from uvproj(x) files now use the TargetName.
• Fixed reading of XML files with a UTF-8 byte order mark.
# Setting up Embedded Studio
Open SEGGER Embedded Studio V2.14. If this is the first time you will see the Dashboard Welcome Screen. If you've already been using Embedded Studio make sure you close all open solutions. The first thing we need to do is install the CMSIS and nRF Device packages. Go to "Tools -> Packages -> Manually Install Packages" and select nRF.emPackage.
Now in "Tools -> Package Manager..." update the packages by clicking the "Refresh package list" button in the upper right hand corner. Select the "CMSIS-Core Support Package" by clicking it and then click "next" (right lower corner) and follow the instructions to download and install this package. Now make sure the CMSIS-Core Support Package and the Nordic Semiconductor nRF CPU Support Package are both installed.
# Importing a Keil uVision Project
Embedded Studio allows you to import Keil and IAR projects. In this tutorial we will import the ble_app_hrs_s132_pca10040 example (if you are using an nRF51 Series device or the nRF52 Preview DK import the corresponding uVision project file and follow along - minor differences will be highlighted along the way). This feature makes Embedded Studio plug & play with our SDK. Go to "File -> Open IAR EWARM / Keil MDK Project" and select '\nRF5_SDK...\examples\ble_peripheral\ble_app_hrs\pca10040\s132\arm5_no_packs\ble_app_hrs_s132_pca10040.uvprojx'. Make sure you select nRF_EXE as the template for this project.
Our project will get its structure from the Keil uVision project we imported. All the source files, preprocessor definitions, user include directories and many project settings will be carried over from Keil. The nRF_EXE template (from our nRF device package) configures the rest of our project settings (memory map, flash section placement, etc...) and adds the system/startup files to our project.
At this point we are almost done! But we need to make some minor adjustments to compile our project.
1. In the "Project Explorer" notice that two Projects have been imported. This is because in Keil, Nordic SDK examples that use the SoftDevice include a dummy project specifically for flashing the SoftDevice directly from Keil. Since we can automatically flash the SoftDevice with our application in Embedded Studio we can delete this project. Remove the project 'flash_s130_nrf51_2.0.0-7.alpha_softdevice' or similar from Embedded Studio so you are left with only the real project. WARNING: it looks like Embedded Studio imports this dummy SoftDevice flash project as the default project. After importing a project from Keil it is likely this is the active (bolded) project.
1. Remove the "Source Files -> nRF_Segger_RTT" folder. Embedded Studio automatically includes RTT files as you can see and we want to use these (they are more up to date and correct for Embedded Studio).
2. Open "retarget.c" and comment out lines 28 and 29 (// FILE stdout; // FILE stdin;). If you don't you will get a compiler error saying 'storage size of '__stdout' isn't known.' You can even remove "retarget.c" entirely from the project as it is not needed. For more information on why see: https://devzone.nordicsemi.com/question/29200/retargeting/.
At this point your project should compile with no errors or warnings (but it won't run correctly)! Build it and make sure!
# The Softdevice
As with every BLE project we will need to reserve some FLASH and RAM for the softdevice. To do this we need to set the Section Placement Macros used by "flash_placement.xml". This file tells our linker where to put the different parts of our application in FLASH and RAM. Go to "Project Properties -> Linker -> Linker Options" and in "Section Placement Macros" set FLASH_START=0x1b000 RAM_START=0x20001f00 (see screenshot below) or whatever these addresses should be for your specific softdevice.
Now we may need to edit the memory map for our device (most likely if you are using the nRF52 Preview DK). Go to "Edit Memory Map" (right above Import Section Placement) and double check that this is correct for your device. If you are using the nRF52 Preview DK change the size of RAM to 0x8000.
Now we want Embedded Studio to automatically program the softdevice to our board along with our application whenever we run/debug. To do this go to "Project Properties -> Debugger -> Loader Options" and set "Additional Load File[0]" to the full path of your softdevice.
Now rebuild your project. Everything should compile without warnings or errors and you should notice that your application is leaving some space for the softdevice!
At this point you should be able to run and debug this application. Embedded Studio will program the softdevice and then the application. Your program should run until the first instruction in main().
However, you will hard fault. We have a few more fixes to do...
# Important Fixes
1. Increase the stack size. The stack should be at least 1024 bytes but let's make it 2048 for this project. By default the SDK does not use heap but let's just leave the heap at 256 bytes. Do this by right-clicking the project and changing "Main stack size" as seen in the screenshot below.
2. Add NO_VTOR_CONFIG to the "Preprocessor Definitions."
3. In Properties -> Debugger -> Debugger Options -> Start From Entry Point Symbol, set to 'No.' This is because we should enter our SoftDevice's ResetHandler(), not our applications. For more information, see RK's comment below.
Rebuild and run the application. Everything should work!
# Notes
• It seems that Embedded Studio can lose project settings (and this includes the memory map and flash placement section). We are working with them to fix this bug. It seems to happen occasionally when cleaning the project. Just beware of this, and if you start to have weird problems you should take a look at the project settings to see that they are as expected (preprocessor definitions, stack size, softdevice in file loader, memory map modifications, flash placement modifications).
• You may need to change "Target Connection" from "Simulator" to "J-Link." This is done in project properties. If you get weird error messages when trying to run/debug your program check this.
# Future
Thanks to all the great feedback on the last post about SEGGER Embedded Studio we decided to make this post! Note that there are still little workarounds we have to do that will be fixed very shortly. We are working with SEGGER to make Embedded Studio even easier to use with nRF5x devices. Please continue to give us feedback so we can keep improving! Nordic customers using OS X/Linux are no longer second class citizens and your feedback is driving this initiative!
Please experiment around with Embedded Studio. Tell us what you like, what you don't like, what you want to see and why you need this to be fully supported in the future!
Posted Feb. 4, 2016, 1:54 p.m.
Hi Michael!
Thanks for the great tutorial. Newcomers (that haven't read your earlier tutorial) may initially miss how to import the flash_placement.xml, since there is no reference to the menu item until later ;-)
I got it to work (I think), the advertisement-indication LED1 blinks. I couldn't get this working following the old tutorial.
For now I've only tried with the PCA10040.
[EDIT:] I got it working, my Android Wear version of the nRF Toolbox found the Nordic HRM (!!). On my Android device I had to bond with the HRM in the normal Bluetooth settings. After that I could find it in the nRF Toolbox app. w00t! :)
Best, Henrik
Posted Feb. 4, 2016, 2:07 p.m.
Henrik: that was fast, great to hear you got everything working! Thanks for the input, made it more clear. Good luck with Embedded Studio!
Posted Feb. 4, 2016, 2:12 p.m.
Note: Blog post was just updated with new nRF device package from SEGGER with some bug fixes. (No need for changing any interrupts in the vector table).
Posted Feb. 4, 2016, 4:38 p.m.
Hi Michael, I tried your tutorial and it worked so far for the SDK project. When I try to import my uVision project (also based on the SDK 11 alpha2) SES just does nothing (no error message / no warning). Is there a log file somewhere I can check to see what happened? BR, Adrian
SOLUTION: Found the problem! Our project file had some spaces in it. After removing the spaces the import worked as expected.
Posted Feb. 5, 2016, 2:56 a.m.
Nice post! But I still don't know why the "Target-->Erase all" is always gray. Can I use SES to erase the flash? If so, how can I make it work, not gray?
Posted Feb. 5, 2016, 8:53 a.m.
Bee: Currently this feature isn't implemented for nRF5x devices. It is on SEGGER's TODO list. I expect that in their next release this command will be available.
Currently when Embedded Studio loads a file to your board, it erases the addresses that this file has data in. So if you program the softdevice it will erase addresses 0x0-0x1B000 and then write the softdevice. And then when you program your application it will erase 0x1B000-END_ADDRESS (for example).
My recommendation is to erase all with nrfjprog, JLink.exe or the tool of your choice until this feature is available. But note that you don't need to be doing this every time you re-flash your board (unless you are writing to flash in your application) but I would recommend doing this every once in a while or when you run into a bug.
Posted Feb. 5, 2016, 3 p.m.
Bee: if you're on OSX I can warmly recommend RKNRFGO: http://sourceforge.net/projects/rknrfgo/
Posted Feb. 5, 2016, 7:26 p.m.
Hi, Michael, this is great progress since the first blog on using SES. I tried the blinky project and had no problem to run it, needing absolutely no changes after the SES import; this is very good so far. However, when I tried the blinky_rtx example, it still did not like the difference in rtx (M3 vs M4F?) (The example runs fine with Keil.) I then tried the blinky_freertos -- it was compiling until it hit "port.c" and then started complaining about syntax. (Again, this ran fine in Keil.) So, it looks like good progress but still some work to do. Thanks to you and the Segger team for getting this tool ready for general use.
Posted Feb. 6, 2016, 7:44 a.m.
Great tutorial update.
You don't need to import and modify the flash_placement.xml file if you don't want to; the default one has macros in it which can be replaced with the actual values.
To do that you set the Section Placement Macros property with
FLASH_START=0x1b000
RAM_START=0x20001f00
and it will substitute them into the default file and 'just work'.
I see when running that the Registers window shows only generic Cortex M4 registers. That seems to be because the default XML definition file doesn't have much in it. I pointed it to the SVD file in the SDK and changed the type to SVD but that didn't work. I have my own nRF52 register file I made for Crossworks, I pointed it to that, that does work. With that you get all the registers, all the bitfields, everything, so you might want to talk to Segger about including a full XML file in the released support package.
Apart from that .. it's working pretty well.
Posted Feb. 6, 2016, 4:29 p.m.
Mike: Thanks! I will look into these problems this week and get back to you. Have you ever tried compiling/running these projects with GCC? If that works OK we shouldn't have a problem getting it working with Embedded Studio.
RK: Thanks for the great feedback! I updated the blog post to use this method as it is much simpler. Ok I will relay this to SEGGER, I was wondering about this myself and tried the SVD file with no luck as well.
Posted Feb. 7, 2016, 2:44 a.m.
I figured out what the SVD problem is, it's something I reported to Crossworks a while ago. If there's a Byte Order Mark at the start of the file the SVD reader can't read it. They had another bug too to do with register clusters but that's been fixed I see, even though the issue is still open.
If you trim the first 3 bytes off the SVD file (EF BB BF) with a hex editor, it should work, does for me. That instantly enables all the register information and is very useful.
The nrf51.svd file doesn't have the byte order mark so that one works out of the box. Possibly the simplest solution is for Nordic to start distributing that file without the BOM, it's not really needed, that file's UTF-8 anyway.
Posted Feb. 10, 2016, 1:49 a.m.
Michael: Doing what you described resulted in success for me. In fact, I found that I didn't even need to do a 'build and run'. If I just did 'build and debug', stop the debug session, hit the reset button on the board, and then 'build and debug' again, it worked.
RK: Trying what you suggested also resulted in success for me on the first 'build and debug' try. No reset button necessary!
I made sure to erase everything in nRFGo Studio between iterations so I knew I was working with a clean slate. I was also able to switch between working and not working by either not using the reset button in Michael's case or setting the 'Entry Point Symbol' field back to blank in RK's case, so these both appear to be 100% repeatable (erasing between attempts of course).
Once I got the debug session to work with the beacon app properly, pausing the debug session and restarting was still resulting in a fault being generated, and me having to stop the session and restart. But, I think I read in one of these posts that BLE and debug don't play well together on the nRF51 anyway due to loss of a clock or context, so that's probably what's going on. I think that post mentioned that was one of the advantages of the nRF52, was that you could keep the BLE stack satisfied even when pausing a debug session, correct?
Thanks for the help!
Posted Feb. 10, 2016, 10:16 a.m.
Hey Edward,
I actually just saw this issue yesterday when I was debugging an application on an nRF51. I need to talk to SEGGER about it, but what I found was similar to what you have said. The thing is if I do 'build and run' and then hit the reset button on the board the application runs as expected. Then from there if I go 'build and debug' the application debugs as expected (doesn't hang at the SVCALL). Can you try this out and let me know what you see? (note: I haven't tried this in Release mode yet).
I will bring this issue up with SEGGER - I think it may have to do with the non standard SWD interface on the nRF51 but really not sure, its a weird one..
Posted Feb. 10, 2016, 2 p.m.
That sounds as if the softdevice init code isn't being run, ie you're not starting from the real reset handler down at 0x00000004 but starting right at the reset handler in your application code. When you hit the reset button it starts at the right place and initialises the SD correctly.
Do you want to try testing that by setting the property 'Entry Point Symbol' which is most likely empty to 'nonexistent' (yes really the string with the word nonexistent in it, or any load of garbage which doesn't actually exist). I don't know if that will help, but it's worth trying. If it does help, I'll explain why, if it doesn't, there's no point explaining why :)
... since Edward tried it and says it works .. Crossworks, from which SES is derived, when it runs your code, doesn't just reset the chip and let it go, well it can, but it doesn't always. It pulls the address of a start symbol out of the ELF binary, stops the chip, sets the PC to the address of it and then lets it run again. Certainly in Crossworks' case the default for that symbol if not specified is reset_handler, but it's reset_handler in the binary it just built, your binary, not reset_handler in the softdevice, so the softdevice doesn't get to run its short setup code and the first call into it crashes. Setting the symbol to something which really doesn't exist forces a start from reset.
I suspect it works in a release binary because it's probably stripped off enough symbols that reset_handler can't be found and it works if you hit the reset button because .. well that's obvious right.
My home grown nrf package for Crossworks sets that symbol to nonexistent automatically because I fell over this one quite a few times.
Posted Feb. 10, 2016, 8:30 p.m.
Not sure how, but I ended up editing (thus overwriting) my first post containing my original question. Sorry about that.
Posted Feb. 12, 2016, 5:12 p.m.
I suggest changing the project settings directly in the "Release" and "Debug" configurations; the lost settings, at least for me, seemed to happen when I changed them as "Common". Of course one should ignore the warning when saving properties.
Posted Feb. 16, 2016, 3:58 p.m.
Guys, please see the edit in the blog post. SEGGER has added pretty much all features that have been requested and fixed all bugs that have been reported in their new release of Embedded Studio. I'd recommend updating it!
Posted Feb. 18, 2016, 5:12 a.m.
Hi, Michael,
I've installed 2.16, thanks for getting things moving and updated so quickly.
I'm trying the IOT SDK 0.9.0 with the coap_server_observe_pca10040 example. It runs fine with Keil. However, with SES, I'm getting a linker error:
C:/Program Files (x86)/SEGGER/SEGGER Embedded Studio 2.16/gcc/arm-none-eabi/bin/ld: Output/Debug/Exe/iot_ipv6_coap_server_observe_pca10040.elf section `i.ble_6lowpan_init' will not fit in region `UNPLACED_SECTIONS'
C:/Program Files (x86)/SEGGER/SEGGER Embedded Studio 2.16/gcc/arm-none-eabi/bin/ld: region `UNPLACED_SECTIONS' overflowed by 4449 bytes
In the SES forums there is mention of this, but I couldn't find a solution. Have you seen this problem?
Thanks, Mike
Update1:
I added ble_6lowpan.a to the linker options and now it builds and runs ok. This will be very useful going forward, since my IOT programs were severely limited by the 32K size in the Keil eval version.
Update 2: I just downloaded and installed SES 2.16a. I ran it on the original example and it worked perfectly the first time, no manual tweaking. Very nice job by the Segger team.
Posted Feb. 18, 2016, 2:32 p.m.
This is a great post, thanks!
I am wondering if there is anything like Keil packs planned in SES? I am using packs with Keil and they make things much easier - something similar for SES would definitely convince me to move to SES with all my projects (right now I am using Keil, and Eclipse for bigger ones. Eclipse also lacks pack functionality, which is a bit painful compared to Keil). But a very nice IDE anyway :)
Posted Feb. 22, 2016, 11:42 a.m.
Note: With the 2.16 update of Embedded Studio there are some changes when importing Keil projects. Important: See the Adjustments section above which has been modified. This has to do with two Keil projects being imported into Embedded Studio. It explains that this is because of the dummy SoftDevice flash project in Keil that comes with our examples. If you are using Embedded Studio since 2.14 you need to read this update.
Also updated the Important Fixes section above. Mention Properties -> Debugger -> Debugger Options -> Start From Entry Point Symbol added to disable setting the PC after reset. You need to set this to 'No' when debugging applications using the softdevice. See RK's comment above for more info...
Posted Feb. 23, 2016, 2:56 p.m.
Hi Michael,
I've upgraded to SES 2.16a. I need to use both Central and Peripheral, and I'm using the nRF52 DK (not preview).
I've followed the updated guidelines above in every detail, and imported project:
nRF5_SDK_11.0.0-2.alpha_bc3f6a0/examples/ble_central_and_peripheral/experimental/ble_app_hrs_rscs_relay/pca10040/s132/arm5_no_packs/ble_app_hrs_rscs_relay_pca10040.uvprojx
But when I build the project I get the following error:
Linking nrf52832_xxaa_s132.elf
Output/Debug/Exe/nrf52832_xxaa_s132.elf section `fs_data' will not fit in region `UNPLACED_SECTIONS'
region `UNPLACED_SECTIONS' overflowed by 17 bytes
Any ideas?
Best, Henrik
Posted Feb. 23, 2016, 3:42 p.m.
Henrik: See this post https://devzone.nordicsemi.com/question/68722/segger-embedded-studio-unplaced_sections-problem/. I haven't looked into this problem yet but seems like its the same as your having so maybe it will help.
Posted Feb. 23, 2016, 5:10 p.m.
Cool, thanks Michael. After adding the following to the imported flash_placement.xml as the last element inside the <MemorySegment name="$(FLASH_NAME:FLASH)"> element, it builds perfectly! :) <ProgramSection alignment="4" load="Yes" name="fs_data" address_symbol="__start_fs_data" end_symbol="__stop_fs_data" />
Posted Feb. 27, 2016, 9:08 p.m.
I tried this tutorial but with the PCA10028 and of course S130, otherwise I used all the same settings. Whenever I try to Build and Debug the program is immediately stopped with "Stopped by vector catch" since device_manager_peripheral.c:2544 catches an exception in the function dm_ble_evt_handler. Seems like something is missing in the settings somehow? When building the same with plain GCC and GNU make from the command line while flashing it with JLinkExe I do not get this error.
Posted Feb. 27, 2016, 9:28 p.m.
Found a solution for the "Stopped by vector catch" problem. Apparently I was using the wrong flash and RAM starts. Setting the Section Placement Macros to FLASH_START=0x1b000 RAM_START=0x20001f00 based on the linker file included in the armgcc directory worked for me. Note that I was using S130 with PCA10028 and SDK v11-alpha2. I did not change the flash_placement.xml. I believe the start values for flash and RAM there should remain what they are and not be the same as the section placement macros, i.e. not factor in the SoftDevice. Correct me if I am wrong, but that is what I used and it worked for me.
Posted Feb. 29, 2016, 6:40 a.m.
Visit for great iOS android source code . you can also sell your app templates or code here " mobile app development and keep working on freedom251
Posted Feb. 29, 2016, 9:30 a.m.
Raphael: I agree with you, flash_placement.xml should not be changed and the section placement macros should be set as you did. I thought this was how I described it in the tutorial - this is important, so any suggestions on how I can clarify this in the tutorial?
Posted March 2, 2016, 5:56 a.m.
Hi, thank you for your great tutorial. I have a problem. I followed your tutorial: on the Adjustments section, I imported the ble_app_hrs_s130_with_dfu_pca10028 example in SDK 10.0.0, removed one project, and commented out stdout and stdin from retarget.c. Then when I clicked the build command I got the error in the attachment. How do I solve it? I tested the ble_app_hrs_s130_pca10028 example and the error never happened.
Posted March 2, 2016, 11:26 a.m.
I've seen this problem as well, I think it is common to GCC? In Project Properties -> C/C++ -> Compiler Options -> Additional C Compiler Only Options you need to specify the flag '-fomit-frame-pointer'. This is only needed when you have an application with the DFU service as you have.
Posted March 3, 2016, 2:15 a.m.
@Michael Dietz Hi, I've got a question. I imported the Keil project and compiled it, then set the address and other attributes. Finally, I tried to download flash to the DK board but it never connected. Embedded Studio hasn't found the DK board. Which part should I check to connect to the DK board?
Posted March 3, 2016, 9:33 a.m.
Just connect the DK as you normally would (usb cable). When you plug in your board to your computer (JLINK (E:)) should come up as a USB storage device. (Note: our boards are shipped with JLink firmware but did you put a different firmware on the board such as mbed or CMSIS DAP?) You will have to use JLink firmware on the on board debugger if you want to use SEGGER Embedded Studio. Select Target->Connect JLink.
Posted March 4, 2016, 1:39 a.m.
I imported a project that I've been working on in Keil MDK into Segger, followed the steps and all seems to go well except that the linker barfs on the arm_math library (CMSIS-DSP). I get this: undefined reference to `arm_negate_q31', undefined reference to `arm_float_to_q31', undefined reference to `arm_fir_interpolate_init_q31'. I did install the CMSIS-DSP support pack and went back to verify that it is really installed. The same project builds with no problems in MDK. I also tried copying the preprocessor defines over (which didn't seem to come through in the project import) and that didn't fix it either. help?
Posted March 4, 2016, 9:22 a.m.
Hey, if the preprocessor defines didn't come through in the project import then you are using the wrong projects (2 are imported because of Keil's dummy softdevice flasher project). Please read the 'Adjustments' section above and follow those steps. As for your problem I'm not sure. Try fixing the above and see if that helps. It is just compiling with GCC so if this works with GCC it should work with Embedded Studio.
Posted March 4, 2016, 10:37 p.m.
I'm not using a softdevice though, I have a project that just implements a proprietary point-to-point protocol so I didn't think that stuff applied in that case. When I comment out all the calls to the arm_math library then the project builds fine.
Posted March 9, 2016, 3:41 a.m.
Have you seen the error like below?
Posted March 9, 2016, 3:52 a.m.
Have you seen the error like below?
Posted March 16, 2016, 3:56 p.m.
Is there a possibility in SES to use newlib-nano instead of the standard library? While developing in Keil I used microlib, in Eclipse with GCC I used newlib-nano (--specs=nano.specs). I am not very experienced with these linker and compiler settings and I have no idea how to make it work in SES.
Posted March 28, 2016, 9:07 p.m.
I have followed the tutorial, everything is working fine except when I try to connect to the 10028 board I get a "Can not connect to J-Link via USB." error. I can connect fine using Keil, so I know it is not the board. I am running a Linux Ubuntu system. I plug in the board and it shows up as the file system, so I know that I am connecting. Is there some project option or something I am missing?? Garret
Posted April 20, 2016, 7:50 p.m.
Great article, thank you! I had to make a minor tweak to get the HRS example working for my touch test env, which was:
• board: nrf52 PCA10040 (not preview DK)
• SDK: nRF5_SDK_11.0.0_89a8197
• SoftDevice s132
• SES V2.16a
• macOS 10.11
Go to "Project Properties -> Linker -> Linker Options" and in "Section Placement Macros" set FLASH_START=0x1C000 RAM_START=0x20002080. These values came from the armgcc build settings of the same HRS project, namely examples/ble_peripheral/ble_app_hrs/pca10040/s132/armgcc/ble_app_hrs_gcc_nrf52.ld. This is anecdotal, but I had better success seeing the debugger stop in main() and the project advertise after starting from scratch a second time and using 'MenuBar' -> Target -> Connect J-Link before doing a rebuild and run or debug. I only mention it because that was something I didn't do first time around, so just in case that did some magic; it may of course have been my fat fingers the first time that were another problem. :) All the best, Wayne
Posted April 23, 2016, 1:22 a.m.
Thank you for the tutorial! I wasn't able to get this to work on using the following. My error is this...
Programming failed @ address 0x00000000 (block verification error)
Verification failed @ address 0x00000000
Failed to download application.
Error during verification phase. Please check J-Link and target connection.
My target connection is J-Link... Any assistance would be much appreciated!!
Posted May 16, 2016, 8:12 p.m.
Thanks for the info on this development option. Do you know what kind of restrictions there are with respect to the free license? Eval only? Non-profit? etc.?
Posted May 17, 2016, 9:08 a.m.
After installation and importing the SDK sample, I cannot find the linker when I open the project option page. Any idea? Kan.
Posted May 20, 2016, 5:24 p.m.
Hi and thanks for a great article! I am trying to run the hrs example on PCA100028 with the s130 softdevice and was able to perform all the steps without errors, and the board seems to be successfully programmed. But while the debugger shows the program running, the nRF51 status LEDs stay always on, and I can't connect to the board - nRF Toolbox doesn't find any device. I double-checked all the steps in the tutorial, and even started from a vanilla SDK11.0.0, with the same result. I'd be grateful for any ideas what might be wrong. Borut
Posted May 23, 2016, 3:47 p.m.
EDIT: Solved by changing ram start in linker options to 0x20002080.
Posted June 15, 2016, 12:43 p.m.
No luck, just getting four always-on LEDs on my nRF52 DK :( I've set FLASH_START=0x1C000 and RAM_START=0x20001f00 to the values shown in the s13x_nrf5x_2.0.0_migration_document.pdf. Heap/Stack is set to 256/2048 Bytes and the memory map says FLASH start="0x00000000" size="0x00080000" and RAM start="0x20000000" size="0x8000". Memory usage looks like this: There must've been something I've missed >_>
Posted June 15, 2016, 9:58 p.m.
Hi, I've been trying to run the proximity example without success. The example runs on PCA100028, but hard-faults immediately after connect. Double-checked everything. Unfortunately I can't run the same example through Keil as the code exceeds the 32k limit. Any help much appreciated. Borut
Posted June 16, 2016, 12:14 a.m.
Hi, I followed along with the directions with the ble_app_hrs but for some reason I get a "Cannot download multiple load files because they overlap". The only additional load file is the softdevice. Anyone else have this problem? Thanks
Posted July 6, 2016, 1:20 p.m.
I am having the same issue as alhay49. After following the directions meticulously, the ble_app_hrs project builds fine but when trying to debug - I get the "Cannot download multiple load files because they overlap" error.
• board: nrf52 PCA10040 (not preview DK)
• SDK: nRF5_SDK_11.0.0_89a8197
• SoftDevice s132
• SES V2.20
• Windows 10
UPDATE: I managed to solve this issue by following Wayne's suggestion: "Go to "Project Properties -> Linker -> Linker Options" and in "Section Placement Macros" set FLASH_START=0x1C000 RAM_START=0x20002080. These values came from the armgcc build settings of the same HRS project, namely examples/ble_peripheral/ble_app_hrs/pca10040/s132/armgcc/ble_app_hrs_gcc_nrf52.ld" Thanks Wayne!
Posted Aug. 18, 2016, 8:32 a.m.
I have a Keil project and want to migrate it to SEGGER Embedded Studio. I followed the instructions shown above. However, when I tried "Build" -> "Build and Run", an error occurred. The error says my memory settings are,
<!DOCTYPE Board_Memory_Definition_File> <root name="nRF51822_xxAC"> <MemorySegment name="FLASH" start="0x18000" size="0x28000" access="ReadOnly" /> <MemorySegment name="RAM" start="0x20002000" size="0x6000" access="Read/Write" /> </root>
These are the same as in the Keil project setting shown below. Could you please give me any clue to solve this issue? Thanks in advance.
Posted Sept. 8, 2016, 3:21 p.m.
Has anyone tried using SES V3 to compile NRF5 SDK Examples? I tried using this tutorial with SES V3.10a and there's no "Project Template Chooser" window during import. The imported project is missing CMSIS files and the build fails because it can't find "nrf_gpio.h" (and probably other files past that). I posted about this on the Segger forum.
Posted Sept. 15, 2016, 5:28 p.m.
Did anyone get it to work with the Preview DK, SDK12.0.0, and SES 3.10a? I'm close but I get no memory errors when initializing the stack and softdevice.
Posted Sept. 24, 2016, 2:30 p.m.
It seems there are some new changes in v3 of the Segger Embedded Studio so that this walk-through is no longer valid for the most recent version of SES. I am experiencing the same problem as embedded-creations. The problem seems to be that SES does not set some include paths for the preprocessor. I am including them manually now and will keep you posted about the progress. EDIT: got it compiling, see my blogpost https://devzone.nordicsemi.com/blogs/1020/
Posted Oct. 12, 2016, 8:55 p.m.
I was curious to know if there is a guide/tutorial available yet for creating a brand new nRF5x project with SES without exporting an existing project from Keil or IAR. I work primarily on Mac and Linux and I am interested in learning how to create SES projects from scratch and adding the necessary libraries as needed, as opposed to working from a converted example. Thanks!
Posted May 27, 2017, 1:31 a.m.
@dave75 "I imported a project that I've been working on in Keil MDK into Segger, followed the steps and all seems to go well except that the linker barfs on the arm_math library (CMSIS-DSP). I get this: undefined reference to `arm_negate_q31', undefined reference to `arm_float_to_q31', undefined reference to `arm_fir_interpolate_init_q31'"
I went through similar trouble as you did with the Linker. But you basically need to state the corresponding math library file explicitly, as far as I know, so the linker knows it exists. Go to Project -> Edit Options... -> Linker (under Code) -> Additional Input Files and include: $(PackagesDir)/CMSIS_4/CMSIS/Lib/GCC/libarm_cortexXXlf_math.a
https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/4193
## Computing exact D-optimal designs by mixed integer second order cone programming
Please always quote using this URN: urn:nbn:de:0297-zib-41932
• Let the design of an experiment be represented by an $s$-dimensional vector $\vec{w}$ of weights with non-negative components. Let the quality of $\vec{w}$ for the estimation of the parameters of the statistical model be measured by the criterion of $D$-optimality defined as the $m$-th root of the determinant of the information matrix $M(\vec{w})=\sum_{i=1}^s w_iA_iA_i^T$, where $A_i$, $i=1,...,s$, are known matrices with $m$ rows. In the paper, we show that the criterion of $D$-optimality is second-order cone representable. As a result, the method of second order cone programming can be used to compute an approximate $D$-optimal design with any system of linear constraints on the vector of weights. More importantly, the proposed characterization allows us to compute an \emph{exact} $D$-optimal design, which is possible thanks to high-quality branch-and-cut solvers specialized to solve mixed integer second order cone problems. We prove that some other widely used criteria are also second order cone representable, for instance the criteria of $A$-, and $G$-optimality, as well as the criteria of $D_K$- and $A_K$-optimality, which are extensions of $D$-, and $A$-optimality used in the case when only a specific system of linear combinations of parameters is of interest. We present several numerical examples demonstrating the efficiency and universality of the proposed method. We show that in many cases the mixed integer second order cone programming approach allows us to find a provably optimal exact design, while the standard heuristics systematically miss the optimum.
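The exact designs in the abstract above rely on the paper's second-order cone representation together with a mixed integer solver. As a much lighter point of reference, the relaxed (approximate) D-optimal problem with rank-one information matrices can be written directly as a convex program. The sketch below is an illustration using CVXPY with random data, not code from the paper; it solves only the approximate problem with continuous weights, not the exact integer-constrained one.
```python
# Approximate D-optimal design: maximize log det of the information matrix
# M(w) = sum_i w_i a_i a_i^T over weights w >= 0 summing to one.
# Illustrative sketch only; the paper's exact designs need a MISOCP solver.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
s, m = 30, 4                        # candidate design points, model parameters
A = rng.standard_normal((s, m))     # row i is the regression vector a_i

w = cp.Variable(s, nonneg=True)
M = sum(w[i] * np.outer(A[i], A[i]) for i in range(s))

prob = cp.Problem(cp.Maximize(cp.log_det(M)), [cp.sum(w) == 1])
prob.solve()
print("approximate D-optimal weights:", np.round(w.value, 3))
```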
http://math.stackexchange.com/questions/168316/calculating-conditional-entropy-given-two-random-variables
# Calculating conditional entropy given two random variables
I have been reading a bit about conditional entropy, joint entropy, etc but I found this: $H(X|Y,Z)$ which seems to imply the entropy associated to $X$ given $Y$ and $Z$ (although I'm not sure how to describe it). Is it the amount of uncertainty of $X$ given that I know $Y$ and $Z$? Anyway, I'd like to know how to calculate it. I thought this expression means the following:
$$H(X|Y,Z) = -\sum p(x,y,z)log_{2}p(x|y,z)$$
and assuming that $p(x|y,z)$ means $\displaystyle \frac{p(x,y,z)}{p(y)p(z)}$, then \begin{align} p(x|y,z)&=\displaystyle \frac{p(x,y,z)}{p(x,y)p(z)}\frac{p(x,y)}{p(y)}\\&=\displaystyle \frac{p(x,y,z)}{p(x,y)p(z)}p(x|y) \\&=\displaystyle \frac{p(x,y,z)}{p(x,y)p(x,z)}\frac{p(x,z)}{p(z)}p(x|y)\\&=\displaystyle \frac{p(x,y,z)}{p(x,y)p(x,z)}p(x|z)p(x|y) \end{align} but that doesn't really help.
Basically I wanted to get a nice identity such as $H(X|Y)=H(X,Y)-H(Y)$ for the case of two random variables.
Any help?
Thanks
-
$$H(X\mid Y,Z)=H(X,Y,Z)-H(Y,Z)=H(X,Y,Z)-H(Y\mid Z)-H(Z)$$ Edit: Since $\log p(x\mid y,z)=\log p(x,y,z)-\log p(y,z)$, $$H(X\mid Y,Z)=-\sum\limits_{x,y,z}p(x,y,z)\log p(x,y,z)+\sum\limits_{y,z}\left(\sum\limits_{x}p(x,y,z)\right)\cdot\log p(y,z).$$ Each sum between parenthesis being $p(y,z)$, this proves the first identity above.
-
Can you show me how to manipulate $p(x|y,z)$ to get that identity? I can see why it's reasonable to get $H(X,Y,Z)-H(Y,Z)$ from $H(X|Y) = H(X,Y)-H(Y)$ but I'd like to know how to work out that from probability distributions. – Robert Smith Jul 8 '12 at 22:54
See Edit. This uses nothing but the definition, really. – Did Jul 9 '12 at 5:58
Yes, entropy is often referred to as "uncertainty", so $H(X|Y)$ can be thought of as your uncertainty about $X$, given that you know $Y$. If it's zero, then we would say that knowing $Y$ tells us "everything" about $X$, and so on.
It might be easier to think in terms of just two variables, although your basic idea is right. You can see wikipedia for more explicit calculations.
-
Thanks. Yes, I think I understand the conditional entropy correctly, however, I find it a bit awkward with two "conditional variables", though. What about my calculation? Unfortunately, Wikipedia didn't help a lot because it doesn't provide $H(X|Y,Z)$. – Robert Smith Jul 8 '12 at 19:55
@Robert: As did said, you can use the chain rule on that wikipedia page to change $H(X|Y,Z)$ into an expression involving only $H(Y|Z)$ – Xodarap Jul 8 '12 at 22:19
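A small numerical check of the identity $H(X\mid Y,Z)=H(X,Y,Z)-H(Y,Z)$ may help; the snippet below is an addition, not part of the thread, and simply compares the defining sum with the difference of joint entropies on a random joint distribution.
```python
# Check H(X|Y,Z) = H(X,Y,Z) - H(Y,Z) numerically.
import numpy as np

rng = np.random.default_rng(1)
p = rng.random((2, 3, 2))        # joint pmf p(x, y, z)
p /= p.sum()

def entropy(q):
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

p_yz = p.sum(axis=0)             # marginal p(y, z)

# Definition: -sum_{x,y,z} p(x,y,z) log2 p(x|y,z), with p(x|y,z) = p(x,y,z)/p(y,z)
direct = -np.sum(p * np.log2(p / p_yz))
via_identity = entropy(p) - entropy(p_yz)

print(direct, via_identity)      # the two values agree
```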
https://blog.bossylobster.com/mathematics
# Mathematics
For as long as I can remember, I have had a love of learning math and problem solving. In 7th grade, I participated in my first math competition (MathCounts) and the rest is history.
From August 2013 to August 2018, I spent my time taking classes, teaching and doing research as a graduate student at UC Berkeley. (See my post "Graduated" for a retrospective.) This page is mostly intended as a place to gather some things I made during that time.
### Code and Writing:
• Write-up and some code for generating the "optimal" finite difference stencils to compute a given derivative (November 2013)
• Discussion of spectral norm as dual to the nuclear norm, in the sense of linear programming. Part of presentation of a paper during student seminar (November 2014)
• IPython notebook about GMRES (October 2015)
• Discussion of parametric curves. In particular, how to classify cubics and how implicitization helps with conversion from a parametric curve to an algebraic curve. (April 2016)
• Math 273 topics course on numerical analysis (GitHub, course ran during Spring 2016)
• bezier library (GitHub, published in JOSS in August 2017)
• foreign-fortran project (GitHub, started in Summer 2017, added to readthedocs.org in August 2018)
### Talks:
• Butterfly Algorithm for Geometric Non-uniform FFT (slides, GitHub, given February 2015)
• Paper presentation on WENO survey (slides, given February 2016 in Math 273 topics course)
• Paper presentation on supermeshing / conservative interpolation (slides, given April 2016 in Math 273 topics course)
• Thesis talk (slides, given August 2018)
https://thedailywtf.com/articles/comments/A_Constant_Barrage
• (cs)
In fact, now that I think about it, maybe it's useful after all ... if the rate of the Earth's spin were ever to change, Daniel's coworker would only have to change PAUSE_BY_6_SECONDS from "6" to "6.23" ...
Application.PauseSeconds := PAUSE_BY_6_SECONDS;
The earths rate of rotation changes on a regular basis so the joke's on you!
[;)]
• (cs)
:O Dude. W.T.F.F.
These look like the work of people who never understood the purpose of constants yet still somehow managed to remember their CS101 Prof's admonition that "literals in code are bad, use constants to define literals and use those in code." Thus, instead of meaningful constants such as SIZE_OF_HEADER or BUFFER_SIZE you get INT_512.
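For illustration, a small hypothetical sketch (Python here, though the submissions are in other languages) of the difference between literal-named and intent-named constants:

```python
# Literal-named "constants" just restate the value and explain nothing:
INT_512 = 512

# Intent-named constants document why the value exists and can change safely:
HEADER_SIZE_BYTES = 512
BUFFER_SIZE_BYTES = 4 * HEADER_SIZE_BYTES

def read_header(packet: bytes) -> bytes:
    # The slice reads as "take the header", not "take 512 of something".
    return packet[:HEADER_SIZE_BYTES]
```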
• LP (unregistered)
You know, I was just checking about:config in my firefox, and I noticed general.config.obscure_value as well
I found the explanation:
The general.config.obscure_value preference specifies how the configuration file is obscured. Firefox expects that each byte in the file will be rotated by the specified value. The default value is 13. If this value is left unchanged, then the configuration file must be encoded as ROT13. Autoconfig will fail if the cfg file is not encoded as specified by this preference. A value of 0 indicates that the file is unencoded-- i.e. it is unobscured plain text. It is recommended that you set this value to 0. (This will allow you to skip the encoding step in part 3.)
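For what it's worth, a tiny sketch of the byte-rotation scheme that description talks about (hypothetical Python, not Mozilla's actual implementation):

```python
def obscure(data: bytes, offset: int = 13) -> bytes:
    # Rotate every byte forward by the configured offset (0 = plain text).
    return bytes((b + offset) % 256 for b in data)

def unobscure(data: bytes, offset: int = 13) -> bytes:
    # Rotate back by the same offset to recover the original bytes.
    return bytes((b - offset) % 256 for b in data)

assert unobscure(obscure(b'user_pref("foo", 1);')) == b'user_pref("foo", 1);'
```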
• (cs) in reply to dubwai
dubwai:
The earths rate of rotation changes on a regular basis so the joke's on you!
[;)]
Yep, that should be a function that varies with time of year, speed of solar winds, the alignment of the planets, and the number of people walking east versus the number of people walking west.
[:)]
• (cs)
Constants aren't, variables don't.
• (cs)
Hey, the good news is that at least the above code examples avoid the Magic Number Anti-pattern!
• Mirandir (unregistered)
One funny thing I just noticed is that Josh Buhler's coworker only seems to master "not" and "larger than" operators:
if(!(val.indexOf(" ")>0) || !(val.length>TWO)) {
(val.indexOf(" ") == -1 || val.length < TWO)
And second half of above if-statement in reprise:
if(!(val.length>TWO))
And he probably would have used constants if there were any (it looks very much like Flash ActionScript 2.0).
/Mirandir
• (cs) in reply to dubwai
dubwai:
The earths rate of rotation changes on a regular basis so the joke's on you!
[;)]
I suppose calling it something like "duration of earth's rotation" would be too sensible...?
• (cs) in reply to Mirandir
Mirandir:
One funny thing I just noticed is that Josh Buhler's coworker only seems to master "not" and "larger than" operators:
er..um... It seems you haven't even gotten that far.....
if(!(val.indexOf(" ")>0) || !(val.length>TWO)) {
(val.indexOf(" ") == -1 || val.length < TWO)
Make that....
(val.indexOf(" ") == -1 || val.length <= TWO)
(I should have fixed the first half of that as well, but we know what JB's Coworker really meant...)
• Mark Staggs (unregistered)
And this isn't even counting a submission I made a while back that has these defines in a C header file.
#define FIELD_WITH_LENGTH_FIFTY 50
#define FIELD_WITH_LENGTH_FORTY_SEVE 47
#define FIELD_WITH_LENGTH_FORTY 40
#define FIELD_WITH_LENGTH_THIRTY 30
#define FIELD_WITH_LENGTH_THIRTY_ONE 31
#define FIELD_WITH_LENGTH_TWENTY_FIVE 25
#define FIELD_WITH_LENGTH_TWENTY 20
#define FIELD_WITH_LENGTH_THIRTEEN 13
#define FIELD_WITH_LENGTH_TWELVE 12
#define FIELD_WITH_LENGTH_TEN 10
#define FIELD_WITH_LENGTH_NINE 9
#define FIELD_WITH_LENGTH_EIGHT 8
#define FIELD_WITH_LENGTH_SEVEN 7
#define FIELD_WITH_LENGTH_SIX 6
#define FIELD_WITH_LENGTH_FIVE 5
#define FIELD_WITH_LENGTH_FOUR 4
#define FIELD_WITH_LENGTH_THREE 3
#define FIELD_WITH_LENGTH_TWO 2
#define SINGLETON_FIELD_LENGTH 1
• (cs)
Next is from Kyle, who found the root cause behind the failure of long-running daemons ...
TODAY = today()
YESTERDAY = yesterday()
Am I missing something here? I can see how it's a dumb thing to do, but I don't see the connection between that and the failure of long-running daemons.
• (cs) in reply to JamesCurran
If that code isn't executed every day at midnight, the values will only be right for a certain amount of time, no? But then again, 24 hours or possibly 12 hours is not what I would call long-running for a daemon....
And WTF if this code is executed at 23:59:59'99? TODAY, YESTERDAY, LASTWEEK and LASTMONTH (stupid capslockitis) might all be wrong then.
• (cs) in reply to JamesCurran
JamesCurran:
Next is from Kyle, who found the root cause behind the failure of long-running daemons ...
TODAY = today()
YESTERDAY = yesterday()
Am I missing something here? I can see how it's a dumb thing to do, but I don't see the connection between that and the failure of long-running daemons.
I can only imagine that since TODAY is a constant, two weeks later TODAY still says the same thing as it did when the app was started. So maybe they are comparing TODAY with TOMORROW and determining whether to exit.
• (cs) in reply to dubwai
It might also lead to unwanted effects when the year changes, but I don't know what format today() returns its information in, i.e. whether the year is included.
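For illustration, here's a minimal sketch (Python, with hypothetical names — the submitted code's language isn't shown) of why a date cached in a constant goes stale in a long-running process, and the boring fix:

```python
import datetime

# Anti-pattern: evaluated once when the daemon starts, never again.
TODAY = datetime.date.today()

def report_is_due_broken() -> bool:
    # Still compares against the start-up date weeks later.
    return TODAY.day == 1

def report_is_due() -> bool:
    # Re-evaluates the date on every call, so it stays correct.
    return datetime.date.today().day == 1
```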
• (cs)
if (totalGlue > EIGHT) totalGlue = EIGHT;
// buried in a header file ...
#define EIGHT 16
LOL that made my day, I have nothing to say but WTF!
• diaphanein (unregistered) in reply to johnl
johnl:
dubwai:
The earths rate of rotation changes on a regular basis so the joke's on you!
[;)]
I suppose calling it something like "duration of earth's rotation" would be too sensible...?
Rate makes much more sense. Duration implies that it starts and ends.... Would be awfully strange if the earth just upped and stopped rotating after 24 hours. [:D]
• (cs) in reply to diaphanein
Anonymous:
Rate makes much more sense. Duration implies that it starts and ends.... Would be awfully strange if the earth just upped and stopped rotating after 24 hours. [:D]
Time would stop! [:D]
Haven't you ever seen Superman?
• (cs) in reply to mizhi
mizhi:
Anonymous:
Rate makes much more sense. Duration implies that it starts and ends.... Would be awfully strange if the earth just upped and stopped rotating after 24 hours. [:D]
Haven't you ever seen Superman?
Unfortunately his code never references an IGyroscope, so he cannot detect if the earth's rotation rate is changing.
Also, anybody who previously said the Earth's rotation is static was incorrect ( http://www.iers.org/iers/earth/rotation/ ), it does change slightly according to the distance of planets & moons in our immediate area. Of course going with an average rotation is pretty reliable, I mean, it's not like his code is going to automate landing a probe from earth and it MUST land within a 6 inch square on Mars and he has no visual cues to go by. Wait I take that back, he might have been a coder on previous Mars lander projects.
• (cs)
I am reminded of an early FORTRAN system I worked on that would allow you to change the values of constants by passing a constant to a function which changed the value of the parameter. So you could give the constant 3 a value of 5, so that wherever else 3 was used it would use 5, and the expression 3 + 2 would calculate to 7.
Then that fancy virtual memory came around and people started putting the values of numbers in protected pages, and it took all the fun out of it.
• (cs)
private const int INT_ZERO = 0;
private const int INT_ONE = 1;
private const int INT_TWO = 2;
private const int INT_THREE = 3;
'...
private const int INT_TWENTY_SEVEN = 27;
private const int INT_TWENTY_SEVEN = 28;
On the ZX81 home computer, using the built-in BASIC, it was a common coding practice to do something similar; e.g.
10 LET I=1
20 LET O=0
because numeric literals required 5 extra bytes of memory for each occurrence (for speed reasons, the system stored the float value along with the readable representation) and memory was very limited, 1KB in the basic model. So using these "constants" saved 5 bytes each time; the declaration of both costs est. 24 bytes, so it pays off soon.
Next is from Kyle, who found the root cause behind the failure of long-running daemons ...
TODAY = today()
YESTERDAY = yesterday()
WEEKAGO = weekago()
LASTMONTH = lastmonth()
Hard to say if this is a WTF without knowing the background; sometimes similar code is the right way to avoid a date switch-over during processing which would cause bad effects. Of course, your program should run into these lines at least once per day ;-)
• AC (unregistered) in reply to loneprogrammer
loneprogrammer:
Constants aren't, variables don't.
So true, thanks for the laugh
• Anonymous (unregistered) in reply to AC
I ended up doing this recently. I think I was feeling allergic to new that day.
private static final Integer INT_1 = new Integer(1);
private static final Integer INT_2 = new Integer(2);
public static final Integer MAGIC_COLUMN_A = INT_1;
public static final Integer MAGIC_COLUMN_B = INT_2;
public static final Integer MAGIC_THINGY_X = INT_2;
public static final Integer MAGIC_THINGY_Y = INT_1;
public static final Integer MAGIC_THIUGY_Z = INT_1;
etc.
• (cs)
Thank god for design patterns, though.
public class MyAwesomeProgram
{
public static void main(String[] args)
{
MagicNumberFactory mnf = MagicNumberFactory.getInstance();
int myMagicNumber = mnf.getMagicNumber(MagicNumberFactory.BIG);
System.out.println("check out my magic number: " + myMagicNumber);
}
}
class MagicNumberFactory
{
private static int count = 0;
public static final int BIG = 1;
public static final int COOL = 2;
public static final int SUPERMAGIC = 3;
public static MagicNumberFactory getInstance()
{
return new MagicNumberFactory();
}
public int getMagicNumber(int type)
{
++count;
int theMagicNumber = 0;
switch(type)
{
case BIG:
theMagicNumber = BIG * 14 * count;
break;
case COOL:
theMagicNumber = COOL * 85 * count;
break;
case SUPERMAGIC:
theMagicNumber = SUPERMAGIC * 132 * count;
break;
default:
theMagicNumber = count;
break;
}
return theMagicNumber;
}
}
• (cs) in reply to rogthefrog
ROFLMAO @ rog. If this place allowed karma awards, that'd get one.
• Davey (unregistered) in reply to rogthefrog
Wa?? @ Rog
Anyways, I've been wondering why people make getInstance() methods. In what situations is this a good idea?
• vhawk (unregistered)
Looking at:
// Name var val:String = name_txt.text; if (!(val.indexOf(" ")>0) || !(val.length>TWO)) { myMessage += "Please fill in your full Name\n"; validate = false; } // Address var val:String = address1_txt.text; if (!(val.length>TWO)) { myMessage += "Please fill in your full Address\n"; validate = false; }
Wonder if '...' is more useful as a FULL NAME or FULL ADDRESS than '.' or 'abc' or any other random 3-character cr*p.
• (cs) in reply to vhawk
BTW, on that last one:
"Obscure" is to be thought of as a verb, not an adjective.
Basically, this value is subtracted from the ASCII value in config files, so that the files are "obscured".
• Mirandir (unregistered) in reply to JamesCurran
JamesCurran:
Mirandir:
One funny thing I just noticed is that Josh Buhler's coworker only seems to master "not" and "larger than" operators:
er..um... It seems you haven't even gotten that far.....
if(!(val.indexOf(" ")>0) || !(val.length>TWO)) {
(val.indexOf(" ") == -1 || val.length < TWO)
Make that....
(val.indexOf(" ") == -1 || val.length <= TWO)
(I should have fixed the first half of that as well, but we know what JB's Coworker really meant...)
Haha What the hell was I thinking... ?? [:$] Can I join the "Have-made-a-big-wtf-club" now? [:P]
/Mirandir
• (cs) in reply to ammoQ
ammoQ:
On the ZX81 home computer, using the built-in BASIC, it was a common coding practice to do something similar; e.g.
10 LET I=1
20 LET O=0
because numeric literals required 5 extra bytes of memory for each occurrence (for speed reasons, the system stored the float value along with the readable representation) and memory was very limited, 1KB in the basic model. So using these "constants" saved 5 bytes each time; the declaration of both costs est. 24 bytes, so it pays off soon.
The other common trick was to use
10 LET I=PI/PI
instead of
10 LET I=1
This was because the ZX81 used a tokenised BASIC, with PI only taking one byte. Therefore, PI/PI actually took 3 bytes to declare the numerical value '1' as opposed to 5. Ahhh, those were the days!
• (cs) in reply to Davey
Anyways, I've been wondering why people make getInstance() methods. In what situations is this a good idea?
getInstance (or similar) is generally a symptom of the http://c2.com/cgi/wiki?SingletonPattern
The link provided explains in excruciating detail what the pattern is about, advantages, pitfalls, how it is commonly abused and why, etc etc. If you've not come across the concepts of singletons before, I would strongly suggest buying the 'gang of four' book and spending some considerable time browsing the c2 wiki. This is liable to reduce the WTF:LOC ratio of your work considerably[1].
Simon
[1] Unless it doesn't, of course
• (cs)
Alex Papadimoulis:
private const int INT_TWENTY_SEVEN = 27;
private const int INT_TWENTY_SEVEN = 28;
That's awesome.
• (cs) in reply to spotcatbug
This forum software, however, is not. :/
• (cs) in reply to tufty
Hey Tufty, thanks for that website. They have a number of interesting things there. [:)]
• (cs)
Somehow this reminds me of $TEXMF/tex/latex/base/latex.ltx:
\def\@vpt{5} \def\@vipt{6} \def\@viipt{7} \def\@viiipt{8} \def\@ixpt{9} \def\@xpt{10} \def\@xipt{10.95} \def\@xiipt{12} \def\@xivpt{14.4} \def\@xviipt{17.28} \def\@xxpt{20.74} \def\@xxvpt{24.88}
Seems like Roman eleven and twenty-four were half a percent smaller than ours, but to make up for this, seventeen was 1.6% larger, fourteen almost three percent and twenty as much as three point seven percent larger.
• (cs) in reply to divVerent
divVerent:
Somehow this reminds me of $TEXMF/tex/latex/base/latex.ltx:
\def\@vpt{5} \def\@vipt{6} \def\@viipt{7} \def\@viiipt{8} \def\@ixpt{9} \def\@xpt{10} \def\@xipt{10.95} \def\@xiipt{12} \def\@xivpt{14.4} \def\@xviipt{17.28} \def\@xxpt{20.74} \def\@xxvpt{24.88}
Seems like Roman eleven and twenty-four were half a percent smaller than ours, but to make up for this, seventeen was 1.6% larger, fourteen almost three percent and twenty as much as three point seven percent larger.
The only way this makes any sense whatsoever is if it was mixing postscript and classical print point sizes. Although that still doesn't explain the roman numerals.
I've been guilty of making one-time-use constants and placing them right before the function definition. >.>
• Brian (unregistered)
On the Power PC (and maybe on x86 too) constants may have bad data locality and take two instructions to load. Using gConsts.zero rather than 0.0 may end up taking half as many instructions if a bunch of constants are used in a function. This is because it only has to load the address of gConsts once then it can just use offsets to get the other values.
• (cs) in reply to tufty
simon:
>If you've not come across the concepts of singletons before, I would strongly suggest buying the 'gang of four' book and spending some considerable time browsing the c2 wiki. This is liable to reduce the WTF:LOC ratio of your work considerably[1].
Might I suggest that if you don't know what a singleton is that you get a book that you can actually read and follow through like "Refactoring to Patterns". The GoF book is really good, but it's a reference book, not a book for learning.
• (cs) in reply to loneprogrammer
loneprogrammer:
Constants aren't, variables don't.
I believe the original was "Constants aren't. Variables won't."
• Alexander Vollmer (unregistered)
Looks like artifacts from an original source written in COBOL some decades ago. Often to be found in programs for accounting and financial calcs.
Found some which made their way from COBOL through dBase to Access and so on. I have an idea of being a very old sick man in a hospital and there is a holographic avatar serving me ... and it looks like pacman. [*-)]
• (cs) in reply to curtisk
curtisk:
if (totalGlue > EIGHT) totalGlue = EIGHT;
// buried in a header file ...
#define EIGHT 16
LOL that made my day, I have nothing to say but WTF!
Don't laugh... one day it will happen to you! I have seen this in lots of code before. What happens is some idiot writes the initial system, then all of a sudden it stops working because totalGlue is allowed to be 9, and not 8. So what's the easiest change to make? Just bump it up to 16. Of course the dickhead who did this without renaming the constant should be shot. And they should have renamed it to "MAX_GLUE_AMOUNT" or something.
• Pax (unregistered) in reply to clockwise
This goes back a looong way. I believe the original K&R suggested using the line
#define PI 3.14159
should "the value of PI change in future".
• (cs)
private const int INT_ZERO = 0;
TODAY = today()
At least he's prepared if 0 would change to 100,01 in the very near future.
Same with the fact that soon today will be known as 'the day after tomorrow'.
Surely you've thought of that when designing your apps? [^o)]
divVerent:
\def\@vpt{5} \def\@vipt{6} \def\@viipt{7} \def\@viiipt{8} \def\@ixpt{9} \def\@xpt{10} \def\@xipt{10.95} \def\@xiipt{12} \def\@xivpt{14.4} \def\@xviipt{17.28} \def\@xxpt{20.74} \def\@xxvpt{24.88}
Seems like Roman eleven and twenty-four were half a percent smaller than ours, but to make up for this, seventeen was 1.6% larger, fourteen almost three percent and twenty as much as three point seven percent larger.
The only way this makes any sense whatsoever is if it was mixing postscript and classical print point sizes. Although that still doesn't explain the roman numerals.
I've been guilty of making one-time-use constants and placing them right before the function definition. >.>
Well, the Roman numerals are there because TeX does not allow digits in identifiers, so they're justified. And yes, it is a point size conversion - although it's still strange to define "20 pt" as "20.74" using a macro.
• szeryf (unregistered) in reply to Pax
Anonymous:
This goes back a looong way. I believe the original K&R suggested using the line
#define PI 3.14159
should "the value of PI change in future".
I'm pretty sure it wasn't K&R but some Xerox FORTRAN manual. It was used in many fortune files with this annotation.
• Olle (unregistered) in reply to JamesCurran
JamesCurran:
Mirandir:
One funny thing I just noticed is that Josh Buhler's coworker only seems to master "not" and "larger than" operators:
er..um... It seems you haven't even gotten that far.....
if(!(val.indexOf(" ")>0) || !(val.length>TWO)) {
(val.indexOf(" ") == -1 || val.length < TWO)
Make that....
(val.indexOf(" ") == -1 || val.length <= TWO)
(I should have fixed the first half of that as well, but we know what JB's Coworker really meant...)
er..um... It seems you haven't even gotten that far...
!(val.indexOf(" ")>0)
allows spaces in the first position (position 0) while
val.indexOf(" ") == -1
doesn't! And it is obvious that we should allow spaces in pos 0 ;-)
/Olle
• tekra (unregistered)
Is it just me, or did I find PAUSE_BY_6_SECONDS to be completely reasonable? I.e. if I use the literal "6000", is it obvious that is six seconds? Far too easy to accidentally type "600" or "60000" ...
tekra
• (cs) in reply to tekra
It should use a better name that doesn't include what the current literal amount is, like PAUSE_DURATION. If there are multiple pause durations (6 seconds, 12 seconds, 18 seconds for example), then you should append meaningful names like SHORT, MEDIUM, LONG.
All IMO.
• Hank Miller (unregistered) in reply to tekra
Anonymous:
Is it just me, or did I find PAUSE_BY_6_SECONDS to be completely reasonable? I.e. if I use the literal "6000", is it obvious that is six seconds? Far too easy to accidentally type "600" or "60000" ...
tekra
It is just you. It should be
PAUSE_FOR_LEGAL_MINIMUM_TIME
That way when congress changes the legal minimum to be 9 seconds you don't have to change the name of the constant. 6 seconds is the value today, but it might not work out for a faster machine in the future.
• Hank Miller (unregistered)
Of course, we could all be wrong in thinking that constants named EIGHT are bad. Sometimes, you don't really want EIGHT to be 8, as demonstrated by this code submitted anonymously ...
<font color="#000099">if</font> (totalGlue > EIGHT) totalGlue = EIGHT;// burried in a header file ...#<font color="#000099">define</font> EIGHT 16
He just wanted to check F and 11 on the hacker purity test. Not to mention 8C
http://www.armory.com/tests/hacker.html
That might be enough to get his score out of the single digits.
• (cs) in reply to wakeskate
wakeskate:
simon:
>If you've not come across the concepts of singletons before, I would strongly suggest buying the 'gang of four' book and spending some considerable time browsing the c2 wiki. This is liable to reduce the WTF:LOC ratio of your work considerably[1].
Might I suggest that if you don't know what a singleton is that you get a book that you can actually read and follow through like "Refactoring to Patterns". The GoF book is really good, but it's a reference book, not a book for learning.
"Refactoring to Patterns" is indeed a great book!
But I can also strongly suggest "Head First: Design Patterns"
An amazing book!
regards,
Jeroen Vandezande
https://meridian.allenpress.com/idd/article/40/2/142/8321/Medicaid-HCBS-Waivers-and-Supported-Employment-Pre
## Abstract
Findings from a national survey of state mental retardation/developmental disability agencies regarding use of the Medicaid Home and Community Based Waiver to fund supported employment were reported. Numbers of individuals and funding levels were requested for day habilitation services for FYs 1997 and 1999, before and after the Balanced Budget Act of 1997 (P.L. 105–33), which removed eligibility restrictions for this service. Findings show that growth rates for this service far exceeded growth rates for other day services, with high growth rates in a small number of states. However, supported employment accounted for less than 16% of those receiving day habilitation services through the Waiver and only 12% of day habilitation funding, with the remainder going to day support, prevocational services, and other segregated options.
Editor in charge: Steven J. Taylor
The Medicaid Home and Community-Based Services (HCBS) Waiver program is the largest long-term care program for persons with mental retardation and other developmental disabilities (Lakin, Prouty, Smith, & Braddock, 1995). This program allows states to use Medicaid funds to provide home and community-based care to Medicaid beneficiaries who otherwise would require institutional services in nursing facilities or Intermediate Care Facilities for Persons With Mental Retardation (ICFs/MR). Participants in the HCBS waiver program grew 320% between 1992 and 1999 (Prouty & Lakin, 2000). In 1995, the number of individuals served by HCBS Waivers surpassed those served in ICFs/MR—149,432 and 134,855, respectively (Smith, Prouty, & Lakin, 1996). In 1999, the number of individuals with developmental disabilities participating in HCBS Waiver programs reached 261,930 nationwide (Prouty & Lakin, 2000).
The HCBS Waiver program has had a profound impact on residential placement patterns, reversing an “institutional bias” that had plagued the ICF/MR system since its inception (Smith et al., 1996). Through the Waiver, many individuals with mental retardation and related conditions have been able to avoid out-of-home or out-of-community placement or return to their home communities with necessary support services (American Counseling Association, 1997; Gettings, 1991).
Certainly one reason for this tremendous growth is the cost-effectiveness of the HCBS Waiver program compared to ICF/MR services. According to Prouty and Lakin (2000), the average cost of care for individuals served in ICF/MR settings in 1999 was $78,448, compared to $33,324 for those served in HCBS Waiver-funded services.
Regulations of the Health Care Financing Administration (HCFA), the federal administering agency for the HCBS Waiver program, allow states to use these funds to reimburse service providers for extended habilitation services, including supported employment. Supported employment is an employment service alternative, traditionally funded by state vocational rehabilitation agencies, for individuals with the most severe disabilities who are in need of long-term support to maintain competitive employment in integrated settings. The supported employment program grew rapidly in early years, enabling thousands of individuals with significant disabilities to enter employment (Wehman & Kregel, 1995; Wehman, Revell, & Kregel, 1997).
In contrast to vocational rehabilitation-funded supported employment, the HCBS Waiver program has had far less impact on increasing employment opportunities for Medicaid recipients. Smith (1994) found that approximately 30% to 40% of HCBS Waiver participants were potentially eligible for supported employment services under Waiver guidelines, yet only about 3% actually received such services. Two questions arise: (a) Why has a program (the HCBS Waiver) that has so radically transformed residential services for individuals with disabilities had so little impact on employment services? and (b) Why has a program (supported employment) that has had such tremendous success in vocational rehabilitation systems had such dismal results in the Medicaid system?
West, Revell, Kregel, and Bricout (1999) addressed these questions through a national survey to which representatives of 48 states responded. That study revealed that 5,261 individuals were receiving HCBS Waiver-reimbursed supported employment services at the time of the survey, comprising approximately 2.5% of all HCBS Waiver participants. In addition, 11 states indicated that they had waiting lists for Waiver-reimbursed supported employment services, and 10 were able to provide the number of individuals on the waiting list. The total number of individuals known to be waiting for supported employment was nearly three times the number identified as currently receiving services. The unmet need for Waiver-funded supported employment is presumably even greater because even in states without formal waiting lists, there are individuals waiting for services (Lakin, 1998).
Federal law at the time of the survey allowed for use of HCBS Waiver funds for supported employment only for those recipients who had a history of prior institutionalization (i.e., persons who, before entering the Waiver program, had resided in an ICF/MR or a nursing facility), a restriction removed by the Balanced Budget Act of 1997, effective October 1, 1997. This restriction on service utilization was identified by respondents in the West et al. (1999) study as the greatest barrier to using the HCBS Waiver to provide supported employment to eligible participants.
To assess the impact of the Balanced Budget Act, the National Association of State Directors of Developmental Disability Services (NASDDDS) and the Rehabilitation Research and Training Center (RRTC) at Virginia Commonwealth University entered into a partnership to develop a baseline state-level survey concerning the status of supported employment and other vocational services in the HCBS Waiver program. In this paper we address portions of that survey identified as critical by both the NASDDDS and the respondents of the West et al. (1999) survey, namely, the extent to which the 1997 Balanced Budget Act has resulted in increased supported employment opportunities for Medicaid Waiver participants.
## Method
### Sample
The sample for the survey was the state mental retardation/developmental disabilities (MR/DD) directors from all 50 states and the District of Columbia, all of whom are members of the NASDDDS. A total of 41 surveys were completed, for an overall response rate of 80%. The only states not responding were Alaska, Arizona, Arkansas, DC, Illinois, Iowa, Louisiana, Minnesota, North Carolina, and Rhode Island. Detailed financial and consumer data were required for completion of the survey. However, states maintain unique and varying levels of information on their Waivers. Therefore, not all 41 states were able to complete every question.
### Instrument
The survey was designed as a follow-up to the previous Rehabilitation and Training Center study (West et al., 1999) in response to common concerns and questions from state MR/DD directors regarding the use of HCBS Waiver funding for supported employment. A written survey, developed for this study, was focused on statewide policy and utilization of the HCBS Waiver across service categories. The specific survey items that were analyzed for the purposes of this article included (a) numbers of individuals served in various day services during fiscal year (FY) 1997 and 1999 and (b) service expenditures across these same services during the same FYs.
### Procedure
#### Instrument development
The survey was designed by a long-time NASDDDS staff member with extensive experience in Waiver design and implementation, who was also a former vocational rehabilitation supported employment developer/policy analyst and a former state MR/DD agency director. The draft of the survey was then reviewed by the directors or key Waiver managers of 3 states (Georgia, Rhode Island, and Virginia) to assess the face validity of the items and the potential accuracy of the data elicited. The final version of the instrument included improvements suggested by these state agency representatives.
#### Data collection
The survey form and instructions were mailed to the state directors along with a letter from the NASDDDS executive director urging cooperation. The state director remained the key respondent for the completion of the survey; however, in most cases the researchers were directed to other staff members who provided the information requested. The NASDDDS also distributed the survey materials via their Waiver Manager Listserv to prepare Waiver staff members for the upcoming questions and additional work from their directors.
#### Data analysis
Data analysis for this section of the survey consisted of descriptive statistics, including total numbers of individuals, total dollars expended, and percentage increases across the reporting time period.
## Results
Totals for numbers of participants and state expenditures for FY 1997 and 1999 are presented in Tables 1 and 2, respectively. For both participants and expenditures, two sets of comparisons are presented. The first includes all individuals and dollars identified by the respondents. The second comparison is for adjusted totals, eliminating states that were not able to provide complete data for both FYs. We felt that elimination of states with missing data would give the most accurate indication of actual growth, in participants and dollars, of supported employment as a Waiver option.
Table 1. Total Number of Unduplicated Recipients of Waiver-Funded Habilitation Services
As shown in Table 1, there was an overall increase from FY 1997 to FY 1999 in state HCBS Waiver day habilitation program participants of 32.9%, 39.9% adjusted for missing data. The increase in supported employment participants in the same time period was 206.4%, 212.5% adjusted. In FY 1997, supported employment participants constituted 6.8% of identified HCBS Waiver-funded day habilitation participants, 6.9% adjusted for missing state data. In FY 1999, they constituted 15.7% of all day habilitation participants, 15.5% adjusted.
Table 2. Total Expenditures for HCBS Waiver-Funded Day Services
The mean percentage of supported employment participants, as a subset of all HCBS Waiver participants within a state, was 12.8%, with a median of 7.4%, indicating that the data were positively skewed. A closer look at the data showed that 6 states had over one fourth of their HCBS Waiver participants in supported employment. Those states included Connecticut (44.3%), Massachusetts (42.1%), Colorado (36.9%), New Mexico (30.4%), South Dakota (29.7%), and Oklahoma (29.5%). Of the 31 states that could provide raw numbers of supported employment participants for both FYs 1997 and 1999, 27 (87.1%) reported growth. The remaining 4 states showed declines in supported employment participation between FYs 1997 and 1999.
As with number of participants, substantial increases were found for total state expenditures (Table 2). Overall expenditures for HCBS Waiver-funded day habilitation services increased by 48.6% from FY 1997 to 1999, 31.0% adjusted. Expenditures for supported employment increased by 238.1%, adjusted to 282.3%. In FY 1997, expenditures for supported employment accounted for 4% of all day habilitation expenditures, using the adjusted totals. By FY 1999, the percentage of day habilitation funds expended for supported employment had increased to 12%. As with participants, the mean increase in state expenditures for supported employment for FY 1999 far exceeded median increase, indicating that the data are positively skewed, with a small number of states reporting exceptionally high increases in expenditures.
## Discussion
The findings of this survey indicate that use of supported employment as an HCBS Waiver service is growing, which should provide some optimism to service consumers, families, and policymakers. Some caution in interpreting these findings, however, is warranted. First, we noted that 20% of state MR/DD agencies did not complete the survey, and their experiences may not be consistent with those reported here. However, it is also notable that the respondent states encompassed approximately 82% of the nation's population, including 8 of the 10 most populous states. Thus, there is evidence that the survey's findings are representative of the national picture.
As a second precaution, the findings represent pre- and post-Balanced Budget Act utilization of supported employment through the HCBS Waiver. However, the study design cannot confirm a causal relationship between the two phenomena. Certainly, other factors contributed to differences between FYs 1997 and 1999. Still, the removal of the prior institutionalization restriction was a much-anticipated watershed event that was identified in the West et al. (1999) study as a critical need of the states. It is therefore very likely, though not certain, that the Balanced Budget Act was the primary, if not the sole, factor behind the differences between FYs 1997 and 1999.
As a final caveat, we note that West et al. (1999) found that a large number of participants were receiving community-based vocational services under the HCBS Waiver but not under the supported employment service category. For example, this service might be reimbursed as consultation. Thus, states were finding ways to provide community-based employment with support even when individuals might not have been eligible under pre-Balanced Budget Act regulations. That finding raises the possibility that at least some of the growth in the use of supported employment might be an artifact of states' and providers' ability to report their participants' activities more accurately rather than true growth in participation.
Nonetheless, the findings of the current study provide strong evidence that many states have taken advantage of the window of opportunity afforded by the Balanced Budget Act to increase utilization of supported employment through the HCBS Waiver. Although participants in all Waiver-funded day habilitation services grew at a rate of 16% to 20% annually, supported employment participation under the Waiver grew at a rate averaging over 100% annually. The majority of states (27, 87.1%) responding to these items reported increases in supported employment participation, with 6 states reporting that over one fourth of their HCBS Waiver participants were receiving the service in 1999. Similar increases were seen in expenditures. It would appear that the revised eligibility criteria under the Balanced Budget Act were a major factor in an expansion of supported employment participants and funding that far exceeded that of other day services.
These findings would seem to bode well for those individuals waiting for supported employment services, whether through the vocational rehabilitation system or the Waiver program. As noted by West et al. (1999), another restriction on using the Waiver to fund supported employment was that the service had to be unavailable through the state vocational rehabilitation agency, and this restriction was unaffected by the Balanced Budget Act. The Waiver provides an alternative avenue for providing the service to those with severe disabilities who want integrated, competitive employment but are waiting for necessary support services.
Reducing waiting lists and waiting times is a critical need in state MR/DD systems. Davis (1997) reported that 223,562 individuals were on waiting lists for residential, vocational, or supportive services (with some probable duplication across the lists). This total did not include individuals in 7 states that do not maintain waiting lists, nor did it include approximately 49,000 individuals who were then residing in institutions and who are typically not included in waiting list data. Lakin (1998) noted that growth in service need appears to be outpacing growth in service capacity. Without radical changes in service funding systems, current waiting lists will continue to grow and frustrate those who need services and their families. The post-Balanced Budget Act activities of the states regarding community-based employment appear to be a good start in the direction of needed changes in service funding.
Despite growth in participants and dollars, individuals in supported employment were a distinct minority (less than 16%) of those receiving day habilitation services through the Waiver. Moreover, only 12% of all known habilitation funding went toward supported employment services, with the remainder going to day support, prevocational services, and other segregated options. It is evident that even in exemplary states, the predominant mode of Waiver-funded day habilitation service delivery continues to be segregated work and non-work activities, and supported employment continues to be underutilized by Waiver-provider agencies and consumers.
This finding is in sharp contrast to employment services funded through the vocational rehabilitation system, where integrated employment has long been the goal for the overwhelming majority of consumers, including those with mental retardation. Gilmore and Butterworth (1997) examined vocational rehabilitation closure data from 1993 for 25,000 individuals with mental retardation and found that 87% were closed in competitive employment and only 3% in sheltered employment. Furthermore, the U.S. Department of Education recently issued final rules for the vocational rehabilitation program (Federal Register, 2001) that amended the regulatory definition of employment outcome under the vocational rehabilitation program to include only competitive outcomes that occur in integrated settings. This rule change essentially eliminates segregated employment as a successful closure option for any vocational rehabilitation consumer.
Only time will tell whether the momentum towards community-based competitive employment that we found in this study can be sustained or even possibly accelerated. What can policymakers do to expand use of Waiver-funded supported employment in the post-Balanced Budget Act era?
The prior institutionalization restriction was not the only reason for low utilization of HCBS Waiver-funded supported employment. West et al. (1999) reported that reimbursement rates to service provider agencies were generally very low in comparison to rates paid through the more traditional vocational rehabilitation route. In addition, they found that many states placed other restrictions on provider agencies, such as limiting the number of individuals who could be served, the number of service hours that any individual could receive, or the amount of total reimbursements. Thus, provider agencies had little financial incentive to use the HCBS Waiver to provide this service to a significant portion of their day habilitation programs. Addressing these financial disincentives is crucial to expanding supported employment opportunities at the local provider level. In addition, states that have shown the strongest use of the HCBS Waiver to provide integrated employment, such as Connecticut, Massachusetts, Colorado, New Mexico, South Dakota, and Oklahoma, warrant closer examination to determine which aspects of their program can be replicated in other states.
Finally, the U.S. Supreme Court's recent Olmstead decision, like the Balanced Budget Act, provides a window of opportunity for states to expand competitive employment opportunities for HCBS Waiver participants. In June 1999, the Supreme Court ruled in L.C. & E.W. vs. Olmstead that it is a violation of the Americans With Disabilities Act for states to discriminate against people with disabilities by providing services in institutions when the individual could be served more appropriately in a community-based setting. Further, the Court encouraged states to develop plans for placing qualified people in less restrictive settings at a reasonable pace. As of March 2001, 37 states have developed or are developing those plans (Fox-Grage, Folkemer, & Horahan, 2001). Although the Olmstead decision focused specifically on deinstitutionalization, most states are taking a comprehensive approach to better utilize Medicaid HCBS Waiver funds across service needs (Fox-Grage et al., 2001). Community-based employment can and should be included in the planning process as a cost-effective means of providing habilitation services for individuals served under the Waiver, both those exiting institutions and those currently living in the community.
NOTE: Preparation of this manuscript was made possible by Grant 133B980036 from the National Institute on Disability and Rehabilitation Research (NIDRR), U.S. Department of Education, and Cooperative Agreement (H128U7003) from the Rehabilitation Services Administration (RSA), Office of Special Education and Rehabilitative Services (OSERS), U.S. Department of Education. The opinions expressed here do not necessarily reflect those of the supporting entity, and no official endorsement should be inferred.
## References
Balanced Budget Act of 1997, 42 U.S.C. §743.
Davis, S. (1997). A status report to the nation on people with mental retardation waiting for community services. Arlington, TX: Department of Research and Program Services, The Arc.
Federal Register. (2001, January 22). State Vocational Rehabilitation Services Program; Final rule (pp. 7249–7258).
Fox-Grage, W., Folkemer, D., & Horahan, K. (2001). The states' response to the Olmstead decision: A status report. Washington, DC: National Conference of State Legislators.
Gettings, R. M. (1991). Utilizing Medicaid dollars to finance services to Illinois citizens with developmental disabilities. Alexandria, VA: National Association of State Mental Retardation Program Directors.
Gilmore, D. S., & Butterworth, J. (1997). Work status trends for people with mental retardation. Research to practice. (ERIC Document Reproduction Service No. ED410701)
Lakin, K. C. (1998). On the outside looking in: Attending to waiting lists in systems of services for people with developmental disabilities. Mental Retardation, 36, 157–162.
Lakin, K. C., Prouty, R., Smith, G., & Braddock, D. (1995). Trends and milestones: Places of residence of Medicaid HCBS recipients. Mental Retardation, 33, 406.
Prouty, R., & Lakin, K. C. (Eds.). (2000). Residential services for persons with developmental disabilities: Status and trends through 1999. Minneapolis: University of Minnesota, Research and Training Center on Community Living, Institute on Community Integration.
Smith, G. A. (1994, April 30). Supported employment and the HCBS Waiver program. Presentation at the Rehabilitation Research and Training Center on Supported Employment, Richmond, VA.
Smith, G., Prouty, R., & Lakin, K. C. (1996). Trends and milestones: The HCBS Waiver program: The fading of Medicaid's "institutional bias." Mental Retardation, 34, 262–263.
Wehman, P., & Kregel, J. (1995). At the crossroads: Supported employment a decade later. Journal of the Association for Persons with Severe Handicaps, 20, 286–299.
Wehman, P., Revell, G., & Kregel, J. (1997). Supported employment: A decade of rapid growth and impact. In P. Wehman, J. Kregel, & M. West (Eds.), Supported employment research: Expanding competitive employment opportunities for persons with significant disabilities (pp. 1–18). Richmond: Virginia Commonwealth University, Rehabilitation Research and Training Center.
West, M., Revell, G., Kregel, J., & Bricout, J. (1999). The Medicaid Home and Community-Based Waiver and supported employment. American Journal on Mental Retardation, 104, 78–87.
## Author notes
Michael West, PhD, Research Associate ( mwest@saturn.vcu.edu); Grant Revell, MS, Research Associate, John Kregel, EdD, Research Director, and Leanne Campbell, BS, Applications Analyst, Rehabilitation Research and Training Center, Virginia Commonwealth University, PO Box 842011, Richmond, VA 23284–2011; Janet W. Hill, MSEd, Assistant Professor, School of Psychiatry, Medical College of Virginia, Virginia Commonwealth University, PO Box 980710, Richmond, VA 23298–0710; Gary Smith, BA, Senior Project Director, Human Services Research Institute, 850 Lancaster Drive SE, Salem, OR 97301 (formerly of the National Association of State Directors of Developmental Disability Services)
https://www.physicsforums.com/threads/easy-question-about-fluid-statics.62969/
# Easy Question about Fluid Statics
#### secret2
I am a newbie to fluid mechanics, and I am confused about the "hydrostatic paradox". To begin with, consider three containers. All three have the same base area (all circular), but the angle between the base and the "wall" is different for each:
Container 1: obtuse angle
Container 2: right angle
Container 3: acute angle
And the textbook says that the FORCES on the base of the three containers are identical. Why should container 3 have the same as the rest?
#### ramollari
Yes, the forces will be the same, provided the additional condition that the heights of the fluids are equal. Then, since pressures and areas are equal at the bottom, the same forces will apply.
#### Doc Al
Mentor
secret2 said:
And the textbook says that the FORCES on the base of the three containers are identical. Why should container 3 have the same as the rest?
Unfortunately, the link that minger provided does not illustrate the 3rd case, which is the most interesting one. In cases 1 and 2, there is an unobstructed column of fluid directly above the base, so it's easy to imagine that the force exerted on the base equals the weight of that column. But in case 3, some of the column of water is obstructed by the sides of the container; it turns out that the sides of the container exert a downward force that just compensates for the truncated height of the column of fluid. The net result is that the force exerted on the base is equal in all three cases (as long as the height of the fluid is the same).
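In symbols (a standard hydrostatics relation, not something spelled out in the thread): with the same fluid height h and base area A in all three containers, the force on the base is

$$F_{\text{base}} = p\,A = \rho g h\,A$$

using gauge pressure, since the atmosphere pushes on all three bases equally. The wall angle only changes how much load the walls themselves carry, not the pressure at the bottom.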
#### secret2
Thanks a lot, Doc Al! That's exactly my concern! But why would a force be exerted by the wall anyway?
Just one more scenario. Imagine that we have the following device:
Code:
[FONT=System]
| | | | \ /
| | / \ \ /
| | / \ \ /
| | / \ \ /
| | /__ __\ \ /
| |_______| |________| |_______
|__________________________________|
[/font]
Does the column in the middle have enough pressure (or force) to balance the others so that all three keep up the same level? If so, is it because for the column in the middle, once again, a force is exerted by the wall?
Thank you!
edited by enigma to add [ code ] and [ font ]tags for clarity
Last edited by a moderator:
#### secret2
No good, the diagram doesn't come out right.....I'll try describing it in words. The middle one has a circular base area, all of which is connected to the base. And the wall of the middle column, of course, makes an acute angle with the base.
#### Doc Al
Mentor
secret2 said:
But why would a force be exerted by the wall anyway?
Because the fluid pushes against the wall and the wall pushes back.
Does the column in the middle have enough pressure (or force) to balance the others so that all three keep up the same level?
The fluid reaches the same height in all three columns.
If so, is it because for the column in the middle, once again, a force is exerted by the wall?
The wall will exert a downward force on the fluid.
#### pervect
Staff Emeritus
A couple of suggestions - if you wrap your ascii diagram in [ code ] tags, it won't get reformatted.
The way I'd approach pressure is to look at a small volume element of fluid. The net force on the box will be the gradient of the pressure. But this approach may be a bit advanced, it's not the simplest possible approach.
http://astron.berkeley.edu/~jrg/ay202/node6.html [Broken]
outlines this approach, having some nice diagrams.
The math requires vector calculus, but the link derives the fact that for any fluid in equilibrium in a conservative force field Phi (such that the force is the gradient of the potential - the Earth's gravitational field is an example of such a field, where the force is the gradient of -GM/R), surfaces of constant pressure, constant density, and constant potential Phi must all coincide.
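Sketching the key relation from that approach (sign conventions vary): for a fluid at rest with density rho in a potential Phi,

$$\nabla p = -\rho\,\nabla \Phi$$

so when the density is constant, surfaces of constant pressure coincide with surfaces of constant Phi, which is exactly the coincidence described above.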
Last edited by a moderator:
http://math.stackexchange.com/questions/337702/sum-of-integers-divisible-by-their-digits
# Sum of integers divisible by their digits
Determine the sum of : all two-digit positive integers that are divisible by each of their digits.
For example :
$12$ is divisible by $1$ and $2$.
-
ALL CAPS, abbreviations like "Thx", and commands like "Determine the sum of..." are not very polite. It's also good form to include your thoughts on the problem (what you've tried) and maybe why you want to know the answer. Finally, you mean "divisible by each of its digits", not "divisible to its each digit". – Alex Kruckman Mar 22 '13 at 7:59
Have you proceeded with the question? – hjpotter92 Mar 22 '13 at 7:59
Let such a number be $10x+y$. It is divisible by $x$ and $y$.
So $(10x+y)/x$ = $10+ y/x$ = $10 + m$ should be a natural number where $m=y/x$.
Similarly $(10x+y)/y$ = $10(x/y) + 1$ = $10/m + 1$ should be a natural number.
This is only possible for $m = 1, 2$ and $5$. So the possible numbers are
For $m=1$: No.s = $\{11,22,33,...99\}$
For $m=2$: No.s = $\{12,24,36,48\}$
For $m=5$: No.s = $\{15\}$.
Hence there are 14 such numbers
-
And their sum is 630? – user47805 Mar 22 '13 at 17:15
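A quick brute-force check (a hypothetical sketch, not part of the original thread) confirms both the count of 14 and the sum of 630:

```python
def divisible_by_its_digits(n: int) -> bool:
    digits = [int(d) for d in str(n)]
    # Every digit must be nonzero and must divide n exactly.
    return all(d != 0 and n % d == 0 for d in digits)

matches = [n for n in range(10, 100) if divisible_by_its_digits(n)]
print(len(matches), sum(matches))  # 14 630
```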
Hint: work them out and add them up
Check: I think there are 14 cases: nine of one type, four of another type, and one more which would give you an answer of the form $$a \sum_{i=1}^9 i + b \sum_{j=1}^4 j + c$$
-
http://www-micro.deis.unibo.it/cgi-bin/bibsearch.pl?term=CICC01:Ser&field=key&type=exact&header=~www%2FBibtex%2Fheader.html&footer=~www%2Ffooter.html&files=~www%2FBibtex%2F2001.bib&querytype=abstract
## Publication Database Query
Searched for "CICC01:Ser"
(Conference)
## A System-on-Chip for Pressure-Sensitive Fabric
M. Sergio and N. Manaresi and M. Tartagni and R. Canegallo and R. Guerrieri
### Abstract:
This paper presents a mixed-signal system-on-chip (SOC) for decoding the pressure exerted over a large piece of smart fabric. The image map of the pressure applied over the fabric surface is achieved by detecting the capacitance variation between rows and columns of conductive fibers patterned on the two opposite sides of an elastic layer, like synthetic foam. The SOC approach reduces design time while maintaining the flexibility to accommodate different sensor sizes and to perform some image enhancement, such as fixed-pattern-noise compensation and gamma correction. The chip has been designed in a $0.35\mu m$ 5ML CMOS process to work at 40MHz, 3.3V power supply, in a fully reconfigurable arrangement of 128 rows and columns. The core area is $32mm^2$.
https://www.acmicpc.net/problem/13881
Time limit: 2 seconds · Memory limit: 512 MB · Submissions: 2 · Accepted: 1 · Solvers: 1 · Acceptance ratio: 50.000%
## Problem
Given a positive integer, N, a permutation of order N is a one-to-one (and thus onto) function from the set of integers from 1 to N to itself. If p is such a function, we represent the function by a list of its values:
[ p(1) p(2) … p(N) ]
For example,
[5 6 2 4 7 1 3] represents the function from { 1 … 7 } to itself which takes 1 to 5, 2 to 6, … , 7 to 3.
For any permutation p, a descent of p is an integer k for which p(k) > p(k+1). For example, the permutation [5 6 2 4 7 1 3] has a descent at 2 (6 > 2) and 5 (7 > 1).
For permutation p, des(p) is the number of descents in p. For example, des([5 6 2 4 7 1 3]) = 2. The identity permutation is the only permutation with des(p) = 0. The reversing permutation with p(k) = N+1-k is the only permutation with des(p) = N-1.
The permutation descent count (PDC) for given order N and value v is the number of permutations p of order N with des(p) = v. For example:
• PDC(3, 0) = 1 { [ 1 2 3 ] }
• PDC(3, 1) = 4 { [ 1 3 2 ], [ 2 1 3 ], [ 2 3 1 ], [ 3 1 2 ] }
• PDC(3, 2) = 1 { [ 3 2 1 ] }
Write a program to compute the PDC for inputs N and v. To avoid having to deal with very large numbers, your answer (and your intermediate calculations) will be computed modulo 1001113.
## Input
The first line of input contains a single integer P, (1 ≤ P ≤ 1000), which is the number of data sets that follow. Each data set should be processed identically and independently.
Each data set consists of a single line of input. It contains the data set number, K, followed by the integer order, N (2 ≤ N ≤ 100), followed by an integer value, v (0 ≤ v ≤ N-1).
## Output
For each data set there is a single line of output. The single output line consists of the data set number, K, followed by a single space followed by the PDC of N and v modulo 1001113 as a decimal integer.
## Sample Input 1
4
1 3 1
2 5 2
3 8 3
4 99 50
## Sample Output 1
1 4
2 66
3 15619
4 325091
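The PDC(N, v) values are exactly the Eulerian numbers. As a sketch (not part of the problem statement), the standard recurrence $A(n,k) = (k+1)A(n-1,k) + (n-k)A(n-1,k-1)$, taken modulo 1001113, reproduces the sample outputs:

```python
MOD = 1001113

def pdc(n, v):
    """Number of permutations of order n with exactly v descents
    (the Eulerian number A(n, v)), reduced mod MOD."""
    row = [1]                      # row for n = 1: A(1, 0) = 1
    for m in range(2, n + 1):
        new = [0] * m
        for k in range(m):
            left = row[k] if k < m - 1 else 0      # A(m-1, k)
            right = row[k - 1] if k >= 1 else 0    # A(m-1, k-1)
            new[k] = ((k + 1) * left + (m - k) * right) % MOD
        row = new
    return row[v]

# Sample cases from the problem statement
assert pdc(3, 1) == 4
assert pdc(5, 2) == 66
assert pdc(8, 3) == 15619
```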
https://crypto.stackexchange.com/questions/52103/how-is-the-lai-massey-scheme-invertible/52144
# How is the Lai-Massey scheme invertible?
The Wikipedia page on the subject and other descriptions that I can find of it are as clear as mud. $L$ and $R$ are combined to make a number that is then added to both. That seems like a loss of information to me.
One Lai-Massey round can be described as \begin{align} L' &= \sigma(L \oplus F_k(L \oplus R)) \\ R' &= R \oplus F_k(L \oplus R)\,, \end{align} where $F_k$ is some round function—not necessarily invertible—and $\sigma(\cdot)$ is an orthomorphism, an arbitrary function such that both $\sigma(x)$ and $\sigma'(x) = \sigma(x) \oplus x$ are invertible.
To invert this, \begin{align} L &= \sigma^{-1}(L') \oplus F_k(\sigma^{-1}(L') \oplus R') \\ R &= R' \oplus F_k(\sigma^{-1}(L') \oplus R')\,. \end{align} We can see this works because \begin{align} \sigma^{-1}(L') &= L \oplus F_k(L \oplus R)\,, \end{align} and \begin{align} \sigma^{-1}(L') \oplus R' &= L \oplus F_k(L \oplus R) \oplus R \oplus F_k(L \oplus R) \\ &= L \oplus R\,. \end{align}
• Does $\sigma(\cdot)$ have to be applied to $L$? Can be applied to both $L$ and $R$? – Melab Oct 11 '17 at 0:49
• Only applied to $L$. – Samuel Neves Oct 11 '17 at 4:41
• But can it be applied to $R$, too? – Melab Oct 11 '17 at 11:19
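As a concrete check of the equations above, here is a small Python sketch of one XOR-based Lai-Massey round and its inverse on 16-bit halves. The round function `F` and the key are arbitrary toy choices (assumptions for illustration only), and the orthomorphism is the standard byte-pair map $(a,b)\mapsto(b, a\oplus b)$:

```python
MASK8 = 0xFF

def sigma(x):
    """A standard orthomorphism on 16-bit values: (a, b) -> (b, a ^ b).
    Both sigma(x) and sigma(x) ^ x are invertible."""
    a, b = x >> 8, x & MASK8
    return (b << 8) | (a ^ b)

def sigma_inv(y):
    b, c = y >> 8, y & MASK8          # c = a ^ b
    return ((b ^ c) << 8) | b

def F(k, x):
    """Toy, non-invertible round function (an assumption for illustration)."""
    return ((x * 0x9E37 + k) >> 3) & 0xFFFF

def round_forward(L, R, k):
    t = F(k, L ^ R)
    return sigma(L ^ t), R ^ t        # L', R'

def round_inverse(Lp, Rp, k):
    s = sigma_inv(Lp)                 # s = L ^ F_k(L ^ R)
    t = F(k, s ^ Rp)                  # s ^ R' = L ^ R
    return s ^ t, Rp ^ t

L, R, k = 0x1234, 0xBEEF, 0x0F0F
assert round_inverse(*round_forward(L, R, k), k) == (L, R)
```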
https://in.mathworks.com/help/control/ref/ss.lqi.html
# lqi
## Syntax
```[K,S,e] = lqi(SYS,Q,R,N) ```
## Description
`lqi` computes an optimal state-feedback control law for the tracking loop shown in the following figure.
For a plant `sys` with the state-space equations (or their discrete counterpart):
$$\begin{array}{l}\frac{dx}{dt}=Ax+Bu\\ y=Cx+Du\end{array}$$
the state-feedback control is of the form
$$u=-K\left[x;\,x_i\right]$$
where xi is the integrator output. This control law ensures that the output y tracks the reference command r. For MIMO systems, the number of integrators equals the dimension of the output y.
`[K,S,e] = lqi(SYS,Q,R,N)` calculates the optimal gain matrix `K`, given a state-space model `SYS` for the plant and weighting matrices `Q`, `R`, `N`. The control law u = –Kz = –K[x;xi] minimizes the following cost functions (for r = 0)
• $J\left(u\right)={\int }_{0}^{\infty }\left\{{z}^{T}Qz+{u}^{T}Ru+2{z}^{T}Nu\right\}dt$ for continuous time
• $J\left(u\right)=\sum _{n=0}^{\infty }\left\{{z}^{T}Qz+{u}^{T}Ru+2{z}^{T}Nu\right\}$ for discrete time
In discrete time, `lqi` computes the integrator output xi using the forward Euler formula
$$x_i[n+1]=x_i[n]+Ts\left(r[n]-y[n]\right)$$
where Ts is the sample time of `SYS`.
When you omit the matrix `N`, `N` is set to 0. `lqi` also returns the solution `S` of the associated algebraic Riccati equation and the closed-loop eigenvalues `e`.
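For readers without MATLAB, the continuous-time construction can be sketched in Python/SciPy: augment the plant with an output integrator and solve the LQR/Riccati problem for the augmented system. This is an illustration of the idea only, not the MathWorks implementation; the toy plant values below are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqi_continuous(A, B, C, D, Q, R):
    """Sketch of an LQI gain: augment the plant with an output integrator
    (z = [x; xi], dxi/dt = r - y, here with r = 0) and solve the LQR
    problem for the augmented system."""
    n, p = A.shape[0], C.shape[0]
    Aa = np.block([[A, np.zeros((n, p))],
                   [-C, np.zeros((p, p))]])
    Ba = np.vstack([B, -D])
    S = solve_continuous_are(Aa, Ba, Q, R)
    K = np.linalg.solve(R, Ba.T @ S)       # K = R^{-1} Ba' S  (N = 0)
    e = np.linalg.eigvals(Aa - Ba @ K)     # closed-loop eigenvalues
    return K, S, e

# Toy single-input, single-output, single-state plant (arbitrary values)
A, B, C, D = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]])
K, S, e = lqi_continuous(A, B, C, D, Q=np.eye(2), R=np.eye(1))
```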
## Limitations
For the following state-space system, in which the plant is augmented with an integrator:
$$\begin{array}{l}\frac{\delta z}{\delta t}={A}_{a}z+{B}_{a}u\\ y={C}_{a}z+{D}_{a}u\end{array}$$
The problem data must satisfy:
• The pair (Aa,Ba) is stabilizable.
• R > 0 and $Q-N{R}^{-1}{N}^{T}\ge 0$.
• $\left(Q-N{R}^{-1}{N}^{T},{A}_{a}-{B}_{a}{R}^{-1}{N}^{T}\right)$ has no unobservable mode on the imaginary axis (or unit circle in discrete time).
## Tips
`lqi` supports descriptor models with nonsingular E. The output `S` of `lqi` is the solution of the Riccati equation for the equivalent explicit state-space model
$$\frac{dx}{dt}=E^{-1}Ax+E^{-1}Bu$$
## References
[1] P. C. Young and J. C. Willems, “An approach to the linear multivariable servomechanism problem”, International Journal of Control, Volume 15, Issue 5, May 1972 , pages 961–979.
## Version History
Introduced in R2008b
https://proofwiki.org/wiki/Definition:Velocity/Units/Conversion_Factors
# Definition:Velocity/Units/Conversion Factors
$$1\ \mathrm{m\,s^{-1}}\ (\text{SI unit}) = 100\ \mathrm{cm\,s^{-1}}\ (\text{CGS unit}) = 3.281\ \mathrm{ft\,s^{-1}}\ (\text{FPS unit})$$
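A tiny Python helper (added here for convenience, using the exact definition 1 ft = 0.3048 m) reproduces these factors:

```python
M_PER_FT = 0.3048   # exact, by definition

def mps_to(v):
    """Convert a velocity in m/s to (cm/s, ft/s)."""
    return v * 100.0, v / M_PER_FT

print(mps_to(1.0))  # (100.0, 3.2808...)  ~ 3.281 ft/s
```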
https://www.quizover.com/macroeconomics/section/problems-price-elasticity-of-demand-and-price-elasticity-by-openstax
# 5.1 Price elasticity of demand and price elasticity of supply
This means that, along the demand curve between point B and A, if the price changes by 1%, the quantity demanded will change by 0.45%. A change in the price will result in a smaller percentage change in the quantity demanded. For example, a 10% increase in the price will result in only a 4.5% decrease in quantity demanded. A 10% decrease in the price will result in only a 4.5% increase in the quantity demanded. Price elasticities of demand are negative numbers indicating that the demand curve is downward sloping, but are read as absolute values. The following Work It Out feature will walk you through calculating the price elasticity of demand.
## Finding the price elasticity of demand
Calculate the price elasticity of demand using the data in [link] for an increase in price from G to H. Has the elasticity increased or decreased?
Step 1. We know that the price elasticity of demand is the percentage change in quantity demanded divided by the percentage change in price:
$$\text{Price Elasticity of Demand} = \frac{\%\ \text{change in quantity}}{\%\ \text{change in price}}$$
Step 2. From the Midpoint Formula we know that:
$$\%\ \text{change in quantity} = \frac{Q_2 - Q_1}{(Q_1 + Q_2)/2}\times 100
\qquad
\%\ \text{change in price} = \frac{P_2 - P_1}{(P_1 + P_2)/2}\times 100$$
Step 3. So we can substitute the price and quantity values for points G and H from the figure into each equation.
Step 4. Then, those percentage changes can be used to determine the price elasticity of demand.
Therefore, the elasticity of demand from G to H is 1.47. The magnitude of the elasticity has increased (in absolute value) as we moved up along the demand curve from the segment between points A and B to the segment between points G and H. Recall that the elasticity between points A and B was 0.45. Demand was inelastic between points A and B and elastic between points G and H. This shows us that the price elasticity of demand changes at different points along a straight-line demand curve.
## Calculating the price elasticity of supply
Assume that an apartment rents for $650 per month and that at that price 10,000 units are rented, as shown in [link]. When the price increases to $700 per month, 13,000 units are supplied into the market. By what percentage does apartment supply increase? What is the price elasticity of supply?
Using the Midpoint Method,
$$\%\ \text{change in quantity} = \frac{13{,}000 - 10{,}000}{(10{,}000 + 13{,}000)/2}\times 100 = \frac{3{,}000}{11{,}500}\times 100 = 26.1$$
$$\%\ \text{change in price} = \frac{\$700 - \$650}{(\$650 + \$700)/2}\times 100 = \frac{50}{675}\times 100 = 7.4$$
$$\text{Price Elasticity of Supply} = \frac{26.1\%}{7.4\%} = 3.53$$
Again, as with the elasticity of demand, the elasticity of supply has no units. Elasticity is a ratio of one percentage change to another percentage change (nothing more) and is read as an absolute value. In this case, a 1% rise in price causes an increase in quantity supplied of 3.5%. An elasticity of supply greater than one means that the percentage change in quantity supplied will be greater than the percentage change in price. If you're starting to wonder whether the concept of slope fits into this calculation, read the following Clear It Up box.
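A small Python helper (added here as a sketch; the numbers come from the apartment example above) makes the midpoint calculation explicit:

```python
def midpoint_elasticity(q1, q2, p1, p2):
    """Price elasticity via the Midpoint Method:
    (% change in quantity) / (% change in price), read as an absolute value."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return abs(pct_q / pct_p)

# Apartment-supply example: $650 -> $700, 10,000 -> 13,000 units
print(round(midpoint_elasticity(10_000, 13_000, 650, 700), 2))  # ~3.52
```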
## Is the elasticity the slope?
It is a common mistake to confuse the slope of either the supply or demand curve with its elasticity. The slope is the rate of change in units along the curve, or the rise/run (change in y over the change in x). For example, in [link], from one point to the next on the demand curve, price drops by $10 and the number of units demanded increases by 200. So the slope is -10/200 along the entire demand curve and does not change. The price elasticity, however, changes along the curve. Elasticity between points A and B was 0.45 and increased to 1.47 between points G and H. Elasticity is the percentage change, which is a different calculation from the slope and has a different meaning.

When we are at the upper end of a demand curve, where price is high and the quantity demanded is low, a small change in the quantity demanded, even of, say, one unit, is pretty big in percentage terms. A change in price of, say, a dollar, is going to be much less important in percentage terms than it would have been at the bottom of the demand curve. Likewise, at the bottom of the demand curve, a one-unit change when the quantity demanded is high will be small as a percentage. So, at one end of the demand curve, where we have a large percentage change in quantity demanded over a small percentage change in price, the elasticity value would be high, or demand would be relatively elastic. Even with the same change in the price and the same change in the quantity demanded, at the other end of the demand curve the quantity is much higher, and the price is much lower, so the percentage change in quantity demanded is smaller and the percentage change in price is much higher. That means at the bottom of the curve we'd have a small numerator over a large denominator, so the elasticity measure would be much lower, or inelastic. As we move along the demand curve, the values for quantity and price go up or down, depending on which way we are moving, so the percentages for, say, a $1 difference in price or a one-unit difference in quantity, will change as well, which means the ratios of those percentages will change.
## Key concepts and summary
Price elasticity measures the responsiveness of the quantity demanded or supplied of a good to a change in its price. It is computed as the percentage change in quantity demanded (or supplied) divided by the percentage change in price. Elasticity can be described as elastic (or very responsive), unit elastic, or inelastic (not very responsive). Elastic demand or supply curves indicate that quantity demanded or supplied respond to price changes in a greater than proportional manner. An inelastic demand or supply curve is one where a given percentage change in price will cause a smaller percentage change in quantity demanded or supplied. A unitary elasticity means that a given percentage change in price leads to an equal percentage change in quantity demanded or supplied.
## Problems
The equation for a demand curve is P = 48 – 3Q. What is the elasticity in moving from a quantity of 5 to a quantity of 6?
The equation for a demand curve is P = 2/Q. What is the elasticity of demand as price falls from 5 to 4? What is the elasticity of demand as the price falls from 9 to 8? Would you expect these answers to be the same?
The equation for a supply curve is 4P = Q. What is the elasticity of supply as price rises from 3 to 4? What is the elasticity of supply as the price rises from 7 to 8? Would you expect these answers to be the same?
The equation for a supply curve is P = 3Q – 8. What is the elasticity in moving from a price of 4 to a price of 7?
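As a usage example of the midpoint calculation above (a check, not an official solution), the first problem works out as follows:

```python
# Problem 1: demand curve P = 48 - 3Q, moving from Q = 5 to Q = 6
q1, q2 = 5, 6
p1, p2 = 48 - 3 * q1, 48 - 3 * q2           # P goes from 33 to 30
pct_q = (q2 - q1) / ((q1 + q2) / 2)
pct_p = (p2 - p1) / ((p1 + p2) / 2)
print(abs(pct_q / pct_p))                   # ~1.9, elastic over this range
```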
#### Questions & Answers
What is trade line
what is scars
What is land as labour
Siaw
Price and output determination in a monopoly?
Monopoly :its features, measures market power
Ruchi
Monopoly is market structure where he/she is d boss with no competition.Therefore he quote his own price for product as well for quantity he provide. Eg.Suppose desert area only one shop he/she selling 10ltr water bottle @25.But with same amt you could have bought 20ltr if it's perfect competition.
Tactful
Economics is a social sciences that have diverse application
what is economics?
Economic is the study of human behaviour in relation with the scare resources and it alternate use.
Tactful
Economics is a social sciences that have diverse application...
Francis
the branch of knowledge concerned with the production, consumption, and transfer of wealth.
charlon
choice and opportunity cost?
choice is the next best alternative
Taina
Choice is option available. Opportunity cost means giving up other to get The 1st one. eg. U r hungry u got 2option available on fridge A and B. You select A over B. so this is opportunity cost. B is the Opportunity Cost over A.
Tactful
can I get simple language and examples?
what is demand
what is fiscal policy
David
fiscal policy can be defined as the use of government's income and expenditure for a specific purpose
Nzenwata
ahhhh.. i dont what is expindetures
ian
It is the policy use by govt to influence economy ( manage inflation and deflation). Steps involved are govt spending and taxation.
Tactful
Demand is quantity of good a person is willing and ready to buy at given period of time at given price.
Tactful
good
abubakar
good but high language
Gajendra
lower the language @gajendra Singh
abubakar
What is natural monopoly
Ruchi
Natural monopoly is a market structure or system where the creation of goods and services, it's distribution and pricing mechanisms are undertaken by a sole firm resulting from demand, economies of scale and existing market survey other than legal constraints.
Gh
I don't really know anything about economics but am offering it at the University....
Oteng
Which uni if I may ask...?
Gh
can u help me guys to answer this question what are the different ways of defining money in your economy? Compare these with the monetary aggregates commonly used in another selected country. Explain their differences and the reasons for such differentiation.
aisyah
Why do we observe a wide variety of checking and savings accounts, rather than just one of each type? What are the reasons for the existence of financial intermediaries? Why do the ultimate lenders usually not lend directly to the ultimate borrowers?
aisyah
what is economic
David
The social science which study human behaviour and relationship between end and scare mean which have alternative uses.
Ruchi
@ruchi y have conpleted pg in economics or doing pg in econonics
Ashish
Explain economic growth with the use of ppf?
what z the meaning of ppf
rivan
do u mean ppc
Nzenwata
Yes pls ppc
Michael
an expansionary fiscal policy could be achieved by what
David
if the price of cigarettes ,food and alcohol rised by 10% in a year ,which is most likely to affect the cpi the most.
David
what measures would be suitable for reducing a recessionary gap.
David
increasing the level of government government expenditure is an instrument of what
David
A reductionin income tax rates would blank the blank of the multiplier
David
progressive taxes may slow down economic recovery .This is as a result of what
David
money acts as a safe guard against inflation something and something one of a function of money
David
A rise in expenditure for consumer goods something and something ,one of the cause of cost- push inflation
David
PPC IS production possibility curve. It show possible good which can be produced by an economy with given resource and technology.
Tactful
Recessionary gap can be solved by Monetary and Fiscal policy
Tactful
What are the positive effects on the economy to legalize drugs?
wat factor give raise to monopoly
a product which is unique /it has very less substitutes in the market. so this product has no much competition .... for example , railways
mikey
its a monopoly
mikey
does monology has factors or it has merits n demerits
rivan
monopoly or oligopoly is just a type of market in which demand and supply is measured to meet public interests
mikey
economy is all about psychological behaviour of humans to each other and to environment economists role is to keep everything in equilibrium
mikey
factors give rise to monopoly. 1. Patent right 2. Cartel 3. Govt policy. 4. Control over raw material. 5. money for investment
Tactful
oligopoly and monopoly are examples of imperfect market..
Nzenwata
First, second and third degree price determination under monopoly
Ruchi
please explain what is elasticity of supply
is the responsiveness of quantity supplied of commodity to changes in its own price
rivan
what is the cause of a country's population
please it seems your question is not clear ,is it the cause of increase or decrease population in a country or what
okai
what is producer surplus
is the excess earns btn wat a producer was willing to charge for e commodity and wat actually receives after selling it
rivan
OK good
Destiny
yeap
Bright
what is supply curve
are curve that do not obey the law of supply eg aren't +ve
rivan
half of 1%
Destiny
as in what do u mean by that
rivan
it simply shows the quantity of goods that a film is willing to supply at each price of a commodity
Destiny
OK what is the law of supply as u said
Destiny
It is the indifference curve that indicates the aggregate responsiveness of supply to the price of a commodity, and sometimes its demand of that same commodity.
Gh
nice
Destiny
pls explain how indifference curve connects to the aggregate responsiveness of supply to the price of a commodity
JOSHUA
law of supply according to me states that wen thea z higher price of commodity, the higher will be the supply and lower the supply will be for a commodity other factors remain constant
rivan
Joshua be clear to your QN plizzz
rivan
pls read Gh's comment and break down for me
JOSHUA
may be he can explain more because am am also not getting what he was meaning in that statement
rivan
plizzz GH explain to us
rivan
When demand and supply intersection
Pronoy
then it z called what
rivan
can I learning what is meaning off economics
Jimcaale
can you tall me what is meaning
Jimcaale
good
abubakar
Zahid
in oligopoly there is a competition between companies, becoz all those companies produce almost similar products in monopoly , product has no substitutes in the market for competition, so people have no choice to choose another similar product over this, becoz there is no similar product
mikey
oligopoly : mobile phone manufacturing companies have huge competition over one another
mikey
https://math.stackexchange.com/questions/1151855/are-symmetric-binary-matrices-necessarily-positive-semi-definite
# Are symmetric binary matrices necessarily positive semi-definite?
Let $A$ be a symmetric $n\times n$ matrix with entries only 0 or 1 and the diagonal entries of $A$ are all 1. Is A positive (semi-) definite?
$$\begin{pmatrix}1&0&1\\0&1&1\\1&1&1\end{pmatrix}$$
• The trace is $3$, the determinant is $-1$. – Git Gud Feb 17 '15 at 10:16
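Since the determinant is the product of the eigenvalues, a negative determinant forces at least one negative eigenvalue, so this matrix cannot be positive semi-definite and the answer is no. A quick NumPy check (added as a sketch) confirms the comment:

```python
import numpy as np

# The 3x3 symmetric 0/1 matrix with unit diagonal from the question
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])

print(np.trace(A), round(np.linalg.det(A)))   # 3 -1
print(np.linalg.eigvalsh(A))                  # smallest eigenvalue is negative
```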
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=pub&publisherID=822&journalID=6076&pageb=1&userQueryID=&sort=&local_page=&sorType=&sorCol=
Publisher: Springer-Verlag

## International Journal of Fracture
Journal Prestige (SJR): 0.916 · Citation Impact (CiteScore): 2 · Number of Followers: 14 · Hybrid journal (it can contain Open Access articles) · ISSN (Print) 1573-2673 · ISSN (Online) 0376-9429 · Published by Springer-Verlag
• Modeling of structural failure of Zircaloy claddings induced by multiple
hydride cracks
• Authors: Ruijie Liu; Ahmed Mostafa; Zhijun Liu
Abstract: Abstract Zirconium alloys have been serving as primary structural materials for nuclear fuel claddings. Structural failure analysis under extreme conditions is critical to the assessment of the performance and safety of nuclear fuel claddings. This work focuses on simulating structural failure of Zircaloy tubes with multiple hydride defects through modeling explicit crack propagation in ductile media. First, we developed an integrated cladding failure model by taking into account both crack initiation induced by hydride/matrix interface separation and ligament tearing-off between activated hydride cracks. Second, to accommodate the initiation, propagation, and coalescence of multiple cracks in finite plastic media we incorporated this structural failure model into a coupled continuous/discontinuous Galerkin (DG) based finite element code, a traditionally preferred implicit numerical framework. Third, to improve the adaptive placement of DG interface elements for crack propagation and to identify potential coalescence of cracks due to the interaction between adjacent hydride cracks, we defined a special failure index for the assessment of potential failure zones using both true plastic strain developed and predicted failure strain based on the Johnson–Cook material failure criterion. Finally, by calibrating the proposed material failure model using a cluster of Zircaloy material experimental tests, we successfully simulated a complete failure process of a fuel cladding tube with multiple hydride cracks.
PubDate: 2018-09-06
DOI: 10.1007/s10704-018-0312-9
• A diffusion-based approach for modelling crack tip behaviour under
fatigue-oxidation conditions
• Authors: R. J. Kashinga; L. G. Zhao; V. V. Silberschmidt; R. Jiang; P. A. S. Reed
Abstract: Abstract Modelling of crack tip behaviour was carried out for a nickel-based superalloy subjected to high temperature fatigue in a vacuum and air. In a vacuum, crack growth was entirely due to mechanical deformation and thus it was sufficient to use accumulated plastic strain as a criterion. To study the strong effect of oxidation in air, a diffusion-based approach was applied to investigate the full interaction between fatigue and oxygen penetration at a crack tip. Penetration of oxygen into the crack tip induced a local compressive stress due to dilatation effect. An increase in stress intensity factor range or dwell times imposed at peak loads resulted in enhanced accumulation of oxygen at the crack tip. A crack growth criterion based on accumulated levels of oxygen and plastic strain at the crack tip was subsequently developed to predict the crack growth rate under fatigue-oxidation conditions. The predicted crack-growth behaviour compared well with experimental results.
PubDate: 2018-09-06
DOI: 10.1007/s10704-018-0311-x
• On fatigue crack growth in plastically compressible hardening and
hardening–softening–hardening solids using crack-tip blunting
• Authors: Shushant Singh; Debashis Khan
Abstract: Abstract In the present study, mode I crack subjected to cyclic loading has been investigated for plastically compressible hardening and hardening–softening–hardening solids using the crack tip blunting model where we assume that the crack tip blunts during the maximum load and re-sharpening of the crack tip takes place under minimum load. Plane strain and small scale yielding conditions have been assumed for analysis. The influence of cyclic stress intensity factor range ( $$\Delta \hbox {K})$$ , load ratio (R), number of cycles (N), plastic compressibility ( $${\upalpha })$$ and material softening on near tip deformation, stress–strain fields were studied. The present numerical calculations show that the crack tip opening displacement (CTOD), convergence of the cyclic trajectories of CTOD to stable self-similar loops, plastic crack growth, plastic zone shape and size, contours of accumulated plastic strain and hydrostatic stress distribution near the crack tip depend significantly on $$\Delta \hbox {K}$$ , R, N, $${\upalpha }$$ and material softening. For both hardening and hardening–softening–hardening materials, yielding occurs during both loading and unloading phases, and resharpening of the crack tip during the unloading phase of the loading cycle is very significant. The similarities are revealed between computed near tip stress–strain variables and the experimental trends of the fatigue crack growth rate. There was no crack closure during unloading for any of the load cycles considered in the present study.
PubDate: 2018-09-01
DOI: 10.1007/s10704-018-0310-y
• Geomechanical optimization of hydraulic fracturing in unconventional
reservoirs: a semi-analytical approach
• Authors: Ali Taghichian; Hamid Hashemalhoseini; Musharraf Zaman; Zon-Yee Yang
Abstract: Abstract Unconventional drilling and completion architecture includes drilling multilateral horizontal wells in the direction of minimum horizontal stress and simultaneous multistage fracturing treatments perpendicular to the wellbore. This drilling and stimulation strategy is utilized in order to raise the connectivity of the reservoir to the wellbore, thereby remedying the low permeability problem, increasing reserve per well, enhancing well productivity, and improving project economics in this type of reservoir. However, in order to have the highest production with the least cost, an optimization technique should be used for the fracturing treatment. According to the fact that aperture, propagation direction, and propagation potential of hydraulic fractures are of paramount importance in optimization of the fracking treatment, in this research paper, these three major factors are studied in detail, the control variables on these three factors are examined, and the effect of each factor is quantified by proposing a complete set of equations. Using the proposed set of equations, one can make a good estimate about the fracture aperture (directly controlling the fracture conductivity), the stress shadow size (directly controlling the fracture path), and the change of stress intensity factor (directly controlling the fracture propagation potential). A geomechanical optimization procedure is then presented for toughness-dominated and viscosity-dominated regimes based on the proposed equations that can be used for estimation of different optimal fracturing patterns. The most efficient fracturing pattern can be determined afterward via considering the cumulative production using a reservoir simulator e.g. ECLIPSE, Schlumberger. This procedure is likely to offer an optimal simultaneous multistage hydraulic fracture treatment without deviation or collapse, with no fracture trapping, with the highest possible propagation potential in the hydrocarbon producing shale layer, and a predicted aperture for proppant type/size decision and conductivity of the fractures.
PubDate: 2018-08-24
DOI: 10.1007/s10704-018-0309-4
• Microstructural damage evolution and arrest in binary Fe–high-Mn alloys
with different deformation temperatures
• Authors: Motomichi Koyama; Takahiro Kaneko; Takahiro Sawaguchi; Kaneaki Tsuzaki
Abstract: Abstract We investigated the damage evolution behaviors of binary Fe–28–40Mn alloys (mass%) from 93 to 393 K by tensile testing. The underlying mechanisms of the microstructure-dependent damage evolution behavior were uncovered by damage quantification coupled with in situ strain mapping and post-mortem microstructure characterization. The damage growth behaviors could be classified into three types. In type I, the Fe–28Mn alloy at 93 K showed premature fracture associated with ductile damage initiation and subsequent quasi-cleavage damage growth associated with the $$\upvarepsilon$$ -martensitic transformation. In type II, the Fe–28Mn alloy at 293 K and the Fe–32Mn alloy at 93 K showed delayed damage growth but did not stop growing. In type III, when the stacking fault energy was $$>\,$$ 19 $$\hbox {mJ/m}^{2}$$ , the damage was strongly arrested until final ductile failure.
PubDate: 2018-08-24
DOI: 10.1007/s10704-018-0307-6
• Failure mechanisms and fracture energy of hybrid materials
• Authors: Najam Sheikh; Sivasambu Mahesh
Abstract: Abstract A shear-lag model of hybrid materials is developed. The model represents an alternating arrangement of two types of aligned linear elastic fibres, embedded in a linear elastic matrix. Fibre and matrix elements are taken to fail deterministically when the axial and shear stresses in them reach their respective strengths. An efficient solution procedure for determining the stress state for arbitrary configurations of broken fibre and matrix elements is developed. Starting with a single fibre break, this procedure is used to simulate progressive fibre and matrix failure, up to composite fracture. The effect of (1) the ratio of fibre stiffnesses, and (2) the ratio of the fibre tensile strength to matrix shear strength, on the composite failure mechanism, fracture energy, and failure strain is characterised. Experimental observations, reported in the literature, of the fracture behaviour of two hybrid materials, viz., hybrid unidirectional composites, and double network hydrogels, are discussed in the framework of the present model.
PubDate: 2018-08-16
DOI: 10.1007/s10704-018-0306-7
• Ductile-to-brittle transition in tensile failure due to shear-affected
zone with a stress-concentration source: a comparative study on
punched-plate tensile-failure characteristics of precipitation-hardened
and dual-phase steels
• Authors: Shigeru Hamada; Jiwang Zhang; Kejin Zhang; Motomichi Koyama; Toshihiro Tsuchiyama; Tatsuo Yokoi; Hiroshi Noguchi
Abstract: Abstract The effect of shear-affected zone (SAZ), with a stress-concentration source induced by the punching process, on tensile properties was investigated. Tests using honed specimens (which have the same shapes and stress-concentration without any SAZ) and smooth specimens were conducted to compare the effect with that of the punched specimens. Dual-phase steel, which has a high work-hardening ability and low yield strength, and precipitation-hardened steel, which has a low work-hardening ability and high yield strength, were used in the tests. Materials with two tensile strength grades were prepared from both types of steel. Only the precipitation-hardened steel with higher strength grade punched specimen showed a brittle fracture with extremely short fracture-elongation, whereas the other specimens showed a ductile fracture. The fracture surface analysis revealed that cracks initiated in the maximum shear stress plane of the SAZ under tensile loading at first. We call the crack “shear crack.” The steel which showed brittle fracture used in this study easily exhibited plastic-strain localization compared with the other steels. If the shear crack is sharp, then the transition from ductile to brittle failure tends to occur. Furthermore, the strength characteristics of the punched specimen depend on the crack length dependency of the strength resistance and the failure phenomenon of the original material.
PubDate: 2018-08-07
DOI: 10.1007/s10704-018-0304-9
• Numerical modeling of the nucleation of facets ahead of a primary crack
• Authors: Aurélien Doitrand; Dominique Leguillon
Abstract: Abstract A numerical study of crack front segmentation under mode I + III loading is proposed. Facets initiation ahead of a parent crack is predicted through a tridimensional application of the coupled criterion. Crack initiation shape, orientation and spacing are determined for any mode mixity ratio by coupling a stress and an energy criterion using matched asymptotic expansions. The stress and the energy conditions are computed through a 3D finite element modeling of a periodic network of facets ahead of the parent crack. The initiation shape, loading and spacing of facets depend on the blunted parent crack tip radius. A good estimate of facet orientations is obtained based on the direction of maximum tensile stress. The facet shapes, determined using the stress isocontours, are qualitatively similar to those observed experimentally. The order of magnitude of numerical predictions of facets spacing is very close to experimental measurements.
PubDate: 2018-08-03
DOI: 10.1007/s10704-018-0305-8
• The configurational-forces view of the nucleation and propagation of
fracture and healing in elastomers as a phase transition
• Authors: Aditya Kumar; K. Ravi-Chandar; Oscar Lopez-Pamies
Abstract: Abstract In a recent contribution, Kumar et al. (J Mech Phys Solids 112:523–551, 2018) have introduced a macroscopic theory aimed at describing, explaining, and predicting the nucleation and propagation of fracture and healing in elastomers undergoing arbitrarily large quasistatic deformations. The purpose of this paper is to present an alternative derivation of this theory—originally constructed as a generalization of the variational theory of brittle fracture of Francfort and Marigo (J Mech Phys Solids 46:1319–1342, 1998) to account for physical attributes innate to elastomers that have been recently unveiled by experiments at high spatio-temporal resolution—cast as a phase transition within the framework of configurational forces. A second objective of this paper is to deploy the theory to probe new experimental results on healing in silicone elastomers.
PubDate: 2018-07-31
DOI: 10.1007/s10704-018-0302-y
• Modeling of plasticity and fracture behavior of X65 steels: seam weld and
seamless pipes
• Authors: Marcelo Paredes; Junhe Lian; Tomasz Wierzbicki; Mihaela E. Cristea; Sebastian Münstermann; Philippe Darcis
Abstract: Abstract A non-associated/associated flow rule coupled with an anisotropic/isotropic quadratic yield function is presented to describe the mechanical responses of two distinct X65 pipeline steels. The first as a product of the cold-rolling forming (UOE) process also known as seam weld pipes and the second as a result of high temperature piercing process called seamless tube manufacturing. The experimental settings consist of a wide range of sample types, whose geometric characteristics represent different state of stresses and loading modes. For low to intermediate stress triaxiality levels, flat specimens are extracted at different material orientations along with notched round bar samples for high stress triaxialities. The results indicate that despite the existing differences in plasticity between materials due to anisotropy induced processes, material failure can be characterized by an isotropic weighting function based on the modified Mohr–Coulomb (MMC) criterion. The non-associated flow rule allows for inclusion of strain directional dependence in the definition of equivalent plastic strain by means of scalar anisotropy (Lankford) coefficients and thus keeping the original capabilities of the MMC model.
PubDate: 2018-07-27
DOI: 10.1007/s10704-018-0303-x
• Numerical assessment of temperature effects on concrete failure behavior
• Authors: Marianela Ripani; Sonia Vrech; Guillermo Etse
Abstract: Abstract This work focuses on the evaluation of temperature effects on concrete failure behavior and modes by means of a realistic thermodynamically consistent non-local poroplastic constitutive model, previously developed by the authors, which is modified in this work. In this regard, two original contributions are presented and discussed. Firstly, and based on significant published experimental results related to this very complex aspect such as the effects of temperature in concrete failure, a temperature dependent non-associated flow rule is introduced to the poroplastic constitutive model to more accurately account for the temperature dependent inelastic volumetric behavior of concrete in post-peak regime. This is crucial for improving overall model accuracy, particularly regarding the temperature effects on concrete released energy during degradation processes. Secondly, and more importantly, the explicit solution of the localization condition in terms of the critical hardening modulus is developed regarding the non-local poroplastic constitutive model reformulated in this work, which allows the analysis of localized failure modes in the form of discontinuous bifurcation of quasi-brittle porous materials like concrete under different temperature, hydraulic and stress state scenarios. Also numerical procedures are followed, which also allow the evaluation of temperature effects on the critical directions for localized failure or cracking which is performed in this work for a wide spectrum of stress states and temperatures. Both, undrained and drained hydraulic conditions are evaluated. The results in this work demonstrate the soundness of the proposed constitutive model modifications and of the derived explicit solution for the critical hardening modulus to accurately predict the temperature effects on both, concrete volumetric behavior, and on the failure modes and related critical cracking direction. They also demonstrate that concrete failure mode and critical localization directions are highly sensitive to temperature, particularly in the compressive regime.
PubDate: 2018-07-26
DOI: 10.1007/s10704-018-0301-z
• On crack tip stress fields in pseudoelastic shape memory alloys
• Authors: Gülcan Özerim; Günay Anlaş; Ziad Moumni
Abstract: Abstract In a domain of reasonable accuracy around the crack tip, asymptotic equations can provide closed form expressions that can be used in formulation of crack problem. In some studies on shape memory alloys (SMAs), although the pseudoelastic behavior results in a nonlinear stress–strain relation, stress distribution in the vicinity of the crack tip is evaluated using asymptotic equations of linear elastic fracture mechanics (LEFM). In pseudoelastic (SMAs), upon loading, stress increases around the crack tip and martensitic phase transformation occurs in early stages. In this paper, using the similarity in the loading paths of a pseudoelastic SMA and a strain hardening material, the stress–strain relation is represented by nonlinear Ramberg–Osgood equation which is originally proposed for strain hardening materials, and the stress distribution around the crack tip of a pseudoelastic SMA plate is reworked using the Hutchinson, Rice and Rosengren (HRR) solution, first studied by Hutchinson. The size of the transformation region around the crack tip is calculated in closed form using a thermodynamic force that governs the martensitic transformation together with the asymptotic equations of HRR. A UMAT is written to separately describe the thermo-mechanical behavior of pseudoelastic SMAs. The results of the present study are compared to the results of LEFM, computational results from ABAQUS, and experimental results for the case of an edge cracked NiTi plate. Both set of asymptotic equations are shown to have different dominant zones around the crack tip with HRR equations representing the martensitic transformation zone more accurately.
PubDate: 2018-07-20
DOI: 10.1007/s10704-018-0300-0
• Notch sensitivity of orthotropic solids: interaction of tensile and shear
damage zones
• Authors: Harika C. Tankasala; Vikram S. Deshpande; Norman A. Fleck
Abstract: Abstract The macroscopic tensile strength of a panel containing a centre-crack or a centre-hole is predicted, assuming the simultaneous activation of multiple cohesive zones. The panel is made from an orthotropic elastic solid, and the stress raiser has both a tensile cohesive zone ahead of its tip, and shear cohesive zones in an orthogonal direction in order to represent two simultaneous damage mechanisms. These cohesive zones allow for two modes of fracture: (i) crack extension by penetration, and (ii) splitting in an orthogonal direction. The sensitivity of macroscopic tensile strength and failure mode to the degree of orthotropy is explored. The role of notch acuity and notch size are assessed by comparing the response of the pre-crack to that of the circular hole. This study reveals the role of the relative strength and relative toughness of competing damage modes in dictating the macroscopic strength of a notched panel made from an orthotropic elastic solid. Universal failure mechanism maps are constructed for the pre-crack and hole for a wide range of material orthotropies. The maps are useful for predicting whether failure is by penetration or kinking. Case studies are developed to compare the predictions with observations taken from the literature for selected orthotropic solids. It is found that synergistic strengthening occurs: when failure is by crack penetration ahead of the stress raiser, the presence of shear plastic zones leads to an enhancement of macroscopic strength. In contrast, when failure is by crack kinking, the presence of a tensile plastic zone ahead of the stress raiser has only a mild effect upon the macroscopic strength.
PubDate: 2018-07-05
DOI: 10.1007/s10704-018-0296-5
• Strain-injection and crack-path field techniques for 3D crack-propagation
modelling in quasi-brittle materials
• Authors: I. F. Dias; J. Oliver; O. Lloberas-Valls
Abstract: Abstract This paper presents a finite element approach for modelling three-dimensional crack propagation in quasi-brittle materials, based on the strain injection and the crack-path field techniques. These numerical techniques were already tested and validated by static and dynamic simulations in 2D classical benchmarks [Dias et al., in: Monograph CIMNE No-134. International Center for Numerical Methods in Engineering, Barcelona, (2012); Oliver et al. in Comput Methods Appl Mech Eng 274:289–348, (2014); Lloberas-Valls et al. in Comput Methods Appl Mech Eng 308:499–534, (2016)] and, also, for modelling tensile crack propagation in real concrete structures, like concrete gravity dams [Dias et al. in Eng Fract Mech 154:288–310, (2016)]. The main advantages of the methodology are the low computational cost and the independence of the results on the size and orientation of the finite element mesh. These advantages were highlighted in previous works by the authors and motivate the present extension to 3D cases. The proposed methodology is implemented in the finite element framework using continuum constitutive models equipped with strain softening and consists, essentially, in injecting the elements candidate to capture the cracks with some goal oriented strain modes for improving the performance of the injected elements for simulating propagating displacement discontinuities. The goal-oriented strain modes are introduced by resorting to mixed formulations and to the Continuum Strong Discontinuity Approach (CSDA), while the crack position inside the finite elements is retrieved by resorting to the crack-path field technique. Representative numerical simulations in 3D benchmarks show that the advantages of the methodology already pointed out in 2D are kept in 3D scenarios.
PubDate: 2018-07-02
DOI: 10.1007/s10704-018-0293-8
• Fracture of pre-cracked thin metallic conductors due to electric current
induced electromagnetic force
• Authors: Deepak Sharma; B. Subba Reddy; Praveen Kumar
Abstract: Abstract We investigated propagation of a sharp crack in a thin metallic conductor with an edge crack due to electric current induced electromagnetic forces. Finite element method (FEM) simulations showed mode I crack opening in the edge-cracked conductor due to the aforementioned (i.e., self-induced) electromagnetic forces. Mode I stress intensity factor due to the self-induced electromagnetic forces, $$K_{\mathrm{IE},}$$ was evaluated numerically as $$K_{\mathrm{IE}}=\upmu l^{2}j^{2}(\uppi a)^{0.5}f(a/w)$$ , where $$\upmu$$ is the magnetic permeability, l is the length of the conductor, a is the crack length, j is the current density, w is the width of the sample and f(a / w) is a geometric factor. Effect of dynamic electric current loading on edge-cracked conductor, incorporating the effects of induced currents, was also studied numerically, and dynamic stress intensity factor, $$K_{\mathrm{IE,d}}$$ , was observed to vary as $$K_{\mathrm{IE,d}} \sim f_{d}(a/w)j^{2}(\uppi a)^{1.5}$$ . Consistent with the FEM simulation, experiments conducted using $$12\,\upmu \hbox {m}$$ thick Al foil with an edge crack showed propagation of sharp crack due to the self-induced electromagnetic forces at pulsed current densities of $$\ge$$ $$1.85\times 10^{9}\,\hbox {A/m}^{2}$$ for $$a/w = 0.5$$ . Further, effects of current density, pulse-width and ambient temperature on the fracture behavior of the Al foil were observed experimentally and corroborated with FEM simulations.
PubDate: 2018-07-02
DOI: 10.1007/s10704-018-0299-2
• Fracture of pre-cracked metallic conductors under combined electric
• Authors: Deepak Sharma; B. Subba Reddy; Praveen Kumar
PubDate: 2018-06-30
DOI: 10.1007/s10704-018-0298-3
• Quantification and microstructural origin of the anisotropic nature of the
sensitivity to brittle cleavage fracture propagation for hot-rolled
pipeline steels
• Authors: F. Tankoua; J. Crépin; P. Thibaux; S. Cooreman; A.-F. Gourgues-Lorenzon
Abstract: Abstract This work proposes a quantitative relationship between the resistance of hot-rolled steels to brittle cleavage fracture and typical microstructural features, such as microtexture. More specifically, two hot-rolled ferritic pipeline steels were studied using impact toughness and specific quasistatic tensile tests. In drop weight tear tests, both steels exhibited brittle out-of-plane fracture by delamination and by so-called “abnormal” slant fracture, here denoted as “brittle tilted fracture” (BTF). Their sensitivity to cleavage cracking was thoroughly determined in the fully brittle temperature range using round notched bars, according to the local approach to fracture, taking anisotropic plastic flow into account. Despite limited anisotropy in global texture and grain morphology, a strong anisotropy in critical cleavage fracture stress was evidenced for the two steels, and related through a Griffith-inspired approach to the size distribution of clusters of unfavorably oriented ferrite grains (so-called “potential cleavage facets”). It was quantitatively demonstrated that the occurrence of BTF, as well as the sensitivity to delamination by cleavage fracture, is primarily related to an intrinsically high sensitivity of the corresponding planes to cleavage crack propagation across potential cleavage facets.
PubDate: 2018-06-29
DOI: 10.1007/s10704-018-0297-4
• Applicability of hierarchical fiber bundle materials to mechanical
environments
• Authors: X. L. Ji; L. X. Li
Abstract: Abstract Biomaterials use a hierarchical structure to optimize their self-healing behavior, for instance. However, the behavior may be constrained under different mechanical environments. In this paper, a system is suggested that the mechanical environment is modeled as a spring connected in series with the fiber bundle material. For the spring, the elastic behavior with stiffness is obeyed while, for the fiber bundle material, the nonlinear elastic constitutive relation is obeyed according to the Weibull distribution and the Daniels’ theory. Relying on the principle of total potential, the applicability condition is proposed for the system and the critical stiffness is thus derived for the spring. The applicability of hierarchical fiber bundle materials is finally investigated. The results show that the hierarchy can significantly change the critical stiffness, and hence demonstrates quite different applicability to a given mechanical environment.
PubDate: 2018-06-12
DOI: 10.1007/s10704-018-0290-y
• An efficient numerical method for quasi-static crack propagation in
heterogeneous media
• Authors: A. Markov; S. Kanaun
Abstract: Abstract The paper is devoted to the problem of slow crack growth in heterogeneous media. The crack is subjected to arbitrary pressure distribution on the crack surface. The problem relates to construction of the so-called equilibrium crack. For such a crack, stress intensity factors are equal to the material fracture toughness at each point of the crack contour. The crack shape and size depend on spatial distributions of the elastic properties and fracture toughness of the medium, and the type of loading. In the paper, attention is focused on the case of layered elastic media when a planar crack propagates orthogonally to the layers. The problem is reduced to a system of surface integral equations for the crack opening vector and volume integral equations for stresses in the medium. For discretization of these equations, a regular node grid and Gaussian approximating functions are used. For iterative solution of the discretized equations, fast Fourier transform technique is employed. An iteration process is proposed for the construction of the crack shape in the process of crack growth. Examples of crack evolution for various properties of medium and types of loading are presented.
PubDate: 2018-05-11
DOI: 10.1007/s10704-018-0284-9
• 3D model of transversal fracture propagation from a cavity caused by Herschel–Bulkley fluid injection
• Authors: Sergey Cherny; Vasily Lapin; Dmitriy Kuranakov; Olga Alekseenko
Abstract: The paper presents an extension of the authors’ previous model for a 3D hydraulic fracture with Newtonian fluid, which aims to account for the Herschel–Bulkley fluid rheology and to study the associated effects. This fluid rheology model is the most suitable for describing modern complex fracturing fluids, in particular foamed fluids that have recently been utilized successfully as fracturing fluids in tight and ultra-tight unconventional formations with high clay contents. Another advantage of using the Herschel–Bulkley rheological law in the hydraulic fracture model is its generality: its particular cases describe the behavior of the majority of non-Newtonian fluids employed in hydraulic fracturing. Besides the Herschel–Bulkley fluid flow model, the considered model of hydraulic fracturing includes a model of the rock stress state. It is based on the elastic equilibrium equations, which are solved by the dual boundary element method. The hydraulic fracturing model also contains a new mixed-mode propagation criterion, which states that the fracture should propagate in the direction in which the mode II and mode III stress intensity factors both vanish. Since it is not possible to make both modes zero simultaneously, the criterion proposes a functional that depends on both modes and is minimized along the fracture front in order to obtain the direction of propagation. The solution for Herschel–Bulkley fluid flow in a channel is presented in detail, and the numerical algorithm is described. The developed model has been verified against several reference solutions, and the sensitivity of the fracture geometry to the rheological fluid parameters has been studied.
PubDate: 2018-05-10
DOI: 10.1007/s10704-018-0289-4
https://mathoverflow.net/questions/304156/another-funny-kind-of-ramsey-number
# Another funny kind of Ramsey number
Definition. $h(n_1,n_2)$ is the least number $m$ such that, if the edges of $K_m$ are colored with two colors, $1$ and $2,$ then for some color $i\in\{1,2\}$ there is a set $W\subseteq V(K_m)$ such that $|W|=n_i$ and every triangle in $W$ has an odd number of edges of color $i;$ in other words, for some $i\in\{1,2\},$ the graph consisting of the edges of color $i$ has an induced subgraph $H$ of order $n_i$ such that $H$ has at most two components, and each component of $H$ is a clique.
Question 1. Is there any literature on $h(n_1,n_2)$ ?
Question 2. Is $h(4,5)=8$ ?
Here are some easy bounds for $h(n_1,n_2)$ in terms of ordinary Ramsey numbers.
Definition. The Ramsey number $R(n_1,n_2;d)$ is the least number $m$ such that, given an $m$-element set $V$ and any set $S\subseteq\binom Vd,$ we can find a set $W\subseteq V$ such that either $|W|=n_1$ and $\binom Wd\cap S=\emptyset,$ or else $|W|=n_2$ and $\binom Wd\subseteq S.$
Definition. As in my previous question A funny kind of Ramsey number, $f(n)$ is the least number $m$ such that, given an $m$-element set $V$ and any set $S\subseteq\binom V3,$ we can find an $n$-element set $X\subseteq V$ such that, for each $4$-element set $Y\subseteq X,$ we have $|\binom Y3\cap S|\equiv0\pmod2.$
Upper bound: $h(m+1,n+1)\le R(m,n;2)+1.$
Lower bound: $f(h(m,n))\ge R(m,n;3).$
Regarding $h(4,5),$ I know that $$8\le h(4,5)\le10.$$ On the one hand, $h(4,5)\le R(3,4;2)+1=10.$ On the other hand, to see that $h(4,5)\gt7,$ take a Hamiltonian cycle $C$ in $K_7$ and color the edges of $C$ with color $2$ and the rest with color $1.$ (This is the simplest of a whole bunch of $7$-point counterexamples.) I have not found a counterexample to the conjecture that $h(4,5)=8.$
The statement $h(4,5)=8$ is true. It can be checked by enumerating all 8-vertex graphs (a total of 12346) and checking them one by one.
For those who want to double-check, there are 48 graphs in which there is exactly one instance of $W$, and 43 in which there are two.
https://hal.archives-ouvertes.fr/hal-02458760
# ANTARES and IceCube Combined Search for Neutrino Point-like and Extended Sources in the Southern Sky
Abstract : A search for point-like and extended sources of cosmic neutrinos using data collected by the ANTARES and IceCube neutrino telescopes is presented. The data set consists of all the track-like and shower-like events pointing in the direction of the Southern Sky included in the nine-year ANTARES point-source analysis, combined with the through-going track-like events used in the seven-year IceCube point-source search. The advantageous field of view of ANTARES and the large size of IceCube are exploited to improve the sensitivity in the Southern Sky by a factor $\sim$2 compared to both individual analyses. In this work, the Southern Sky is scanned for possible excesses of spatial clustering, and the positions of preselected candidate sources are investigated. In addition, special focus is given to the region around the Galactic Centre, whereby a dedicated search at the location of SgrA* is performed, and to the location of the supernova remnant RXJ 1713.7-3946. No significant evidence for cosmic neutrino sources is found and upper limits on the flux from the various searches are presented.
### Citation
A. Albert, M. André, M. Anghinolfi, M. Ardid, J.-J. Aubert, et al.. ANTARES and IceCube Combined Search for Neutrino Point-like and Extended Sources in the Southern Sky. 2020. ⟨hal-02458760⟩
https://www.nature.com/articles/s41598-019-47568-9?error=cookies_not_supported&code=d180f245-ac2f-4470-a353-13dabc0634fb
# In silico design and optimization of selective membranolytic anticancer peptides
## Abstract
Membranolytic anticancer peptides represent a potential strategy in the fight against cancer. However, our understanding of the underlying structure-activity relationships and the mechanisms driving their cell selectivity is still limited. We developed a computational approach as a step towards the rational design of potent and selective anticancer peptides. This machine learning model distinguishes between peptides with and without anticancer activity. This classifier was experimentally validated by synthesizing and testing a selection of 12 computationally generated peptides. In total, 83% of these predictions were correct. We then utilized an evolutionary molecular design algorithm to improve the peptide selectivity for cancer cells. This simulated molecular evolution process led to a five-fold selectivity increase with regard to human dermal microvascular endothelial cells and more than ten-fold improvement towards human erythrocytes. The results of the present study advocate for the applicability of machine learning models and evolutionary algorithms to design and optimize novel synthetic anticancer peptides with reduced hemolytic liability and increased cell-type selectivity.
## Introduction
Cancer therapy faces the challenge of resistance to chemotherapeutics and receptor-targeted anticancer agents. Several cell resistance mechanisms, such as drug inactivation or efflux, target protein alteration, DNA damage repair and signaling cascade alteration have been identified1,2. Moreover, the indiscriminate action of most chemotherapeutics towards all rapidly dividing cells causes a variety of severe side effects3,4. Membranolytic anticancer peptides (ACPs) represent a new class of potential cancer therapeutics. Their receptor-independent mechanism of action may hinder the development of cellular resistance3,4,5. Nevertheless, the underlying structure-activity relationship that explains the membranolytic properties of these peptides is not completely understood. Peptide amphipathicity, moderate overall hydrophobicity, and a positive net charge are known requirements for ACP activity6,7,8,9. However, no simple combination of these properties has been found sufficient to fully explain the activity and selectivity of ACPs towards cancer cells10. Producing novel peptides lacking toxicity against nonneoplastic cells also remains challenging11. Various machine learning methods have been successfully applied to guide the rational design of both ACPs12,13,14,15,16,17,18 and antimicrobial peptides (AMPs)19,20, as well as other membrane-active peptides21. The lack of a systematic annotation of the selectivity of ACPs towards cancer cells in the literature and in peptide databases has hindered the development of predictive models that take selectivity into account. There is a need for innovative methods that do not require selectivity data for peptide optimization.
Simulated molecular evolution (SME) is a stochastic optimization algorithm pioneered in the 1990s for computational peptide design22,23,24. SME belongs to the class of evolutionary algorithms, which also includes genetic algorithms, and enables the optimization of peptide properties that are encoded in a theoretical fitness function or in combination with an experimental fitness evaluation when structure-activity relationships cannot be determined a priori. We have recently applied this design concept to generate innovative membrane-targeting peptides25,26. Here, we present a peptide design approach that is based on a novel ACP prediction model and on SME for the optimization of ACP selectivity for cancer cells. The predictive machine learning model led to the discovery of four novel synthetic ACPs with low-micromolar activity (1–20 µM) against A549 lung cancer and MCF7 breast cancer cells. One of these peptides was then subjected to SME. After the first iteration of the optimization process, we obtained a novel ACP that showed micromolar activities against a range of cancer cell types with significantly reduced activity towards human dermal microvascular cells (HDMEC) and human erythrocytes. The results of this study advocate for machine-learning models in combination with computational sequence generators for designing and optimizing functional peptides in silico.
## Results and Discussion
### ACP classifier model
We developed a machine learning model to classify peptides into ACPs and non-ACPs based on their amino acid sequence representations. The machine-learning classifier was trained on “positive” (ACPs, active) and “negative” (non-ACPs, inactive) peptides. We retrieved alpha-helical ACPs from the CancerPPD database27 as positive examples (N = 339). For the negative class, we retrieved alpha-helices from nontransmembrane proteins in the PDB database28 (N = 680). All amino acid sequences were represented numerically in a computer-readable form by the use of molecular descriptors. For this purpose, we utilized a combination of PEPCATS pharmacophore feature descriptors29 and four global properties, namely, Eisenberg’s hydrophobicity, Eisenberg’s hydrophobic moment30, charge density, and peptide length (number of residues). The PEPCATS descriptor represents the amino acid sequences as binary vectors indicating cross-correlated pharmacophore features of the individual amino acids (hydrophobic, aromatic, hydrogen-bond acceptor, hydrogen-bond donor, positively ionizable, negatively ionizable). The cross-correlation of pharmacophoric feature pairs is determined within a sliding sequence window encompassing seven residues. The resulting 151-dimensional descriptor vector was reduced to an 18-dimensional feature vector by covariance elimination and sequential feature elimination (Fig. S1, Supplementary Information). The dataset was split into a training set (2/3) and an independent test set (1/3) by stratified sampling, preserving the proportion between the positive and negative classes. Two machine learning algorithms were considered for model development: random forests31 and support vector machines (SVM)32. We optimized the SVM model’s hyperparameter by 10-fold cross-validation on the training data and chose a linear kernel for SVM training to enable straightforward feature interpretation. The performance of both classifiers exceeded 0.9 for both the training and the test data for all calculated metrics (Table 1). The SVM model was selected for further analysis due to the robustness of its decision function, which is determined solely by the support vectors and therefore unaltered by the addition of new data points that lie outside the decision margin32. Additionally, an analytical decision function as a linear combination of the model features can be extracted from linear support vector machines, whose weights indicate feature importance for the classification problem32.
We then compared the performance on the test dataset for our SVM model to online available ACP prediction tools, specifically the AntiCP models 1 and 213, the iACP model33, and the MLACP model18. These ACP prediction models are also based on an SVM classifier but utilize different descriptors and training data (Table S1, Supplementary Information). The prediction performance of the four classifiers and our SVM model was assessed on the independent test dataset (Table 2). In this experiment, the performance of our SVM model on the independent test set was superior to all four publicly available ACP prediction models in terms of all performance metrics, except for precision. The MLACP model showed higher precision but lower Matthews correlation coefficient (MCC), accuracy and recall than the other models. Therefore, the MLACP model is better at avoiding false positives but retrieves a higher number of false negatives compared to the SVM model developed in this study.
### Feature importance for ACP activity
We analyzed the feature weights of the SVM classifier to gain an understanding of important discriminatory features for distinguishing between ACPs and non-ACPs (Table 3, Fig. S2, Supplementary Information). Features were ranked by their absolute weight values as a measure of their relative importance for ACP classification. The global hydrophobicity (H), hydrophobic moment (µH) and the frequency of positively charged amino acid pairs separated by one residue (PPd2) were identified as important features of the classifier (weight values w = 1.65, w = 0.5 and w = 0.39, respectively). This finding is in accordance with previous reports on ACPs that highlight the relevance of the hydrophobicity, the hydrophobic moment and a net positive charge for anticancer activity7,34. The peptide length was also identified as a discriminatory feature (w = 0.4), indicating that longer peptides were considered more likely to be active. Two features that take into account the frequency of amino acids with hydrogen-bond donor and acceptor groups (ADd0, DDd0) were identified as bearing the greatest absolute weights (w = −1.94 and w = 1.67, respectively), emphasizing their role in distinguishing ACPs from inactive peptides (Table 3).
### De novo design of ACPs
To make use of the SVM model for the in silico design of novel ACPs, we generated three virtual peptide libraries of 100,000 peptides each, based on different design principles (Fig. S3, Supplementary Information):
1. The Helical library contains peptides with the position-dependent amino acid distribution of alpha-helical ACPs11.
2. The Amphipathic Arc library contains amphipathic peptides with differently sized hydrophobic arcs and a high probability of being cationic.
3. The Gradient library contains amphipathic peptides that possess a linear hydrophobic gradient.
We predicted the activity of the peptides from each library with our SVM model (Fig. S4, Supplementary Information). More than 80% of the peptides from the Amphipathic Arc and Gradient libraries and more than 60% of the peptides from the Helical library received an SVM score >0.5, indicating potential actives. In contrast, only 10% of peptides with random sequences were predicted to be active. The design principles, therefore, enriched the libraries with potentially active peptides in contrast with random peptide generation.
The similarity of the peptides in the training data was analyzed to consider the applicability domain of the SVM model35; this domain is the chemical space in which the model predictions may be considered reliable. The SVM model was utilized to estimate the pseudo-probabilities (i.e., the probabilities predicted by the model) of the peptides to belong to the active and inactive classes. These scores were subsequently weighted by the similarity to the training data to obtain similarity-weighted scores that consider the model’s applicability domain (ϕACP, ϕNeg, Eqs 5 and 6).
From each peptide library, we selected the two peptides with the highest ϕACP and ϕNeg scores. None of the peptides were found in the training data or the CancerPPD database. No peptides were retrieved from the CancerPPD database with >95% similarity to the selected ones, as determined by the CD-HIT program36. We finally synthesized the 12 peptides and determined their half-effective concentration (EC50) values against the MCF7 and A549 cancer cell lines. For 10 of the 12 synthesized peptides, the predictions were correct (Table 4). None of the peptides predicted to be inactive killed more than 50% of the cancer cells at a concentration of 50 µM. Of the six peptides predicted to be active, two were determined to be false positives (inactive at 50 µM) (Figs S10 and S11, Supplementary Information). Of the four correctly predicted active peptides, three were active in a low-micromolar range against both of the tested cancer cell lines, and the fourth (Gradient2) showed activity solely against MCF7 cells (Table 4).
The AmphiArc2 peptide, the shortest peptide of the low micromolar active peptides, has a high hydrophobic moment (µH = 0.87) and a 180° arc of hydrophobic residues in an idealized helical structure (Fig. 1a). As determined by circular dichroism (CD) spectroscopy, the AmphiArc2 peptide is unstructured in pure water but adopts an alpha-helical structure in a hydrophobic environment (in 50% v/v water:2,2-trifluoroethanol, TFE) (Fig. 1b). Helix formation in a hydrophobic, membrane-like environment has been shown to be a characteristic of certain alpha-helical AMPs and ACPs37,38. To further investigate its membranolytic action, we observed the activity of AmphiArc2 on single MCF7 cells entrapped in a microfluidic chip. Video recordings showed morphological changes in the cell membrane and leakage of the cytosolic components as early as 30 seconds after initial contact with the peptide in the cells (Fig. 1c, Supplementary Information, Video SV1). After 95 seconds, the dye encapsulated in the cancer cell had leaked out, and the cell membrane showed deformations and blebbing.
After characterizing the anticancer activity of the AmphiArc2 peptide, we tested its cell-type selectivity. We determined its EC50 value against the noncancer HDMEC primary cell line and half-effective hemolytic concentration (HC50) against human erythrocytes (Fig. 1d). Both values were found to be in the same low-micromolar range as the EC50 against cancer cell lines, indicating toxicity of this peptide against noncancer cells.
### Selectivity optimization of a de novo designed ACP
We applied the SME algorithm to improve the selectivity of the AmphiArc2 peptide towards noncancer cells. SME contained a variation (mutation) and a selection operator (Fig. 2a). By variation, a series of offspring was generated from a parent sequence. The fittest offspring of a generation was selected and used as a parent in the next SME iteration. In this study, parents were selected among the offspring that maintained anticancer activity but showed enhanced selectivity for cancer cells (selection operator). The mutations in the sequence variation step were performed according to a normalized Gaussian probability distribution of pairwise amino acid similarity (dij) (Fig. 2b). As a similarity measure, we utilized the Grantham matrix, which takes into account the atom composition, the polarity and the molecular volume of the residues39. The probability of substitution of residue i to residue j decreases with decreasing pairwise amino acid similarity. The degree of similarity of the offspring peptides to the parent sequence (offspring diversity) was controlled via the sigma (σ) parameter (Fig. 2b). A higher sigma value allowed the generation of sequences further away from the parent peptide (Fig. S5, Supplementary Information).
We performed a total of three SME iterations, starting from the AmphiArc2 peptide. In the first iteration, we generated 10 offspring peptides with σ = 0.1 (Fig. 2c). The mutations introduced by this sigma value were conservative amino acid changes that maintained the overall amphipathicity of the peptide. We synthesized and tested all ten offspring peptides of the three SME generations against the MCF7 and A549 cancer cell lines to determine their anticancer activity. For selectivity assessment, we tested their activity against the noncancer HDMEC primary cell line and measured their hemolytic activity on human erythrocytes (Fig. 2d).
The results obtained demonstrate that small conservative amino acid replacements affected the activity and selectivity of these ACPs while conserving their overall amphipathic helical structure in a lipophilic environment. Offspring n.2 (Off2) maintained the low-micromolar activity of the AmphiArc2 peptide against the A549 and MCF7 cancer cells but showed a 12-fold reduction of hemolytic activity against human erythrocytes and a ten-fold reduction of activity against HDMEC cells (Fig. 2d). Therefore, we selected Off2 as the parent for the next SME iteration (Fig. 3), in which ten new peptides were generated (Off2.1 to Off2.10).
The second generation of peptide variation did not achieve meaningful selectivity improvements with respect to HDMEC cells (Fig. S7, Supplementary Information). Five of the offspring peptides (Off2.1, Off2.3, Off2.4, Off2.9, Off2.10) were inactive. This loss of activity correlated with the introduction of a proline residue in the sequence (Fig. S7, Supplementary Information). Prolines affect alpha-helical conformation by introducing helix kinks and breaks40. We corroborated this secondary structure disruption with circular dichroism analysis of Off2.1, Off2.3, Off2.4 and Off2.9 (Fig. S9, Supplementary Information).
In the third SME generation (Off2.2.1 – Off2.2.10), we actively omitted proline residues and reduced the sigma value from 0.1 to 0.06 to explore close analogs of Off2 and Off2.2 (Fig. S8, Supplementary Information). Off2.2.10 showed decreased activity towards the noncancer HDMEC primary cells (Fig. 3d). This increase in selectivity was accompanied by a decreased activity against both the A549 and MCF7 cell lines.
The most active, but nonselective, AmphiArc2 parent peptide and the most cancer-cell selective Off2.2.10 peptide possess several differences and commonalities in their physicochemical properties. Even though both peptides display a hydrophobic arc of 180°, the hydrophobic moment of Off2.2.10 (µH = 0.64) is lower than that of AmphiArc2 (µH = 0.87) (Fig. 3c). The parent peptide bears eight positive charges, while Off2.2.10 contains seven positively ionizable residues caused by the N-terminal K1Q mutation. This moderate reduction of both the hydrophobic moment and the net positive charge improved the peptide selectivity for cancer cells and reduced the risk of killing non-transformed cells. To further explore these sequence features, we analyzed the ratios of the EC50 in the noncancer cells and in the cancer cell lines of all tested peptides. The more selective peptides (higher EC50 ratio) are characterized by moderate hydrophobic moments and charge densities (Supplementary Information, Fig. S10), suggesting a guideline for optimizing the cancer-cell selectivity of ACPs. This observation is in accordance with reports stating that decreasing the hydrophobic moment of helical ACPs reduces both their hemolytic potential and anticancer activity7,8,9.
### NCI-60 cancer cell panel testing
The ACP candidates AmphiArc2 (parent), Off2 and Off2.2.10 were tested on the NCI-60 cancer cell panel41. The three tested peptides inhibited the growth of all the cancer cell lines in the NCI-60 panel at a low micromolar concentration (Table 5, Supplementary Information Table S3). This result corroborated the wide-spectrum effect of the anticancer peptides across a range of cancer types. The activities of both the Off2 and Off2.2.10 peptides on the tested cell lines were significantly lower than the anticancer activity of the AmphiArc2 peptide (p-value = 4.9 × 10−13 and 1.7 × 10−12, respectively, Welch two-sample t-test), suggesting that the initial increase in cancer cell selectivity comes at the cost of an activity loss. No significant anticancer activity difference was found between the Off2 and Off2.2.10 peptides (p-value = 0.66, Welch two-sample t-test), indicating that the additionally improved anticancer selectivity does not affect the average anticancer activities of these two peptides.
## Conclusions
In this study, the combination of a machine learning model and the SME algorithm resulted in ACPs with low-micromolar potency against a wide variety of cancer cells (NCI-60 panel) and selectivity with respect to non-transformed cells (HDMEC) and human erythrocytes. The machine-learning classifier alone was able to identify active peptides but was insufficient to identify cancer cell selective peptides. Virtual screening of computationally designed peptide libraries with the implemented machine-learning classifier led to the discovery of four novel ACPs as the starting point for selectivity optimization by SME. In the first design-synthesize-test cycle, peptide hemolysis was reduced ten-fold, and after three cycles, peptide activity towards noncancer cells was reduced more than 20-fold while retaining anticancer activity compared to the parent peptide (AmphiArc2). The results of this study advocate for the SME method for experiment-guided peptide design and for exploration of the ACP structure-activity landscape. SME is applicable to all kinds of experimental readouts and provides an alternative to more conventional peptide optimization techniques, e.g., alanine scanning. At the same time, the results suggest that additionally increased cancer cell selectivity of membranolytic ACPs might come at the price of reduced peptide potency. This working hypothesis provides a basis for future study.
## Methods
### Machine learning model
Both machine learning models were constructed in Python v2.7 using the Scikit-Learn v0.18 library. For model training, the peptide dataset was split into 2/3 training and 1/3 testing subsets. Random forest classifier: the number of trees (“n_estimators”) was set to 500, and the number of features to be considered by each tree (“max_features”) was set to the square root of all features (“sqrt”). SVM classifier: a linear kernel was employed and the hyperparameter C was optimized by ten-fold cross-validation, in which the model is trained on 90% of the training data and validated on the remaining 10% in ten repetitions of training. The mean of the 10 repetitions (cross-validation MCC score) was used to evaluate the performance of the models. The test scores were obtained with the independent test set.
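The model-selection step described above can be sketched with scikit-learn. This is an illustrative reconstruction rather than the authors’ original script: the descriptor matrix `X`, the labels `y`, the candidate values of C and the random seeds are placeholder assumptions.

```python
# Hedged sketch: a linear-kernel SVM whose C is tuned by 10-fold
# cross-validation on a stratified 2/3 training split, scored with the
# Matthews correlation coefficient. X and y are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import matthews_corrcoef, make_scorer

rng = np.random.default_rng(0)
X = rng.normal(size=(1019, 18))        # stand-in for the 18 selected descriptors
y = rng.integers(0, 2, size=1019)      # stand-in labels: 1 = ACP, 0 = non-ACP

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, stratify=y, random_state=42)

grid = GridSearchCV(
    SVC(kernel="linear", probability=True),
    param_grid={"C": [0.01, 0.1, 1, 10, 100]},   # assumed candidate grid
    scoring=make_scorer(matthews_corrcoef),
    cv=10)
grid.fit(X_train, y_train)

print("cross-validation MCC:", grid.best_score_)
print("test MCC:", matthews_corrcoef(y_test, grid.best_estimator_.predict(X_test)))
```

With a linear kernel, `grid.best_estimator_.coef_` exposes the feature weights that are interpreted in the feature-importance analysis above.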
#### Scoring metrics
The Matthews correlation coefficient (MCC, Eq. 1), accuracy (Eq. 2), precision (Eq. 3) and recall (Eq. 4) were calculated. TP, FP, TN and FN correspond to the number of true positives, false positives, true negatives and false negatives predicted by the model, respectively.
$$MCC=\frac{TP\times TN-FP\times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$$
(1)
$$Accuracy=\frac{TN+TP}{TN+TP+FN+FP}$$
(2)
$$Precision=\frac{TP}{TP+FP}$$
(3)
$$Recall=\frac{TP}{TP+FN}$$
(4)
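Eqs (1)–(4) translate directly into code; the confusion-matrix counts in the example call are made-up values for illustration only.

```python
# Scoring metrics from Eqs (1)-(4), computed from confusion-matrix counts.
from math import sqrt

def classification_metrics(TP, FP, TN, FN):
    mcc = (TP * TN - FP * FN) / sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    accuracy = (TN + TP) / (TN + TP + FN + FP)
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)
    return mcc, accuracy, precision, recall

# Illustrative counts only:
print(classification_metrics(TP=100, FP=10, TN=200, FN=15))
```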
#### Data weighted scoring functions
To appropriately consider the applicability domain of the SVM classifier, the final scoring function for ACPs (ϕACP, Eq. 5) and inactive (negative) peptides (ϕNeg, Eq. 6) considers both the pseudo-probability of the peptide to be an ACP (PACP) as predicted by the SVM model and the similarity of the predicted peptides to the training data (Sim. score). k-means clustering with k = 3 was performed with Python v2.7 and the Scikit-Learn v0.18 library package. The similarity score is calculated as the inverse of the Euclidean distance in descriptor space of the peptides to the three centroids.
$${\varphi }_{ACP}=\frac{{P}_{ACP}+Sim.\,score}{2}$$
(5)
$${\varphi }_{Neg}=\frac{(1-{P}_{ACP})+Sim.\,score}{2}$$
(6)
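A minimal sketch of Eqs (5) and (6) is shown below. The aggregation of the distances to the three centroids and the rescaling of the similarity score to [0, 1] are assumptions of this sketch; the text only states that the score is the inverse of the Euclidean distance to the centroids.

```python
# Hedged sketch of the similarity-weighted scores phi_ACP (Eq. 5) and
# phi_Neg (Eq. 6). X_train: training descriptors, X_new: descriptors of the
# peptides to score, p_acp: SVM pseudo-probabilities of the ACP class.
import numpy as np
from sklearn.cluster import KMeans

def similarity_weighted_scores(X_train, X_new, p_acp, k=3):
    centroids = KMeans(n_clusters=k, random_state=0).fit(X_train).cluster_centers_
    # Euclidean distance of every new peptide to every centroid
    dist = np.linalg.norm(X_new[:, None, :] - centroids[None, :, :], axis=2)
    sim = 1.0 / dist.min(axis=1)                        # inverse distance (nearest centroid assumed)
    sim = (sim - sim.min()) / (sim.max() - sim.min())   # rescale to [0, 1] (assumed)
    phi_acp = (p_acp + sim) / 2.0                       # Eq. (5)
    phi_neg = ((1.0 - p_acp) + sim) / 2.0               # Eq. (6)
    return phi_acp, phi_neg
```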
### Virtual peptide libraries
Three virtual peptide libraries were generated according to three different design principles. For each library, the peptide length was restricted to a range of 11 to 30 amino acids, as peptides able to fold in an alpha-helix are typically inside this range42. Duplicate sequences were eliminated, and the similarity of the sequences was restricted with the CD-HIT36 program to a threshold of 0.8 similarity. A total of 106 peptides were selected from each of the libraries.
• Helical library. The Helical library was generated with the position-dependent amino acid distributions of 62 anuran and hymenopteran alpha-helical ACPs11 in amino acid positions 1–18 (exactly 5 helical turns). For longer peptides, the pattern was repeated. The method to generate this library is included in the modlAMP43 Python package (modlamp.sequences.HelicesACP).
• Amphipathic Arc library. The design principle of the Amphipathic Arc library was amphipathic peptide sequences, which would potentially be alpha-helical with a preference for positively charged amino acids in the polar phase of the helix and varying hydrophobic arcs in the range 100–260°. The method to generate this library was included in the python package modlAMP as the class AmphipathicArc (modlamp.sequences.AmphipathicArc).
• Gradient library. The Gradient library was designed using the same procedure as the Amphipathic Arc library but with an additional hydrophobic gradient in the peptide structure from the N- to the C-terminus. For this, the amino acids in the C-terminal third of the peptide sequence were substituted with hydrophobic amino acids. In the modlAMP package, this was achieved by the method make_H_gradient in the modlamp.sequences.Amphipathic Arc class.
### Simulated molecular evolution
The simulated molecular evolution (SME) algorithm is based on the (1, λ) evolution strategy44 in which λ mutated sequences (offspring) are generated from a parent sequence22,23,25. The offspring was scored according to a fitness function, which was defined as the experimentally determined peptide anticancer activity and selectivity with respect to non-transformed cells. The best offspring were selected as a parent for the following optimization iteration. The amino acid mutations were generated according to an amino acid similarity matrix that has been row-normalized (dij) to allow for a pseudo-probability calculation of the amino acid transitions (Eq. 7). Here, the Grantham amino-acid similarity matrix was utilized39. The amino acids cysteine and methionine were excluded from the mutation matrix to avoid potential peptide cyclization and facilitate peptide synthesis.
$$P\,(i\to j)=exp(-\frac{{d}_{ij}^{2}}{2{\sigma }^{2}})/\sum _{j}\,exp(-\frac{{d}_{ij}^{2}}{2{\sigma }^{2}}).$$
(7)
where σ is a strategy parameter that controls the distance of the offspring sequences to the parent sequence and, thus, the sequence diversity among the offspring. The σ strategy parameter was set to 0.1 for the two initial SME iterations. Sequence diversity was characterized by the Shannon entropy45 (H) of the residue distribution among the offspring (Eq. 8), where pi corresponds to the frequency of amino acid i in a certain sequence position. The Shannon entropy values were normalized to [0, 1]. The simulated molecular evolution strategy and Shannon entropy calculation were programmed with Python v2.7.
$$H=-\sum_{i=1}^{20} p_i \log_2 p_i.$$
(8)
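The variation operator and the diversity measure can be sketched as follows. The 3 × 3 distance matrix is a toy example, not the Grantham values used in the study, and the entropy function uses the conventional signed form so that it is non-negative before normalization.

```python
# Sketch of Eq. (7) (Gaussian mutation probabilities from an amino acid
# distance matrix) and Eq. (8) (Shannon entropy of a residue distribution).
import numpy as np

def mutation_probabilities(d, sigma):
    # P(i -> j) = exp(-d_ij^2 / (2 sigma^2)) / sum_j exp(-d_ij^2 / (2 sigma^2))
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def shannon_entropy(p):
    # H = -sum_i p_i * log2(p_i), restricted to non-zero frequencies
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

d_toy = np.array([[0.0, 0.3, 0.9],     # toy distances, NOT the Grantham matrix
                  [0.3, 0.0, 0.6],
                  [0.9, 0.6, 0.0]])
print(mutation_probabilities(d_toy, sigma=0.1))
print(shannon_entropy([0.5, 0.25, 0.25]))   # 1.5 bits
```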
## References
1. Holohan, C., Van Schaeybroeck, S., Longley, D. B. & Johnston, P. G. Cancer drug resistance: an evolving paradigm. Nat. Rev. Cancer 13, 714–726 (2013).
2. Chatterjee, S., Damle, S. G. & Sharma, A. K. Mechanisms of resistance against cancer therapeutic drugs. Curr. Pharm. Biotechnol. 15, 1105–1112 (2014).
3. Papo, N. & Shai, Y. Host defense peptides as new weapons in cancer treatment. Cell. Mol. Life Sci. 62, 784–790 (2005).
4. Schweizer, F. Cationic amphiphilic peptides with cancer-selective toxicity. Eur. J. Pharmacol. 625, 190–194 (2009).
5. Mader, J. S. & Hoskin, D. W. Cationic antimicrobial peptides as novel cytotoxic agents for cancer treatment. Expert Opin. Investig. Drugs 15, 933–946 (2006).
6. Riedl, S. et al. In search of a novel target — phosphatidylserine exposed by non-apoptotic tumor cells and metastases of malignancies with poor treatment efficacy. Biochim. Biophys. Acta - Biomembr. 1808, 2638–2645 (2011).
7. Harris, F., Dennison, S. R., Singh, J. & Phoenix, D. A. On the selectivity and efficacy of defense peptides with respect to cancer cells. Med. Res. Rev. 33, 190–234 (2013).
8. Huang, Y., Wang, X., Wang, H., Liu, Y. & Chen, Y. Studies on mechanism of action of anticancer peptides by modulation of hydrophobicity within a defined structural framework. Mol. Cancer Ther. 10, 416–426 (2011).
9. Yang, Q.-Z. et al. Design of potent, non-toxic anticancer peptides based on the structure of the antimicrobial peptide, temporin-1CEa. Arch. Pharm. Res. 36, 1302–1310 (2013).
10. Dennison, S. R., Harris, F., Bhatt, T., Singh, J. & Phoenix, D. A. A theoretical analysis of secondary structural characteristics of anticancer peptides. Mol. Cell. Biochem. 333, 129–135 (2010).
11. Gabernet, G., Müller, A. T., Hiss, J. A. & Schneider, G. Membranolytic anticancer peptides. Med. Chem. Commun. 7, 2232–2245 (2016).
12. Lin, Y.-C. et al. Multidimensional design of anticancer peptides. Angew. Chem. Int. Ed. 54, 10370–10374 (2015).
13. Tyagi, A. et al. In silico models for designing and discovering novel anticancer peptides. Sci. Rep. 3, 2984 (2013).
14. Chen, W., Ding, H., Feng, P., Lin, H. & Chou, K. iACP: a sequence-based tool for identifying anticancer peptides. Oncotarget 7, 16895–16909 (2016).
15. Hajisharifi, Z., Piryaiee, M., Mohammad Beigi, M., Behbahani, M. & Mohabatkar, H. Predicting anticancer peptides with Chou’s pseudo amino acid composition and investigating their mutagenicity via Ames test. J. Theor. Biol. 341, 34–40 (2014).
16. Saravanan, V. & Lakshmi, P. T. V. ACPP: A web server for prediction and design of anti-cancer peptides. Int. J. Pept. Res. Ther. 21, 99–106 (2015).
17. Grisoni, F. et al. Designing anticancer peptides by constructive machine learning. ChemMedChem 13, 1300–1302 (2018).
18. Manavalan, B. et al. MLACP: machine-learning-based prediction of anticancer peptides. Oncotarget 8, 77121–77136 (2017).
19. Fjell, C. D. et al. Identification of novel antibacterial peptides by chemoinformatics and machine learning. J. Med. Chem. 52, 2006–2015 (2009).
20. Müller, A. T., Hiss, J. A. & Schneider, G. Recurrent neural network model for constructive peptide design. J. Chem. Inf. Model. 58, 472–479 (2018).
21. Lee, E. Y., Wong, G. C. L. & Ferguson, A. L. Machine learning-enabled discovery and design of membrane-active peptides. Bioorg. Med. Chem., https://doi.org/10.1016/j.bmc.2017.07.012 (2017).
22. Schneider, G. & Wrede, P. The rational design of amino acid sequences by artificial neural networks and simulated molecular evolution: de novo design of an idealized leader peptidase cleavage site. Biophys. J. 66, 335–344 (1994).
23. Schneider, G., Schuchhardt, J. & Wrede, P. Peptide design in machina: development of artificial mitochondrial protein precursor cleavage sites by simulated molecular evolution. Biophys. J. 68, 434–447 (1995).
24. Schneider, G. et al. Peptide design by artificial neural networks and computer-based evolutionary search. Proc. Natl. Acad. Sci. USA 95, 12179–12184 (1998).
25. Hiss, J. A., Stutz, K., Posselt, G., Weßler, S. & Schneider, G. Attractors in sequence space: peptide morphing by directed simulated evolution. Mol. Inf. 34, 709–714 (2015).
26. Stutz, K. et al. Peptide–membrane interaction between targeting and lysis. ACS Chem. Biol. 12, 2254–2259 (2017).
27. Tyagi, A. et al. CancerPPD: a database of anticancer peptides and proteins. Nucleic Acids Res. 43, 837–843 (2015).
28. Berman, H. M. The Protein Data Bank. Nucleic Acids Res. 28, 235–242 (2000).
29. Koch, C. P. et al. Scrutinizing MHC-I binding peptides and their limits of variation. PLoS Comput. Biol. 9, e1003088 (2013).
30. Eisenberg, D., Weiss, R. M. & Terwilliger, T. C. The helical hydrophobic moment: a measure of the amphiphilicity of a helix. Nature 299, 371–374 (1982).
31. Breiman, L. Random forests. Mach. Learn. 45, 5–32 (2001).
32. Cortes, C. & Vapnik, V. Support-vector networks. Mach. Learn. 20, 273–297 (1995).
33. Chen, Y. et al. Comparison of biophysical and biologic properties of alpha-helical enantiomeric antimicrobial peptides. Chem. Biol. Drug Des. 67, 162–173 (2006).
34. Riedl, S., Zweytick, D. & Lohner, K. Membrane-active host defense peptides – Challenges and perspectives for the development of novel anticancer drugs. Chem. Phys. Lipids 164, 766–781 (2011).
35. Schroeter, T. S. et al. Estimating the domain of applicability for machine learning QSAR models: A study on aqueous solubility of drug discovery molecules. J. Comput. Aided Mol. Des. 21, 651–664 (2007).
36. Li, W. & Godzik, A. Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences. Bioinformatics 22, 1658–1659 (2006).
37. Marion, D., Zasloff, M. & Bax, A. A two-dimensional NMR study of the antimicrobial peptide magainin 2. FEBS Lett. 227, 21–26 (1988).
38. Zelezetsky, I. & Tossi, A. Alpha-helical antimicrobial peptides—Using a sequence template to guide structure–activity relationship studies. Biochim. Biophys. Acta Biomembr. 1758, 1436–1449 (2006).
39. Grantham, R. Amino acid difference formula to help explain protein evolution. Science 185, 862–864 (1974).
40. Nilsson, I. et al. Proline-induced disruption of a transmembrane α-helix in its natural environment. J. Mol. Biol. 284, 1165–1175 (1998).
41. Monks, A. et al. Feasibility of a high-flux anticancer drug screen using a diverse panel of cultured human tumor cell lines. J. Natl. Cancer Inst. 83, 757–766 (1991).
42. Manning, M. C., Illangasekare, M. & Woody, R. W. Circular dichroism studies of distorted alpha-helices, twisted beta-sheets, and beta turns. Biophys. Chem. 31, 77–86 (1988).
43. Müller, A. T., Gabernet, G., Hiss, J. A. & Schneider, G. modlAMP: Python for antimicrobial peptides. Bioinformatics, https://doi.org/10.1093/bioinformatics/btx285 (2017).
44. Rechenberg, I. Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (Frommann-Holzboog, Stuttgart, 1973).
45. Asadi, M., Ebrahimi, N. & Soofi, E. S. Shannon entropy measures. In Wiley StatsRef: Statistics Reference Online 1–8, https://doi.org/10.1002/9781118445112.stat07920 (John Wiley & Sons, New York, 2017).
## Acknowledgements
The authors thank Sarah Haller for technical support and Prof. Cornelia Halin and Prof. Stephanie Krämer for the use of the cell culture facilities. The Developmental Therapeutics Program of the National Cancer Institute and Dr. John A. Beutler kindly performed the NCI-60 cancer cell panel tests. We thank Dr. Francesca Grisoni and Alexander L. Button for constructive input and discussions. This work was financially supported by the Swiss National Science Foundation (Grant No. 2000021_157190 to G.S. and J.A.H.).
## Author information
G.G., D.G., A.T.M. and C.S.N. performed the peptide syntheses and activity assays. G.G. and L.A. performed the microfluidics assay. J.A.H., P.S.D. and G.S. designed and supervised the study. G.G., A.T.M. and G.S. programmed the software. All authors analyzed the data and contributed to the manuscript. G.G. and G.S. wrote the manuscript.
Correspondence to Gisbert Schneider.
## Ethics declarations
### Competing Interests
G.S. declares a potential financial conflict of interest in his role as life-science industry consultant and cofounder of inSili.com GmbH, Zurich. No further competing interests are declared.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
https://www.nag.com/numeric/nl/nagdoc_latest/clhtml/g05/g05intro.html
# NAG CL Interface: G05 (Rand) Random Number Generators
## 1 Scope of the Chapter
This chapter is concerned with the generation of sequences of independent pseudorandom and quasi-random numbers from various distributions and models.
## 2 Background to the Problems
### 2.1 Pseudorandom Numbers
A sequence of pseudorandom numbers is a sequence of numbers generated in some systematic way such that they are independent and statistically indistinguishable from a truly random sequence. A pseudorandom number generator (PRNG) is a mathematical algorithm that, given an initial state, produces a sequence of pseudorandom numbers. A PRNG has several advantages over a true random number generator in that the generated sequence is repeatable, has known mathematical properties and can be implemented without needing any specialist hardware. Many books on statistics and computer science have good introductions to PRNGs, for example Knuth (1981) or Banks (1998).
PRNGs can be split into base generators, and distributional generators. Within the context of this document a base generator is defined as a PRNG that produces a sequence (or stream) of variates (or values) uniformly distributed over the interval $\left(0,1\right)$. Depending on the algorithm being considered, this interval may be open, closed or half-closed. A distribution generator is a function that takes variates generated from a base generator and transforms them into variates from a specified distribution, for example a uniform, Gaussian (Normal) or gamma distribution.
The period (or cycle length) of a base generator is defined as the maximum number of values that can be generated before the sequence starts to repeat. The initial state of the base generator is often called the seed.
There are six base generators currently available in the NAG Library: a basic linear congruential generator (LCG), referred to as the NAG basic generator (see Knuth (1981)); two sets of Wichmann–Hill generators (see Maclaren (1989) and Wichmann and Hill (2006)); the Mersenne Twister (see Matsumoto and Nishimura (1998)); the ACORN generator (see Wikramaratna (1989)); and the L'Ecuyer generator (see L'Ecuyer and Simard (2002)).
#### 2.1.1 NAG Basic Generator
The NAG basic generator is a linear congruential generator (LCG) and, like all linear congruential generators, has the form:
$$x_i = a_1 x_{i-1} \bmod m_1 , \qquad u_i = \frac{x_i}{m_1} ,$$
where the $u_i$, for $i=1,2,\dots$, form the required sequence.
The NAG basic generator uses ${a}_{1}={13}^{13}$ and ${m}_{1}={2}^{59}$, which gives a period of approximately ${2}^{57}$.
This generator has been part of the NAG Library since Mark 1 and as such has been widely used. It suffers from no known problems, other than those due to the lattice structure inherent in all linear congruential generators, and, even though the period is relatively short compared to many of the newer generators, it is sufficiently large for many practical problems.
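The recurrence is easy to illustrate; the seed below is an arbitrary odd value chosen for the example and is not the NAG Library's default, and Python's arbitrary-precision integers remove the overflow concerns a C implementation would have.

```python
# Illustrative multiplicative congruential generator with the constants
# quoted above: a1 = 13^13, m1 = 2^59.
A1 = 13 ** 13
M1 = 2 ** 59

def nag_style_lcg(seed, n):
    """Yield n variates in (0, 1) from x_i = a1 * x_{i-1} mod m1, u_i = x_i / m1."""
    x = seed
    for _ in range(n):
        x = (A1 * x) % M1
        yield x / M1

print(list(nag_style_lcg(seed=123456789, n=3)))
```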
The performance of the NAG basic generator has been analysed by the Spectral Test, see Section 3.3.4 of Knuth (1981), yielding the following results in the notation of Knuth (1981).
| $n$ | $\nu_n$ | Upper bound for $\nu_n$ |
| --- | --- | --- |
| $2$ | $3.44\times 10^{8}$ | $4.08\times 10^{8}$ |
| $3$ | $4.29\times 10^{5}$ | $5.88\times 10^{5}$ |
| $4$ | $1.72\times 10^{4}$ | $2.32\times 10^{4}$ |
| $5$ | $1.92\times 10^{3}$ | $3.33\times 10^{3}$ |
| $6$ | $593$ | $939$ |
| $7$ | $198$ | $380$ |
| $8$ | $108$ | $197$ |
| $9$ | $67$ | $120$ |
The right-hand column gives an upper bound for the values of ${\nu }_{n}$ attainable by any multiplicative congruential generator working modulo ${2}^{59}$.
An informal interpretation of the quantities ${\nu }_{n}$ is that consecutive $n$-tuples are statistically uncorrelated to an accuracy of $1/{\nu }_{n}$. This is a theoretical result; in practice the degree of randomness is usually much greater than the above figures might support. More details are given in Knuth (1981), and in the references cited therein.
Note that the achievable accuracy drops rapidly as the number of dimensions increases. This is a property of all multiplicative congruential generators and is the reason why very long periods are needed even for samples of only a few random numbers.
#### 2.1.2 Wichmann–Hill I Generator
This series of Wichmann–Hill base generators (see Maclaren (1989)) uses a combination of four linear congruential generators and has the form:
$$w_i = a_1 w_{i-1} \bmod m_1 , \quad x_i = a_2 x_{i-1} \bmod m_2 , \quad y_i = a_3 y_{i-1} \bmod m_3 , \quad z_i = a_4 z_{i-1} \bmod m_4 ,$$
$$u_i = \left( \frac{w_i}{m_1} + \frac{x_i}{m_2} + \frac{y_i}{m_3} + \frac{z_i}{m_4} \right) \bmod 1 ,$$
(1)
where the ${u}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots$, form the required sequence. The NAG Library implementation includes 273 sets of parameters, ${a}_{\mathit{j}},{m}_{\mathit{j}}$, for $\mathit{j}=1,2,3,4$, to choose from.
The constants ${a}_{i}$ are in the range 112 to 127 and the constants ${m}_{j}$ are prime numbers in the range $16718909$ to $16776971$, which are close to ${2}^{24}=16777216$. These constants have been chosen so that each of the resulting 273 generators are essentially independent, all calculations can be carried out in 32-bit integer arithmetic and the generators give good results with the spectral test, see Knuth (1981) and Maclaren (1989). The period of each of these generators would be at least ${2}^{92}$ if it were not for common factors between $\left({m}_{1}-1\right)$, $\left({m}_{2}-1\right)$, $\left({m}_{3}-1\right)$ and $\left({m}_{4}-1\right)$. However, each generator should still have a period of at least ${2}^{80}$. Further discussion of the properties of these generators is given in Maclaren (1989).
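The combination mechanism itself is easy to sketch, assuming the four fractional streams are summed modulo 1 as in (1). The multipliers and moduli below are placeholders for illustration only; they are not one of the 273 parameter sets shipped with the NAG Library.

```python
# Sketch of a four-component Wichmann-Hill-style combined generator.
def wichmann_hill_combined(seeds, a, m, n):
    """u_i = (w_i/m_1 + x_i/m_2 + y_i/m_3 + z_i/m_4) mod 1."""
    s = list(seeds)
    for _ in range(n):
        for j in range(4):
            s[j] = (a[j] * s[j]) % m[j]
        yield sum(s[j] / m[j] for j in range(4)) % 1.0

a = [112, 115, 120, 127]                      # placeholder multipliers
m = [16718909, 16719413, 16766623, 16776971]  # placeholder moduli near 2^24
print(list(wichmann_hill_combined(seeds=[1, 2, 3, 4], a=a, m=m, n=3)))
```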
#### 2.1.3 Wichmann–Hill II Generator
This Wichmann–Hill base generator (see Wichmann and Hill (2006)) is of the same form as that described in Section 2.1.2, i.e., a combination of four linear congruential generators. In this case ${a}_{1}=11600$, ${m}_{1}=2147483579$, ${a}_{2}=47003$, ${m}_{2}=2147483543$, ${a}_{3}=23000$, ${m}_{3}=2147483423$, ${a}_{4}=33000$, ${m}_{4}=2147483123$.
Unlike in the original Wichmann–Hill generator, these values are too large to carry out the calculations detailed in (1) using 32-bit integer arithmetic. However, if $m_1$ is factorized as $m_1 = a_1 q_1 + r_1$, with $q_1 = \lfloor m_1 / a_1 \rfloor$ and $r_1 = m_1 \bmod a_1$, then setting
$$W_i = a_1 \left( w_{i-1} \bmod q_1 \right) - r_1 \left\lfloor w_{i-1} / q_1 \right\rfloor$$
gives
$$w_i = \begin{cases} W_i & \text{if } W_i \ge 0 , \\ 2147483579 + W_i & \text{otherwise,} \end{cases}$$
and ${W}_{i}$ can be calculated in 32-bit integer arithmetic. Similar expressions exist for ${x}_{i}$, ${y}_{i}$ and ${z}_{i}$. The period of this generator is approximately ${2}^{121}$.
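The point of the factorization is that every intermediate product stays below $2^{31}$. A sketch for the first component is shown below; in Python the direct product cannot overflow anyway, which makes it easy to check the identity.

```python
# Overflow-avoiding update for the first Wichmann-Hill II component,
# assuming the factorization m1 = a1*q1 + r1 described above.
A1, M1 = 11600, 2147483579
Q1, R1 = M1 // A1, M1 % A1          # q1 = 185127, r1 = 10379

def step(w):
    """Return a1 * w mod m1 without forming the full product a1 * w."""
    W = A1 * (w % Q1) - R1 * (w // Q1)
    return W if W >= 0 else W + M1

w = 123456789
assert step(w) == (A1 * w) % M1      # consistency check against direct arithmetic
print(step(w))
```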
Further details of implementing this algorithm and its properties are given in Wichmann and Hill (2006). This paper also gives some useful guidelines on testing PRNGs.
#### 2.1.4 Mersenne Twister Generator
The Mersenne Twister (see Matsumoto and Nishimura (1998)) is a twisted generalized feedback shift register generator. The algorithm underlying the Mersenne Twister is as follows:
1. (i) Set some arbitrary initial values $x_1, x_2, \dots, x_r$, each consisting of $w$ bits.
2. (ii) Letting
$$A = \begin{pmatrix} 0 & I_{w-1} \\ a_w & a_{w-1} \cdots a_1 \end{pmatrix} ,$$
where $I_{w-1}$ is the $(w-1) \times (w-1)$ identity matrix and each of the $a_i$, $i = 1$ to $w$, take a value of either $0$ or $1$ (i.e., they can be represented as bits), define
$$x_{i+r} = \left( x_{i+s} \oplus \left( x_i^{(w:(l+1))} \,|\, x_{i+1}^{(l:1)} \right) A \right) ,$$
where $x_i^{(w:(l+1))} \,|\, x_{i+1}^{(l:1)}$ indicates the concatenation of the most significant (upper) $w-l$ bits of $x_i$ and the least significant (lower) $l$ bits of $x_{i+1}$.
3. (iii) Perform the following operations sequentially:
$$\begin{aligned} z &= x_{i+r} \oplus \left( x_{i+r} \gg t_1 \right) , \\ z &= z \oplus \left( \left( z \ll t_2 \right) \ \mathrm{AND}\ m_1 \right) , \\ z &= z \oplus \left( \left( z \ll t_3 \right) \ \mathrm{AND}\ m_2 \right) , \\ z &= z \oplus \left( z \gg t_4 \right) , \\ u_{i+r} &= z / \left( 2^w - 1 \right) , \end{aligned}$$
where ${t}_{1}$, ${t}_{2}$, ${t}_{3}$ and ${t}_{4}$ are integers and ${m}_{1}$ and ${m}_{2}$ are bit-masks and ‘$\gg t$’ and ‘$\ll t$’ represent a $t$ bit shift right and left respectively, $\oplus$ is bit-wise exclusively or (xor) operation and ‘AND’ is a bit-wise and operation.
The ${u}_{\mathit{i}+r}$, for $\mathit{i}=1,2,\dots$, form the required sequence. The supplied implementation of the Mersenne Twister uses the following values for the algorithmic constants:
$$w = 32 , \quad a = \text{0x9908b0df} , \quad l = 31 , \quad r = 624 , \quad s = 397 , \quad t_1 = 11 , \quad t_2 = 7 , \quad t_3 = 15 , \quad t_4 = 18 , \quad m_1 = \text{0x9d2c5680} , \quad m_2 = \text{0xefc60000} ,$$
where the notation 0xDD $\dots$ indicates the bit pattern of the integer whose hexadecimal representation is DD $\dots$.
This algorithm has a period length of approximately ${2}^{19,937}-1$ and has been shown to be uniformly distributed in 623 dimensions (see Matsumoto and Nishimura (1998)).
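The output transformation in step (iii) can be written out directly with the constants listed above. This sketch covers only the tempering of a single 32-bit state word; the twist in step (ii), the state bookkeeping and the seeding are omitted, and the input word is a made-up value.

```python
# Tempering step of the Mersenne Twister output transformation (step (iii)).
W_MASK = 0xFFFFFFFF                      # keep results to w = 32 bits
T1, T2, T3, T4 = 11, 7, 15, 18
M1_MASK, M2_MASK = 0x9D2C5680, 0xEFC60000

def temper(x):
    z = x ^ (x >> T1)
    z = z ^ ((z << T2) & M1_MASK)
    z = z ^ ((z << T3) & M2_MASK)
    z = z ^ (z >> T4)
    return (z & W_MASK) / (2 ** 32 - 1)  # u = z / (2^w - 1)

print(temper(0x12345678))                # illustrative state word
```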
#### 2.1.5 ACORN Generator
The ACORN generator is a special case of a multiple recursive generator (see Wikramaratna (1989) and Wikramaratna (2007)). The algorithm underlying ACORN is as follows:
1. (i) Choose an integer value $k \ge 1$.
2. (ii) Choose an integer value $M$, and an integer seed $Y_0^{(0)}$, such that $0 < Y_0^{(0)} < M$ and $Y_0^{(0)}$ and $M$ are relatively prime.
3. (iii) Choose an arbitrary set of $k$ initial integer values, $Y_0^{(1)}, Y_0^{(2)}, \dots, Y_0^{(k)}$, such that $0 \le Y_0^{(m)} < M$, for all $m = 1, 2, \dots, k$.
4. (iv) Perform the following sequentially:
$$Y_i^{(m)} = \left( Y_i^{(m-1)} + Y_{i-1}^{(m)} \right) \bmod M ,$$
for $m = 1, 2, \dots, k$, where $Y_i^{(0)} = Y_{i-1}^{(0)}$.
5. (v) Set $u_i = Y_i^{(k)} / M$.
The ${u}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots$, then form a pseudorandom sequence, with ${u}_{i}\in \left[0,1\right)$, for all $i$.
Although you can choose any value for $k$, $M$, ${Y}_{0}^{\left(0\right)}$ and the ${Y}_{0}^{\left(m\right)}$, within the constraints mentioned in (i) to (iii) above, it is recommended that $k\ge 10$, $M$ is chosen to be a large power of two with $M\ge {2}^{60}$ and ${Y}_{0}^{\left(0\right)}$ is chosen to be odd.
The period of the ACORN generator, with the modulus $M$ equal to a power of two, and an odd value for ${Y}_{0}^{\left(0\right)}$ has been shown to be an integer multiple of $M$ (see Wikramaratna (1992)). Therefore, increasing $M$ will give a series with a longer period.
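A direct transcription of the additive recursion in step (iv) is shown below; the order k, modulus M, odd seed and zero initial values are arbitrary choices that satisfy the constraints in (i)–(iii), not recommended production settings.

```python
# Sketch of an order-k ACORN stream: Y_i^(m) = (Y_i^(m-1) + Y_{i-1}^(m)) mod M.
def acorn(k, M, seed, init, n):
    Y = [seed] + list(init)            # Y[0] stays fixed at the seed Y_0^(0)
    for _ in range(n):
        for m in range(1, k + 1):
            Y[m] = (Y[m - 1] + Y[m]) % M
        yield Y[k] / M

M = 2 ** 60
print(list(acorn(k=10, M=M, seed=123456789, init=[0] * 10, n=3)))
```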
#### 2.1.6 L'Ecuyer MRG32k3a Combined Recursive Generator
The base generator L'Ecuyer MRG32k3a (see L'Ecuyer and Simard (2002)) combines two multiple recursive generators:
$$x_i = \left( a_{11} x_{i-1} + a_{12} x_{i-2} + a_{13} x_{i-3} \right) \bmod m_1 ,$$
$$y_i = \left( a_{21} y_{i-1} + a_{22} y_{i-2} + a_{23} y_{i-3} \right) \bmod m_2 ,$$
$$z_i = \left( x_i - y_i \right) \bmod m_1 , \qquad u_i = \left( z_i + 1 \right) / d ,$$
where ${a}_{11}=0$, ${a}_{12}=1403580$, ${a}_{13}=-810728$, ${m}_{1}={2}^{32}-209$, ${a}_{21}=527612$, ${a}_{22}=0$, ${a}_{23}=-1370589$, ${m}_{2}={2}^{32}-22853$, and ${u}_{i},i=1,2,\dots$ form the required sequence. If $d={m}_{1}$ then ${u}_{i}\in \left(0,1\right]$ else if $d={m}_{1}+1$ then ${u}_{i}\in \left(0,1\right)$. Combining the two multiple recursive generators (MRG) results in sequences with better statistical properties in high dimensions and longer periods compared with those generated from a single MRG. The combined generator described above has a period length of approximately ${2}^{191}$.
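A sketch of the two recursions with the constants above is given below. The seeds are arbitrary non-zero values, and the output step $u_i = (z_i + 1)/d$ with $d = m_1 + 1$ is one common convention consistent with the ranges quoted in the text, not necessarily the exact NAG formulation.

```python
# Sketch of the MRG32k3a combined recursive generator.
M1 = 2 ** 32 - 209
M2 = 2 ** 32 - 22853
A12, A13 = 1403580, -810728        # first component (a11 = 0)
A21, A23 = 527612, -1370589        # second component (a22 = 0)

def mrg32k3a(x, y, n):
    """x, y: length-3 histories [s_{i-3}, s_{i-2}, s_{i-1}] for each component."""
    x, y = list(x), list(y)
    for _ in range(n):
        xn = (A12 * x[1] + A13 * x[0]) % M1
        yn = (A21 * y[2] + A23 * y[0]) % M2
        x = [x[1], x[2], xn]
        y = [y[1], y[2], yn]
        z = (xn - yn) % M1
        yield (z + 1) / (M1 + 1)       # u in (0, 1); scaling is an assumption

print(list(mrg32k3a(x=[12345] * 3, y=[12345] * 3, n=3)))
```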
### 2.2 Quasi-random Numbers
Low discrepancy (quasi-random) sequences are used in numerical integration, simulation and optimization. Like pseudorandom numbers they are uniformly distributed but they are not statistically independent, rather they are designed to give more even distribution in multidimensional space (uniformity). Therefore, they are often more efficient than pseudorandom numbers in multidimensional Monte Carlo methods.
The quasi-random number generators implemented in this chapter generate a set of points ${x}^{1},{x}^{2},\dots ,{x}^{N}$ with high uniformity in the $S$-dimensional unit cube ${I}^{S}={\left[0,1\right]}^{S}$. One measure of the uniformity is the discrepancy which is defined as follows:
• Given a set of points ${x}^{1},{x}^{2},\dots ,{x}^{N}\in {I}^{S}$ and a subset $G\subset {I}^{S}$, define the counting function ${S}_{N}\left(G\right)$ as the number of points ${x}^{i}\in G$. For each $x=\left({x}_{1},{x}_{2},\dots ,{x}_{S}\right)\in {I}^{S}$, let ${G}_{x}$ be the rectangular $S$-dimensional region
$G_x = \left[ 0, x_1 \right) \times \left[ 0, x_2 \right) \times \cdots \times \left[ 0, x_S \right)$
with volume ${x}_{1}{x}_{2}\cdots {x}_{S}$. Then the discrepancy of the points ${x}^{1},{x}^{2},\dots ,{x}^{N}$ is
$D_N^* \left( x^1, x^2, \dots, x^N \right) = \sup_{x \in I^S} \left| S_N \left( G_x \right) - N \prod_{k=1}^{S} x_k \right| .$
The discrepancy of the first $N$ terms of such a sequence has the form
$D_N^* \left( x^1, x^2, \dots, x^N \right) \le C_S \left( \log N \right)^S + O \left( \left( \log N \right)^{S-1} \right) \quad \text{for all } N \ge 2 .$
The principal aim in the construction of low-discrepancy sequences is to find sequences of points in ${I}^{S}$ with a bound of this form where the constant ${C}_{S}$ is as small as possible.
Three types of low-discrepancy sequences are supplied in this Library; these are due to Sobol, Faure and Niederreiter. Two sets of Sobol sequences are supplied: the first is based on the work of Joe and Kuo (2008) and the second on the work of Bratley and Fox (1988). More information on quasi-random number generation and the Sobol, Faure and Niederreiter sequences in particular can be found in Bratley and Fox (1988) and Fox (1986).
The efficiency of a simulation exercise may often be increased by the use of variance reduction methods (see Morgan (1984)). It is also worth considering whether a simulation is the best approach to solving the problem. For example, low-dimensional integrals are usually more efficiently calculated by functions in Chapter D01 rather than by Monte Carlo integration.
### 2.3Scrambled Quasi-random Numbers
Scrambled quasi-random sequences are an extension of standard quasi-random sequences that attempt to eliminate the bias inherent in a quasi-random sequence whilst retaining the low-discrepancy properties. The use of a scrambled sequence allows error estimation of Monte Carlo results by performing a number of iterates and computing the variance of the results.
This implementation of scrambled quasi-random sequences is based on TOMS algorithm 823 and details can be found in the accompanying paper, Hong and Hickernell (2003). Three methods of scrambling are supplied: the first is a restricted form of Owen's scrambling (Owen (1995)), the second is based on the method of Faure and Tezuka (2000), and the last combines the first two.
Scrambled versions of both Sobol sequences and the Niederreiter sequence can be obtained.
### 2.4Non-uniform Random Numbers
Random numbers from other distributions may be obtained from the uniform random numbers by the use of transformations and rejection techniques, and for discrete distributions, by table based methods.
1. (a)Transformation Methods
For a continuous random variable, if the cumulative distribution function (CDF) is $F\left(x\right)$ then for a uniform $\left(0,1\right)$ random variate $u$, $y={F}^{-1}\left(u\right)$ will have CDF $F\left(x\right)$. This method is only efficient in a few simple cases such as the exponential distribution with mean $\mu$, in which case ${F}^{-1}\left(u\right)=-\mu \mathrm{log}\left(u\right)$. Other transformations are based on the joint distribution of several random variables. In the bivariate case, if $v$ and $w$ are random variates there may be a function $g$ such that $y=g\left(v,w\right)$ has the required distribution; for example, the Student's $t$-distribution with $n$ degrees of freedom in which $v$ has a Normal distribution, $w$ has a gamma distribution and $g\left(v,w\right)=v\sqrt{n/w}$.
2. (b)Rejection Methods
Rejection techniques are based on the ability to easily generate random numbers from a distribution (called the envelope) similar to the distribution required. The value from the envelope distribution is then accepted as a random number from the required distribution with a certain probability; otherwise, it is rejected and a new number is generated from the envelope distribution.
3. (c)Table Search Methods
For discrete distributions, if the cumulative probabilities, ${P}_{i}=\mathrm{Prob}\left(x\le i\right)$, are stored in a table then, given $u$ from a uniform $\left(0,1\right)$ distribution, the table is searched for $i$ such that ${P}_{i-1}<u\le {P}_{i}$. The returned value $i$ will have the required distribution. The table searching can be made faster by means of an index, see Ripley (1987). The effort required to set up the table and its index may be considerable, but the methods are very efficient when many values are needed from the same distribution. A short illustrative sketch of the transformation and table search methods is given after this list.
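The sketch below illustrates methods (a) and (c) for the exponential distribution and a small discrete distribution; u01() is a hypothetical stand-in for a call to a uniform $\left(0,1\right)$ generator, and the cumulative probabilities are arbitrary example values.
```c
/* Illustrative sketches of the transformation and table search methods. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* crude stand-in for a uniform (0,1) generator */
static double u01(void)
{
    return ((double)rand() + 0.5) / ((double)RAND_MAX + 1.0);
}

/* (a) transformation method: exponential variate with mean mu via the
   inverse CDF, F^{-1}(u) = -mu * log(u)                                 */
static double exponential_variate(double mu)
{
    return -mu * log(u01());
}

/* (c) table search method: returns the smallest i (1-based) such that
   P[i-2] < u <= P[i-1], where P holds the cumulative probabilities
   P_1 <= P_2 <= ... <= P_n = 1                                          */
static int table_search(const double *P, int n, double u)
{
    int i = 0;
    while (i < n - 1 && u > P[i])   /* linear search; an index or binary */
        i++;                        /* search is faster for large n      */
    return i + 1;
}

int main(void)
{
    double P[4] = { 0.1, 0.4, 0.8, 1.0 };   /* cumulative probabilities  */
    srand(42);
    printf("exponential(2.0) = %f\n", exponential_variate(2.0));
    printf("discrete value   = %d\n", table_search(P, 4, u01()));
    return 0;
}
```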
### 2.5Copulas
A copula is a function that links the univariate marginal distributions with their multivariate distribution. Sklar's theorem (see Sklar (1973)) states that if $f$ is an $m$-dimensional distribution function with continuous margins ${f}_{1},{f}_{2},\dots ,{f}_{m}$, then $f$ has a unique copula representation, $c$, such that
$f (x1,x2,…,xm) = c (f1(x1),f2(x2),…,fm(xm))$
The copula, $c$, is a multivariate uniform distribution whose dependence structure is defined by the dependence structure of the multivariate distribution $f$, with
$c (u1,u2,…,um) = f (f1-1(u1),f2-1(u2),…,fm-1(um))$
where ${u}_{i}\in \left[0,1\right]$. This relationship can be used to simulate variates from distributions defined by the dependence structure of one distribution and each of the marginal distributions given by another. For additional information see Nelsen (1998) or Boye (Unpublished manuscript) and the references therein.
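As an illustration of this relationship, the hypothetical C sketch below simulates pairs whose dependence structure is given by a bivariate Gaussian copula with correlation $\rho$ and whose marginal distributions are exponential; the RNG helpers, the value of $\rho$ and the marginal means are arbitrary choices, and the Normal CDF is evaluated via erfc from the C99 maths library.
```c
/* Illustrative bivariate Gaussian copula sketch with exponential margins. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double u01(void)                /* crude uniform (0,1) stand-in    */
{
    return ((double)rand() + 0.5) / ((double)RAND_MAX + 1.0);
}

static double std_normal(void)         /* Box-Muller transform            */
{
    const double PI = 3.14159265358979323846;
    return sqrt(-2.0 * log(u01())) * cos(2.0 * PI * u01());
}

static double Phi(double x)            /* standard Normal CDF             */
{
    return 0.5 * erfc(-x / sqrt(2.0));
}

int main(void)
{
    double rho = 0.7, mu1 = 1.0, mu2 = 3.0;     /* arbitrary choices      */
    srand(7);
    for (int i = 0; i < 5; i++) {
        /* correlated standard Normals via a 2x2 Cholesky factor          */
        double z1 = std_normal();
        double z2 = rho * z1 + sqrt(1.0 - rho * rho) * std_normal();
        /* copula step: map to uniform marginals, u = Phi(z)              */
        double u1 = Phi(z1), u2 = Phi(z2);
        /* apply the inverse CDFs of the required exponential marginals   */
        double x1 = -mu1 * log(1.0 - u1);
        double x2 = -mu2 * log(1.0 - u2);
        printf("%f  %f\n", x1, x2);
    }
    return 0;
}
```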
### 2.6Brownian Bridge
#### 2.6.1Brownian Bridge Process
Fix two times ${t}_{0}<T$ and let $W={\left({W}_{t}\right)}_{0\le t\le T-{t}_{0}}$ be a standard $d$-dimensional Wiener process on the interval $\left[0,T-{t}_{0}\right]$. Recall that the terms Wiener process and Brownian motion are often used interchangeably.
A standard $d$-dimensional Brownian bridge $B={\left({B}_{t}\right)}_{{t}_{0}\le t\le T}$ on $\left[{t}_{0},T\right]$ is defined (see Revuz and Yor (1999)) as
$B_t = W_{t - t_0} - \frac{t - t_0}{T - t_0} W_{T - t_0} .$
The process is continuous, starts at zero at time ${t}_{0}$ and ends at zero at time $T$. It is Gaussian, has zero mean and has a covariance structure given by
$\mathbb{E} \left( B_s B_t^{\mathrm{T}} \right) = \frac{ \left( s - t_0 \right) \left( T - t \right) }{ T - t_0 } \, I_d$
for any $s\le t$ in $\left[{t}_{0},T\right]$ where ${I}_{d}$ is the $d$-dimensional identity matrix. The Brownian bridge is often called a non-free or ‘pinned’ Wiener process since it is forced to be $0$ at time $T$, but is otherwise very similar to a standard Wiener process.
We can generalize this construction as follows. Fix points $x,w\in {ℝ}^{d}$, let $\Sigma$ be a $d×d$ covariance matrix and choose any $d×d$ matrix $C$ such that $C{C}^{\mathrm{T}}=\Sigma$. The generalized $d$-dimensional Brownian bridge $X={\left({X}_{t}\right)}_{{t}_{0}\le t\le T}$ is defined by setting
$X_t = \frac{ \left( t - t_0 \right) w + \left( T - t \right) x }{ T - t_0 } + C B_t = \frac{ \left( t - t_0 \right) w + \left( T - t \right) x }{ T - t_0 } + C W_{t - t_0} - \frac{ t - t_0 }{ T - t_0 } C W_{T - t_0}$
for all $t\in \left[{t}_{0},T\right]$. The process $X$ is continuous, starts at $x$ at time ${t}_{0}$ and ends at $w$ at time $T$. It has mean $\left(\left(t-{t}_{0}\right)w+\left(T-t\right)x\right)/\left(T-{t}_{0}\right)$ and covariance structure
$\mathbb{E} \left[ \left( X_s - \mathbb{E} X_s \right) \left( X_t - \mathbb{E} X_t \right)^{\mathrm{T}} \right] = \mathbb{E} \left( C B_s B_t^{\mathrm{T}} C^{\mathrm{T}} \right) = \frac{ \left( s - t_0 \right) \left( T - t \right) }{ T - t_0 } \, \Sigma$
for all $s\le t$ in $\left[{t}_{0},T\right]$. This is a non-free Wiener process since it is forced to be equal to $w$ at time $T$. However if we set $w=x+C{W}_{T-{t}_{0}}$, then $X$ simplifies to
$X_t = x + C W_{t - t_0}$
for all $t\in \left[{t}_{0},T\right]$ which is nothing other than a $d$-dimensional Wiener process with covariance given by $\Sigma$.
Figure 1 shows two sample paths for a two-dimensional free Wiener process $X={\left({X}_{t}^{1},{X}_{t}^{2}\right)}_{0\le t\le 2}$. The correlation coefficient between the one-dimensional processes ${X}^{1}$ and ${X}^{2}$ at any time is $\rho =0.80$. Note that the red and green paths in each figure are uncorrelated, however it is fairly evident that the two red paths are correlated, and that the two green paths are correlated (when one path increases so does the other, and vice versa).
Figure 2 shows two sample paths for a two-dimensional non-free Wiener process. The process starts at $\left(0,0\right)$ and ends at $\left(1,-1\right)$. The correlation coefficient between the one-dimensional processes is again $\rho =0.80$. The red and green paths in each figure are uncorrelated, while the two red paths tend to increase and decrease together, as do the two green paths. Both Figure 1 and Figure 2 were constructed using g05xbc.
#### 2.6.2Brownian Bridge Algorithm
The ideas above can also be used to construct sample paths of a free or non-free Wiener process (recall that a non-free Wiener process is the Brownian bridge process outlined above). Fix two times ${t}_{0}<T$ and let ${\left({t}_{i}\right)}_{1\le i\le N}$ be any set of time points satisfying ${t}_{0}<{t}_{1}<{t}_{2}<\cdots <{t}_{N}<T$. Let ${\left({X}_{{t}_{i}}\right)}_{1\le i\le N}$ denote a $d$-dimensional (free or non-free) Wiener sample path at these times. These values can be generated by the so-called Brownian bridge algorithm (see Glasserman (2004)) which works as follows. From any two known points ${X}_{{t}_{i}}$ at time ${t}_{i}$ and ${X}_{{t}_{k}}$ at time ${t}_{k}$ with ${t}_{i}<{t}_{k}$, a new point ${X}_{{t}_{j}}$ can be interpolated at any time ${t}_{j}\in \left({t}_{i},{t}_{k}\right)$ by setting
$X_{t_j} = \frac{ X_{t_i} \left( t_k - t_j \right) + X_{t_k} \left( t_j - t_i \right) }{ t_k - t_i } + C Z \sqrt{ \frac{ \left( t_k - t_j \right) \left( t_j - t_i \right) }{ t_k - t_i } }$ (2)
where $Z$ is a $d$-dimensional standard Normal random variable and $C$ is any $d×d$ matrix such that $C{C}^{\mathrm{T}}$ is the desired covariance structure for the (free or non-free) Wiener process $X$. Clearly this algorithm is iterative in nature. All that is needed to complete the specification is to fix the start point ${X}_{{t}_{0}}$ and end point ${X}_{T}$, and to specify how successive interpolation times ${t}_{j}$ are chosen. For $X$ to behave like a usual (free) Wiener process we should set ${X}_{{t}_{0}}$ equal to some value $x\in {ℝ}^{d}$ and then set ${X}_{T}=x+C\sqrt{T-{t}_{0}}Z$ where $Z$ is any $d$-dimensional standard Normal random variable. However when it comes to deciding how the successive interpolation times ${t}_{j}$ should be chosen, there is virtually no restriction. Any method of choosing which ${t}_{j}\in \left({t}_{i},{t}_{k}\right)$ to interpolate next is equally valid, provided ${t}_{i}$ is the nearest known point to the left of ${t}_{j}$ and ${t}_{k}$ is the nearest known point to the right of ${t}_{j}$. In other words, the interpolation interval $\left({t}_{i},{t}_{k}\right)$ must not contain any other known points, otherwise the covariance structure of the process will be incorrect.
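A one-dimensional sketch of a single interpolation step (2), with $d=1$ and $C=\sigma$, might look as follows; the Box–Muller helper and the choice of interpolation times in main are illustrative only.
```c
/* Illustrative 1-d Brownian bridge interpolation step, equation (2).    */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double u01(void)
{
    return ((double)rand() + 0.5) / ((double)RAND_MAX + 1.0);
}

static double std_normal(void)                     /* Box-Muller          */
{
    const double PI = 3.14159265358979323846;
    return sqrt(-2.0 * log(u01())) * cos(2.0 * PI * u01());
}

/* One application of (2): interpolate X at tj given X(ti)=xi, X(tk)=xk. */
static double bridge_point(double ti, double xi, double tk, double xk,
                           double tj, double sigma)
{
    double mean = (xi * (tk - tj) + xk * (tj - ti)) / (tk - ti);
    double sd   = sigma * sqrt((tk - tj) * (tj - ti) / (tk - ti));
    return mean + sd * std_normal();
}

int main(void)
{
    /* free Wiener path on [0,1]: fix X(0)=x, draw X(1), then bisect      */
    double sigma = 1.0, x0 = 0.0;
    srand(11);
    double x1   = x0 + sigma * sqrt(1.0) * std_normal();  /* end point    */
    double x05  = bridge_point(0.0, x0, 1.0, x1, 0.5, sigma);
    double x025 = bridge_point(0.0, x0, 0.5, x05, 0.25, sigma);
    printf("X(0.25)=%f  X(0.5)=%f  X(1)=%f\n", x025, x05, x1);
    return 0;
}
```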
The order in which the successive interpolation times ${t}_{j}$ are chosen is called the bridge construction order. Since all construction orders will produce a correct process, the question arises whether one construction order should be preferred over another. When the $Z$ values are drawn from a pseudorandom generator, the answer is typically no. However the bridge algorithm is frequently used with quasi-random numbers, and in this case the bridge construction order can be important.
#### 2.6.3Bridge Construction Order and Quasi-random Sequences
Consider the one-dimensional case of a free Wiener process where $d=C=1$. The Brownian bridge is frequently combined with low-discrepancy (quasi-random) sequences to perform quasi-Monte Carlo integration. Quasi-random points ${Z}^{1},{Z}^{2},{Z}^{3},\dots$ are generated from the standard Normal distribution, where each quasi-random point ${Z}^{i}=\left({Z}_{1}^{i},{Z}_{2}^{i},\cdots ,{Z}_{D}^{i}\right)$ consists of $D$ one-dimensional values. The process $X$ starts at ${X}_{{t}_{0}}=x$ which is known. There remain $N+1$ time points at which the bridge is to be computed, namely ${\left({X}_{{t}_{i}}\right)}_{1\le i\le N}$ and ${X}_{T}$ (recall we are considering a free Wiener process). In this case $D$ is set equal to $N+1$, so that $N+1$ dimensional quasi-random points are generated. A single quasi-random point is used to construct one Wiener sample path.
The question is how to use the dimension values of each $N+1$ dimensional quasi-random point. Often the ‘lower’ dimension values (${Z}_{1}^{i},{Z}_{2}^{i}$, etc.) display better uniformity properties than the ‘higher’ dimension values (${Z}_{N+1}^{i},{Z}_{N}^{i}$, etc.) so that the ‘lower’ dimension values should be used to construct the most important sections of the sample path. For example, consider a model which is particularly sensitive to the behaviour of the underlying process at time $3$. When constructing the sample paths, one would, therefore, ensure that time $3$ was one of the interpolation points of the bridge, and that a ‘lower’ dimension value was used in (2) to construct the corresponding bridge point ${X}_{3}$. Indeed, one would most likely also ensure that ${X}_{3}$ was one of the first bridge points to be constructed: ‘lower’ dimension values would be used to construct both the left and right bridge points used in (2) to interpolate ${X}_{3}$, so that the distribution of ${X}_{3}$ benefits as much as possible from the uniformity properties of the quasi-random sequence. For further discussions in this regard we refer to Glasserman (2004). These remarks extend readily to the case of a non-free Wiener process.
#### 2.6.4Brownian Bridge and Stochastic Differential Equations
The Brownian bridge algorithm, especially when combined with quasi-random variates, is frequently used to obtain numerical solutions to stochastic differential equations (SDEs) driven by (free or non-free) Wiener processes. The quasi-random variates produce a family of Wiener sample paths which cover the space of all Wiener sample paths fairly evenly. This is analogous to the way in which a two-dimensional quasi-random sequence covers the unit square ${\left[0,1\right]}^{2}$ evenly. When solving SDEs one is typically interested in the increments of the driving Wiener process between two time points, rather than the value of the process at a particular time point. Section 3.3 contains details on which functions can be used to obtain such Wiener increments.
### 2.7Random Fields
A random field is a stochastic process, taking values in a Euclidean space, and defined over a parameter space of dimensionality at least one. They are often used to simulate some physical space-dependent parameter, such as the permeability of rock, which cannot be measured at every point in the space. The simulated values can then be used to model other dependent quantities, for example, underground flow of water, often through the use of partial differential equations (PDEs).
A $d$-dimensional random field $Z\left(\mathbf{x}\right)$ is a function which is random at every point $\left(\mathbf{x}\in D\right)$ for some domain $D\subset {ℝ}^{d}$, so $Z\left(\mathbf{x}\right)$ is a random variable for each $\mathbf{x}$. The random field has a mean function $\mu \left(\mathbf{x}\right)=𝔼\left[Z\left(\mathbf{x}\right)\right]$ and a symmetric positive semidefinite covariance function $C\left(\mathbf{x},\mathbf{y}\right)=𝔼\left[\left(Z\left(\mathbf{x}\right)-\mu \left(\mathbf{x}\right)\right)\left(Z\left(\mathbf{y}\right)-\mu \left(\mathbf{y}\right)\right)\right]$.
A random field, $Z\left(\mathbf{x}\right)$, is a Gaussian random field if, for any choice of $n\in ℕ$ and ${\mathbf{x}}_{1},\dots ,{\mathbf{x}}_{n}\in {ℝ}^{d}$, the random vector ${\left[Z\left({\mathbf{x}}_{1}\right),\dots ,Z\left({\mathbf{x}}_{n}\right)\right]}^{\mathrm{T}}$ follows a multivariate Gaussian distribution.
A Gaussian random field $Z\left(\mathbf{x}\right)$ is stationary if $\mu \left(\mathbf{x}\right)$ is constant for all $\mathbf{x}\in {ℝ}^{d}$ and $C\left(\mathbf{x},\mathbf{y}\right)=C\left(\mathbf{x}+\mathbf{a},\mathbf{y}+\mathbf{a}\right)$ for all $\mathbf{x},\mathbf{y},\mathbf{a}\in {ℝ}^{d}$ and hence we can express the covariance function $C\left(\mathbf{x},\mathbf{y}\right)$ as a function $\gamma$ of one variable: $C\left(\mathbf{x},\mathbf{y}\right)=\gamma \left(\mathbf{x}-\mathbf{y}\right)$. $\gamma$ is known as a variogram (or more correctly, a semivariogram) and includes the multiplicative factor ${\sigma }^{2}$ representing the variance such that $\gamma \left(0\right)={\sigma }^{2}$. There are a number of commonly used variograms, including:
Symmetric stable variogram
$\gamma(x) = \sigma^2 \exp\left(-\left(x'\right)^{\nu}\right) .$
Cauchy variogram
$\gamma(x) = \sigma^2 \left(1 + \left(x'\right)^2\right)^{-\nu} .$
Differential variogram with compact support
$\gamma(x) = \begin{cases} \sigma^2 \left(1 + 8x' + 25\left(x'\right)^2 + 32\left(x'\right)^3\right)\left(1 - x'\right)^8, & x' < 1, \\ 0, & x' \ge 1. \end{cases}$
Exponential variogram
$\gamma(x) = \sigma^2 \exp\left(-x'\right) .$
Gaussian variogram
$\gamma(x) = \sigma^2 \exp\left(-\left(x'\right)^2\right) .$
Nugget variogram
$\gamma(x) = \begin{cases} \sigma^2, & x = 0, \\ 0, & x \ne 0. \end{cases}$
Spherical variogram
$\gamma(x) = \begin{cases} \sigma^2 \left(1 - 1.5x' + 0.5\left(x'\right)^3\right), & x' < 1, \\ 0, & x' \ge 1. \end{cases}$
Bessel variogram
$\gamma(x) = \frac{\sigma^2 \, 2^{\nu} \, \Gamma(\nu+1) \, J_{\nu}\left(x'\right)}{\left(x'\right)^{\nu}} .$
Hole effect variogram
$\gamma(x) = \frac{\sigma^2 \sin\left(x'\right)}{x'} .$
Whittle–Matérn variogram
$\gamma(x) = \frac{\sigma^2 \, 2^{1-\nu} \left(x'\right)^{\nu} K_{\nu}\left(x'\right)}{\Gamma(\nu)} .$
Continuously parameterised variogram with compact support
$\gamma(x) = \begin{cases} \frac{\sigma^2 \, 2^{1-\nu} \left(x'\right)^{\nu} K_{\nu}\left(x'\right)}{\Gamma(\nu)} \left(1 + 8x'' + 25\left(x''\right)^2 + 32\left(x''\right)^3\right)\left(1 - x''\right)^8, & x'' < 1, \\ 0, & x'' \ge 1. \end{cases}$
Generalized hyperbolic distribution variogram
$\gamma(x) = \frac{\sigma^2 \left(\delta^2 + \left(x'\right)^2\right)^{\frac{\lambda}{2}}}{\delta^{\lambda} K_{\lambda}\left(\kappa\delta\right)} \, K_{\lambda}\left(\kappa\left(\delta^2 + \left(x'\right)^2\right)^{\frac{1}{2}}\right) .$
Cosine variogram
$\gamma(x) = \sigma^2 \cos\left(x'\right) .$
Here ${x}^{\prime }$ denotes a scaled norm of $x$.
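For illustration, a few of the preset variograms above translate directly into simple functions of the scaled norm ${x}^{\prime }$; the helper names below are hypothetical and the scaling of $x$ to ${x}^{\prime }$ is assumed to have been performed already.
```c
/* Illustrative evaluation of three of the variograms listed above,
   as functions of the scaled norm xprime = x'.                          */
#include <math.h>

double variogram_exponential(double sigma2, double xprime)
{
    return sigma2 * exp(-xprime);
}

double variogram_gaussian(double sigma2, double xprime)
{
    return sigma2 * exp(-xprime * xprime);
}

double variogram_spherical(double sigma2, double xprime)
{
    if (xprime >= 1.0) return 0.0;
    return sigma2 * (1.0 - 1.5 * xprime + 0.5 * xprime * xprime * xprime);
}
```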
### 2.8Sampling
The term sampling can have a number of different meanings. Here we are using it to mean randomly selecting one or more observations or records from a particular dataset. Sampling can be performed in one of two ways:
• With replacement: where each observation in the original dataset can appear multiple times in the sample. The sample can, therefore, be larger than the original dataset.
• Without replacement: where each observation in the original dataset can appear at most once in the sample. The sample is, therefore, no larger than the original dataset.
Each of these sampling methods can be further divided into two categories:
• With equal weights: where each observation in the original dataset has the same probability of appearing in the sample as every other observation.
• With unequal weights: where the probability of an observation from the original dataset appearing in the sample is proportional to the weight assigned to that observation.
The need to sample from a dataset appears in many areas. For example, it forms the basis for: bootstrapping (sampling with replacement, usually using equal weights); cross-validation (sampling without replacement, using equal weights); importance sampling (sampling with replacement, using unequal weights); randomization of experimental units in designed experiments or reducing the size of large databases (sampling with replacement with either equal or unequal weights).
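A minimal sketch of sampling without replacement with equal weights, via a partial Fisher–Yates shuffle of the observation indices, is given below; the RNG helper is a crude stand-in and the sample is returned as the first $m$ entries of the index array.
```c
/* Illustrative equal-weight sampling without replacement.               */
#include <stdio.h>
#include <stdlib.h>

static int rand_below(int n)          /* crude uniform integer in [0, n)  */
{
    return (int)((double)rand() / ((double)RAND_MAX + 1.0) * n);
}

/* After the call, idx[0..m-1] holds a sample of size m drawn without
   replacement from the indices 0,...,n-1.                               */
static void sample_without_replacement(int *idx, int n, int m)
{
    for (int i = 0; i < n; i++) idx[i] = i;
    for (int i = 0; i < m; i++) {               /* partial shuffle        */
        int j = i + rand_below(n - i);          /* j uniform in [i, n)    */
        int tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
    }
}

int main(void)
{
    int idx[10];
    srand(3);
    sample_without_replacement(idx, 10, 4);
    for (int i = 0; i < 4; i++) printf("%d ", idx[i]);
    printf("\n");
    return 0;
}
```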
Rather than drawing a sample from the whole dataset it is sometimes desirable to take samples from different strata or subpopulations within that dataset, referred to as stratified sampling. Within each stratum one or more of the above sampling methods may be adopted.
### 2.9Sampling Based Validation
Let $\left({Y}_{o},{X}_{o}\right)$ denote a dataset of observed values from a known population, where ${Y}_{o}$ is a matrix of one or more dependent or response variables and ${X}_{o}$ is a matrix of one or more independent variables or covariates. Let $M$ denote a model described in terms of $\beta$, a vector of one or more unknown parameters. The purpose of model $M$ is to describe the behaviour of the dependent variables in terms of the independent variables. In order to do this the parameters must first be estimated, and then how well the model fits, that is, how well it describes the dependent variables, must be assessed.
An example of such a model would be a simple linear regression as described in Section 2.3 in the G02 Chapter Introduction. The simple linear regression has two parameters, an intercept ${\beta }_{0}$ and a slope ${\beta }_{1}$, and the observed dataset consists of the dependent variable $y$ and the single independent variable $x$. The parameter estimates are usually obtained via least squares.
Given a set of parameter estimates and a matrix of independent variables, one way of assessing how well a model fits is to use the model to predict the values of the dependent variable and compare these predictions to the observed values. Ideally two datasets will be involved: a training dataset, $\left({Y}_{t},{X}_{t}\right)$, used to estimate the model parameters, and a validation dataset, $\left({Y}_{v},{X}_{v}\right)$, used for the prediction and comparison. These two datasets should be drawn independently from the same population. However, in practice, this is often not possible, either because a second dataset cannot be drawn from the same population or because the values of the dependent variables are unknowable (for example the dataset in question is a time series and the event of interest has not yet happened). Rather than use the same dataset as both the training and validation dataset, which leads to overfitting and hence an overestimation of how well the model fits, a sampling based validation method can be used.
In $K$-fold cross-validation the original dataset is randomly divided into $K$ equally sized folds (or groups). The model fitting and assessment process is performed using a validation dataset consisting of those observations in the $k$th group and a training dataset consisting of all observations not in the $k$th group. This is repeated $K$ times, with $k=1,2,\dots ,K$, and the results combined. Repeated random sub-sampling validation is similar, but rather than systematically dividing the original dataset into a training and validation dataset, whether an observation resides in a given dataset is chosen randomly each time the model fitting and assessment process is repeated.
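A minimal sketch of randomly assigning $n$ observations to $K$ folds for $K$-fold cross-validation is shown below; this is only an illustration of the idea (it is not the in-place permutation performed by g05pvc) and it makes the fold sizes differ by at most one.
```c
/* Illustrative random assignment of n observations to K folds.          */
#include <stdio.h>
#include <stdlib.h>

static int rand_below(int n)
{
    return (int)((double)rand() / ((double)RAND_MAX + 1.0) * n);
}

/* fold[i] in {0,...,K-1} gives the fold of observation i; iteration k of
   the cross-validation uses fold k as validation data and the rest as
   training data.                                                         */
static void kfold_assign(int *fold, int n, int K)
{
    int *perm = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) perm[i] = i;
    for (int i = n - 1; i > 0; i--) {           /* Fisher-Yates shuffle   */
        int j = rand_below(i + 1);
        int tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
    }
    for (int i = 0; i < n; i++) fold[perm[i]] = i % K;
    free(perm);
}

int main(void)
{
    int fold[12];
    srand(5);
    kfold_assign(fold, 12, 3);
    for (int i = 0; i < 12; i++) printf("obs %2d -> fold %d\n", i, fold[i]);
    return 0;
}
```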
### 2.10Other Random Structures
In addition to random numbers from various distributions, random compound structures can be generated. These include random time series and random matrices.
### 2.11Multiple Streams of Pseudorandom Numbers
It is often advantageous to be able to generate variates from multiple, independent, streams (or sequences) of random variates, for example when running a simulation in parallel on several processors. There are four ways of generating multiple streams using the functions available in this chapter:
1. (i)using different initial values (seeds);
2. (ii)using different generators;
3. (iii)skip ahead (also called block-splitting);
4. (iv)leap-frogging.
#### 2.11.1Multiple Streams via Different Initial Values (Seeds)
A different sequence of variates can be generated from the same base generator by initializing the generator using a different set of seeds. The statistical properties of the base generators are only guaranteed within, not between sequences. For example, two sequences generated from two different starting points may overlap if these initial values are not far enough apart. The potential for overlapping sequences is reduced if the period of the generator being used is large. In general, of the four methods for creating multiple streams described here, this is the least satisfactory.
The one exception to this is the Wichmann–Hill II generator. The Wichmann and Hill (2006) paper describes a method of generating blocks of variates, with lengths up to ${2}^{90}$, by fixing the first three seed values of the generator (${w}_{0}$, ${x}_{0}$ and ${y}_{0}$), and setting ${z}_{0}$ to a different value for each stream required. This is similar to the skip-ahead method described in Section 2.11.3, in that the full sequence of the Wichmann–Hill II generator is split into a number of different blocks, in this case with a fixed length of ${2}^{90}$, but without the computationally intensive initialization usually required for the skip-ahead method.
#### 2.11.2Multiple Streams via Different Generators
Independent sequences of variates can be generated using a different base generator for each sequence. For example, sequence $1$ can be generated using the NAG basic generator, sequence $2$ using the Mersenne Twister, sequence $3$ using the ACORN generator and sequence $4$ using the L'Ecuyer generator. The Wichmann–Hill I generator implemented in this chapter is, in fact, a series of 273 independent generators. The particular sub-generator to use is selected using the subid variable. Therefore, in total, 278 independent streams can be generated with each using a different generator (273 Wichmann–Hill I generators, and $5$ additional base generators).
#### 2.11.3Multiple Streams via Skip-ahead
Independent sequences of variates can be generated from a single base generator through the use of block-splitting, or skip-ahead. This method consists of splitting the sequence into $k$ non-overlapping blocks, each of length $n$, where $n$ is no smaller than the maximum number of variates required from any of the sequences. For example,
$\underbrace{x_1, x_2, \dots, x_n}_{\text{block 1}}, \; \underbrace{x_{n+1}, x_{n+2}, \dots, x_{2n}}_{\text{block 2}}, \; \underbrace{x_{2n+1}, x_{2n+2}, \dots, x_{3n}}_{\text{block 3}}, \; \text{etc.}$
where ${x}_{1},{x}_{2},\dots$ is the sequence produced by the generator of interest. Each of the $k$ blocks provide an independent sequence.
The skip-ahead algorithm, therefore, requires the sequence to be advanced a large number of places, as to generate values from say, block $b$, you must skip over the $\left(b-1\right)n$ values in the first $b-1$ blocks. Owing to their form this can be done efficiently for linear congruential generators and multiple congruential generators. A skip-ahead algorithm is also provided for the Mersenne Twister generator.
Although skip-ahead requires some additional computation at the initialization stage (to ‘fast forward’ the sequence) no additional computation is required at the generation stage.
This method of producing multiple streams can also be used for the Sobol and Niederreiter quasi-random number generator via the argument iskip in g05ylc.
#### 2.11.4Multiple Streams via Leap-frog
Independent sequences of variates can also be generated from a single base generator through the use of leap-frogging. This method involves splitting the sequence from a single generator into $k$ disjoint subsequences. For example:
$\begin{array}{ll} \text{Subsequence 1:} & x_1, x_{k+1}, x_{2k+1}, \dots \\ \text{Subsequence 2:} & x_2, x_{k+2}, x_{2k+2}, \dots \\ & \vdots \\ \text{Subsequence } k\text{:} & x_k, x_{2k}, x_{3k}, \dots \end{array}$
where ${x}_{1},{x}_{2},\dots$ is the sequence produced by the generator of interest. Each of the $k$ subsequences then provides an independent stream of variates.
The leap-frog algorithm, therefore, requires the generation of every $k$th variate from the base generator. Owing to their form this can be done efficiently for linear congruential generators and multiple congruential generators. A leap-frog algorithm is provided for the NAG Basic generator, both the Wichmann–Hill I and Wichmann–Hill II generators and L'Ecuyer generator.
It is known that, dependent on the number of streams required, leap-frogging can lead to sequences with poor statistical properties, especially when applied to linear congruential generators. In addition, leap-frogging can increase the time required to generate each variate. Therefore, leap-frogging should be avoided unless absolutely necessary.
#### 2.11.5Skip-ahead and Leap-frog for a Linear Congruential Generator (LCG): An Example
As an illustrative example, a brief description of the algebra behind the implementation of the leap-frog and skip-ahead algorithms for a linear congruential generator is given. A linear congruential generator has the form ${x}_{i+1}={a}_{1}{x}_{i}\bmod {m}_{1}$. The recursive nature of a linear congruential generator means that ${x}_{i+v}={a}_{1}^{v}{x}_{i}\bmod {m}_{1}$.
The sequence can, therefore, be quickly advanced $v$ places by multiplying the current state (${x}_{i}$) by ${a}_{1}^{v}\bmod {m}_{1}$, hence skipping the sequence ahead. Leap-frogging can be implemented by using ${a}_{1}^{k}$, where $k$ is the number of streams required, in place of ${a}_{1}$ in the standard linear congruential generator recursive formula, in order to advance $k$ places, rather than one, at each iteration.
In a linear congruential generator the multiplier ${a}_{1}$ is constructed so that the generator has good statistical properties in, for example, the spectral test. When using leap-frogging to construct multiple streams this multiplier is replaced with ${a}_{1}^{k}$, and there is no guarantee that this new multiplier will have suitable properties, especially as the value of $k$ depends on the number of streams required and so is likely to change depending on the application. This problem can be exacerbated by the lattice structure of linear congruential generators. Similarly, the value of ${a}_{1}$ is often chosen such that the computation can be performed efficiently. When ${a}_{1}$ is replaced by ${a}_{1}^{k}$, this is often no longer the case.
Note that, due to rounding, when using a distributional generator, a sequence generated using leap-frogging and a sequence constructed by taking every $k$th value from a set of variates generated without leap-frogging may differ slightly. These differences should only affect the least significant digit.
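The sketch below illustrates both ideas for a multiplicative linear congruential generator: the skip-ahead multiplier ${a}_{1}^{v}\bmod {m}_{1}$ is computed by square-and-multiply, and leap-frogging replaces the multiplier by ${a}_{1}^{k}\bmod {m}_{1}$. The parameters used ($a=16807$, $m={2}^{31}-1$) are those of the classic ‘minimal standard’ generator and are purely illustrative; they are not those of any NAG base generator.
```c
/* Illustrative skip-ahead and leap-frog for x_{i+1} = a*x_i mod m.      */
#include <stdio.h>
#include <stdint.h>

#define LCG_A 16807ULL
#define LCG_M 2147483647ULL                    /* 2^31 - 1                */

static uint64_t mulmod(uint64_t a, uint64_t b) { return (a * b) % LCG_M; }

/* a^v mod m by square-and-multiply: the skip-ahead multiplier.          */
static uint64_t powmod(uint64_t a, uint64_t v)
{
    uint64_t r = 1;
    while (v > 0) {
        if (v & 1) r = mulmod(r, a);
        a = mulmod(a, a);
        v >>= 1;
    }
    return r;
}

int main(void)
{
    uint64_t x = 12345;                        /* current state x_i       */
    uint64_t v = 1000000;                      /* places to skip ahead    */
    uint64_t k = 4;                            /* number of leap-frog streams */

    /* skip-ahead: advance v places at once, x_{i+v} = (a^v mod m)*x_i mod m */
    uint64_t x_skipped = mulmod(powmod(LCG_A, v), x);
    printf("state after skipping %llu places: %llu\n",
           (unsigned long long)v, (unsigned long long)x_skipped);

    /* leap-frog: each stream advances with multiplier a^k mod m; here the
       first few values of the stream starting from x are printed          */
    uint64_t a_leap = powmod(LCG_A, k);
    uint64_t y = x;
    for (int i = 0; i < 3; i++) {
        y = mulmod(a_leap, y);
        printf("leap-frog value %d: %llu\n", i, (unsigned long long)y);
    }
    return 0;
}
```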
#### 2.11.6Skip-ahead and Leap-frog for the Mersenne Twister: An Example
Skipping ahead with the Mersenne Twister generator is based on the definition of a $k×k$ (where $k=19937$) transition matrix, $A$, over the finite field ${𝔽}_{2}$ (with elements $0$ and $1$). Multiplying $A$ by the current state ${x}_{n}$, represented as a vector of bits, produces the next state vector ${x}_{n+1}$:
$x_{n+1} = A x_n .$
Thus, skipping ahead $v$ places in a sequence is equivalent to multiplying by ${A}^{v}$:
$x_{n+v} = A^v x_n .$
Since calculating ${A}^{v}$ by a standard square and multiply algorithm is $\mathit{O}\left({k}^{3}\mathrm{log}\left(v\right)\right)$ and requires over 47MB of memory (see Haramoto et al. (2008)), an indirect calculation is performed which relies on a property of the characteristic polynomial $p\left(z\right)$ of $A$, namely that $p\left(A\right)=0$. We then define
$g(z) = z^v \bmod p(z) = a_{k-1} z^{k-1} + \cdots + a_1 z + a_0$
and observe that
$g(z) = z^v + q(z) \, p(z)$
for a polynomial $q\left(z\right)$. Since $p\left(A\right)=0$, we have that $g\left(A\right)={A}^{v}$ and
$A^v x_n = \left( a_{k-1} A^{k-1} + \cdots + a_1 A + a_0 I \right) x_n .$
This polynomial evaluation can be performed using Horner's method:
$A^v x_n = A \left( \dots A \left( A \left( A a_{k-1} x_n + a_{k-2} x_n \right) + a_{k-3} x_n \right) + \cdots + a_1 x_n \right) + a_0 x_n ,$
which reduces the problem to advancing the generator $k-1$ places from state ${x}_{n}$ and adding (where addition is as defined over ${𝔽}_{2}$) the intermediate states for which ${a}_{i}$ is nonzero.
There are, therefore, two stages to skipping the Mersenne Twister ahead $v$ places:
1. (i)Calculate the coefficients of the polynomial $g\left(z\right)={z}^{v}\bmod p\left(z\right)$;
2. (ii)advance the sequence $k-1$ places from the starting state and add the intermediate states that correspond to nonzero coefficients in the polynomial calculated in the first step.
The resulting state is that for position $v$ in the sequence.
The cost of calculating the polynomial is $\mathit{O}\left({k}^{2}\mathrm{log}\left(v\right)\right)$ and the cost of applying it to the state is constant. Skip-ahead functionality is typically used in order to generate $n$ independent pseudorandom number streams (e.g., for separate threads of computation). There are two options for generating the $n$ states:
1. (i)On the master thread calculate the polynomial for a skip ahead distance of $v$ and apply this polynomial to state $n$ times, after each iteration $j$ saving the current state for later usage by thread $j$.
2. (ii)Have each thread $j$ independently and in parallel with other threads calculate the polynomial for a distance of $\left(j+1\right)v$ and apply to the original state.
Since $\mathrm{log}\left(nv\right)=\mathrm{log}\left(n\right)+\mathrm{log}\left(v\right)\approx \mathrm{log}\left(v\right)$ for large $v$, the cost of generating the polynomial for a skip ahead distance of $nv$ (i.e., the calculation performed by thread $n-1$ in option (ii) above) is approximately the same as generating that for a distance of $v$ (i.e., the calculation performed by thread $0$). However, only one application to the state need be made per thread, and if $n$ is sufficiently large the cost of applying the polynomial to the state becomes the dominant cost in option (i), in which case it is desirable to use option (ii). Tests have shown that as a guideline it becomes worthwhile to switch from option (i) to option (ii) for approximately $n>30$.
Leap-frog calculations with the Mersenne Twister are performed by computing the sequence fully up to the required size and discarding the redundant numbers for a given stream.
## 3Recommendations on Choice and Use of Available Functions
### 3.1Pseudorandom Numbers
Before generating any pseudorandom variates the base generator being used must be initialized. Once initialized, a distributional generator can be called to obtain the variates required. No interfaces have been supplied for direct access to the base generators. If a sequence of random variates from a uniform distribution on the open interval $\left(0,1\right)$ is required, then the uniform distribution function (g05sac) should be called.
#### 3.1.1Initialization
Before generating any variates the base generator must be initialized. Two utility functions are provided for this, g05kfc and g05kgc, both of which allow any of the base generators to be chosen.
g05kfc selects and initializes a base generator to a repeatable (when executed serially) state: two calls of g05kfc with the same argument-values will result in the same subsequent sequences of random numbers (when both generated serially).
g05kgc selects and initializes a base generator to a non-repeatable state in such a way that different calls of g05kgc, either in the same run or different runs of the program, will almost certainly result in different subsequent sequences of random numbers.
No utilities for saving, retrieving or copying the current state of a generator have been provided. All of the information on the current state of a generator (or stream, if multiple streams are being used) is stored in the integer array state and as such this array can be treated as any other integer array, allowing for easy copying, restoring, etc.
#### 3.1.2Repeated initialization
As mentioned in Section 2.11.1, it is important to note that the statistical properties of pseudorandom numbers are only guaranteed within sequences and not between sequences produced by the same generator. Repeated initialization will thus render the numbers obtained less rather than more independent. In a simple case there should be only one call to g05kfc or g05kgc and this call should be before any call to an actual generation function.
#### 3.1.3Choice of Base Generator
If a single sequence is required then it is recommended that the Mersenne Twister is used as the base generator (${\mathbf{genid}}=\mathrm{Nag_MersenneTwister}$). This generator is fast, has an extremely long period and has been shown to perform well on various test suites, see Matsumoto and Nishimura (1998), L'Ecuyer and Simard (2002) and Wichmann and Hill (2006) for example.
When choosing a base generator, the period of the chosen generator should be borne in mind. A good rule of thumb is never to use more numbers than the square root of the period in any one experiment as the statistical properties are impaired. For closely related reasons, breaking numbers down into their bit patterns and using individual bits may also cause trouble.
#### 3.1.4Choice of Method for Generating Multiple Streams
If the Wichmann–Hill II base generator is being used, and a period of ${2}^{90}$ is sufficient, then the method described in Section 2.11.1 can be used. If a different generator is used, or a longer period length is required then generating multiple streams by altering the initial values should be avoided.
Using a different generator works well if no more than 278 streams are required.
Of the remaining two methods, both skip-ahead and leap-frogging use the sequence from a single generator, both guarantee that the different sequences will not overlap and both can be scaled to an arbitrary number of streams. Leap-frogging requires no a-priori knowledge about the number of variates being generated, whereas skip-ahead requires you to know (approximately) the maximum number of variates required from each stream. Skip-ahead requires no a-priori information on the number of streams required. In contrast, leap-frogging requires you to know the maximum number of streams required, prior to generating the first value. Of these two, if possible, skip-ahead should be used in preference to leap-frogging. Both methods require additional computation compared with generating a single sequence, but for skip-ahead this computation occurs only at initialization. For leap-frogging additional computation is required both at initialization and during the generation of the variates. In addition, as mentioned in Section 2.11.4, using leap-frogging can, in some instances, change the statistical properties of the sequences being generated.
Leap-frogging is performed by calling g05khc after the initialization function (g05kfc or g05kgc). For skip-ahead, either g05kjc or g05kkc can be called. Of these, g05kkc restricts the amount being skipped to a power of $2$, but allows for a large ‘skip’ to be performed.
#### 3.1.5Copulas
After calling one of the copula functions the inverse cumulative distribution function (CDF) can be applied to convert the uniform marginal distribution into the required form. Scalar and vector functions for evaluating the CDF, for a range of distributions, are supplied in Chapter G01. It should be noted that these functions are often described as computing the ‘deviates’ of the distribution.
When using the inverse CDF functions from Chapter G01 it should be noted that some are limited in the number of significant figures they return. This may affect the statistical properties of the resulting sequence of variates. Section 7 of the individual function documentation will give a discussion of the accuracy of the particular algorithm being used and any available alternatives.
### 3.2Quasi-random Numbers
Prior to generating any quasi-random variates the generator being used must be initialized via g05ylc or g05ync. Of these, g05ylc can be used to initialize a standard Sobol, Faure or Niederreiter sequence and g05ync can be used to initialize a scrambled Sobol or Niederreiter sequence.
Owing to the random nature of the scrambling, before calling the initialization function g05ync one of the pseudorandom initialization functions, g05kfc or g05kgc, must be called.
Once a quasi-random generator has been initialized, using either g05ylc or g05ync, one of three generation functions can be called to generate uniformly distributed sequences (g05ymc), Normally distributed sequences (g05yjc) or sequences with a log-normal distribution (g05ykc). For example, for a repeatable sequence of scrambled quasi-random variates from the Normal distribution, g05kfc must be called first (to initialize a pseudorandom generator), followed by g05ync (to initialize a scrambled quasi-random generator) and then g05yjc can be called to generate the sequence from the required distribution.
See the last paragraph of Section 3.1.5 on how sequences from other distributions can be obtained using the inverse CDF.
### 3.3Brownian Bridge
g05xbc may be used to generate sample paths from a (free or non-free) Wiener process using the Brownian bridge algorithm. Prior to calling g05xbc, the generator must be initialized by a call to g05xac. g05xac requires you to specify a bridge construction order. The function g05xec can be used to convert a set of input times into one of several common bridge construction orders, which can then be used in the initialization call to g05xac.
g05xdc may be used to generate the scaled increments of the sample paths of a (free or non-free) Wiener process. Prior to calling g05xdc, the generator must be initialized by a call to g05xcc. Note that g05xdc generates these scaled increments directly; it is not necessary to call g05xbc before calling g05xdc. As before, g05xec can be used to convert a set of input times into a bridge construction order which can be passed to g05xcc.
### 3.4Random Fields
Functions for simulating from either a one-dimensional or a two-dimensional stationary Gaussian random field are provided. These functions use the circulant embedding method of Dietrich and Newsam (1997) to efficiently generate from the required field. In both cases a setup function is called, which defines the domain and variogram to use, followed by the generation function. A number of preset variograms are supplied or a user-defined function can be used.
• One-dimensional random field:
• g05znc setup function, using a preset variogram.
• g05zmc setup function, using a user-defined variogram.
• g05zpc generation function.
• Two-dimensional random field:
• g05zqc setup function, using a preset variogram.
• g05zrc setup function, using a user-defined variogram.
• g05zsc generation function.
In addition to generating a random field, it is possible to use the circulant embedding method to generate realizations of fractional Brownian motion; this functionality is provided in g05ztc.
Before calling g05zpc, g05zrc or g05ztc, one of the initialization functions, g05kfc or g05kgc, must be called.
### 3.5Sampling
Each of the four sampling methods described in Section 2.8 can be performed using the following functions:
• g05tlc Sampling with replacement, equal weights.
• g05tdc Sampling with replacement, unequal weights.
• g05ndc Sampling without replacement, equal weights.
• g05nec Sampling without replacement, unequal weights.
In addition to these functions for directly sampling from a dataset two utility functions that perform an in-place permutation to give datasets suitable for use in validation are provided. g05pvc generates training and validation datasets suitable for $K$-fold cross-validation and g05pwc generates training and validation datasets suitable for random sub-sampling validation. To perform stratified sampling the dataset should first be ordered by stratum using a sorting function from Chapter M01 and then one of the above sampling functions can be applied to each stratum.
## 4Functionality Index
Brownian bridge,
circulant embedding generator,
generate fractional Brownian motion g05ztc
increments generator,
generate Wiener increments g05xdc
initialize generator g05xcc
path generator,
create bridge construction order g05xec
generate a free or non-free (pinned) Wiener process for a given set of time steps g05xbc
initialize generator g05xac
Generating samples, matrices and tables,
permutation of real matrix, vector, vector triplet
$K-$fold cross-validation g05pvc
random sub-sampling validation g05pwc
random correlation matrix g05pyc
random orthogonal matrix g05pxc
random permutation of an integer vector g05ncc
random sample from an integer vector,
unequal weights, without replacement g05nec
unweighted, without replacement g05ndc
random table g05pzc
Generation of time series,
asymmetric GARCH Type II g05pec
asymmetric GJR GARCH g05pfc
EGARCH g05pgc
exponential smoothing g05pmc
type I AGARCH g05pdc
univariate ARMA g05phc
vector ARMA g05pjc
Pseudorandom numbers,
array of variates from multivariate distributions,
Dirichlet distribution g05sec
multinomial distribution g05tgc
Normal distribution g05rzc
Student's $t$ distribution g05ryc
copulas,
Clayton/Cook–Johnson copula (bivariate) g05rec
Clayton/Cook–Johnson copula (multivariate) g05rhc
Frank copula (bivariate) g05rfc
Frank copula (multivariate) g05rjc
Gaussian copula g05rdc
Gumbel–Hougaard copula g05rkc
Plackett copula g05rgc
Student's $t$ copula g05rcc
initialize generator,
multiple streams,
leap-frog g05khc
nonrepeatable sequence g05kgc
repeatable sequence g05kfc
vector of variates from discrete univariate distributions,
binomial distribution g05tac
geometric distribution g05tcc
hypergeometric distribution g05tec
logarithmic distribution g05tfc
logical value Nag_TRUE or Nag_FALSE g05tbc
negative binomial distribution g05thc
Poisson distribution g05tjc
uniform distribution g05tlc
user-supplied distribution g05tdc
variate array from discrete distributions with array of parameters,
Poisson distribution with varying mean g05tkc
vectors of variates from continuous univariate distributions,
beta distribution g05sbc
Cauchy distribution g05scc
exponential mix distribution g05sgc
$F$-distribution g05shc
gamma distribution g05sjc
logistic distribution g05slc
log-normal distribution g05smc
negative exponential distribution g05sfc
Normal distribution g05skc
real number from the continuous uniform distribution g05sac
Student's $t$-distribution g05snc
triangular distribution g05spc
uniform distribution g05sqc
von Mises distribution g05src
Weibull distribution g05ssc
${\chi }^{2}$ distribution g05sdc
Quasi-random numbers,
array of variates from univariate distributions,
log-normal distribution g05ykc
Normal distribution g05yjc
uniform distribution g05ymc
initialize generator,
scrambled Sobol or Niederreiter g05ync
Sobol, Niederreiter or Faure g05ylc
Random fields,
one-dimensional,
generation g05zpc
initialize generator,
preset variogram g05znc
user-defined variogram g05zmc
two-dimensional,
generation g05zsc
initialize generator,
preset variogram g05zrc
user-defined variogram g05zqc
## 5Auxiliary Functions Associated with Library Function Arguments
None.
## 6 Withdrawn or Deprecated Functions
The following lists all those functions that have been withdrawn since Mark 23 of the Library or are in the Library, but deprecated.
Function Status Replacement Function(s)
g05cac Withdrawn at Mark 24 g05sac
g05cbc Withdrawn at Mark 24 g05kfc
g05ccc Withdrawn at Mark 24 g05kgc
g05cfc Withdrawn at Mark 24 No longer required.
g05cgc Withdrawn at Mark 24 No longer required.
g05dac Withdrawn at Mark 24 g05sqc
g05dbc Withdrawn at Mark 24 g05sfc
g05ddc Withdrawn at Mark 24 g05skc
g05dyc Withdrawn at Mark 24 g05tlc
g05eac Withdrawn at Mark 24 g05rzc
g05ecc Withdrawn at Mark 24 g05tjc
g05edc Withdrawn at Mark 24 g05tac
g05ehc Withdrawn at Mark 24 g05ncc
g05ejc Withdrawn at Mark 24 g05ndc
g05exc Withdrawn at Mark 24 g05tdc
g05eyc Withdrawn at Mark 24 g05tdc
g05ezc Withdrawn at Mark 24 g05rzc
g05fec Withdrawn at Mark 24 g05sbc
g05ffc Withdrawn at Mark 24 g05sjc
g05hac Withdrawn at Mark 24 g05phc
g05hkc Withdrawn at Mark 24 g05pdc
g05hlc Withdrawn at Mark 24 g05pec
g05hmc Withdrawn at Mark 24 g05pfc
g05kac Withdrawn at Mark 24 g05sac
g05kbc Withdrawn at Mark 24 g05kfc
g05kcc Withdrawn at Mark 24 g05kgc
g05kec Withdrawn at Mark 24 g05tbc
g05lac Withdrawn at Mark 24 g05skc
g05lbc Withdrawn at Mark 24 g05snc
g05lcc Withdrawn at Mark 24 g05sdc
g05ldc Withdrawn at Mark 24 g05shc
g05lec Withdrawn at Mark 24 g05sbc
g05lfc Withdrawn at Mark 24 g05sjc
g05lgc Withdrawn at Mark 24 g05sqc
g05lhc Withdrawn at Mark 24 g05spc
g05ljc Withdrawn at Mark 24 g05sfc
g05lkc Withdrawn at Mark 24 g05smc
g05llc Withdrawn at Mark 24 g05scc
g05lmc Withdrawn at Mark 24 g05ssc
g05lnc Withdrawn at Mark 24 g05slc
g05lpc Withdrawn at Mark 24 g05src
g05lqc Withdrawn at Mark 24 g05sgc
g05lxc Withdrawn at Mark 24 g05ryc
g05lyc Withdrawn at Mark 24 g05rzc
g05lzc Withdrawn at Mark 24 g05rzc
g05mac Withdrawn at Mark 24 g05tlc
g05mbc Withdrawn at Mark 24 g05tcc
g05mcc Withdrawn at Mark 24 g05thc
g05mdc Withdrawn at Mark 24 g05tfc
g05mec Withdrawn at Mark 24 g05tkc
g05mjc Withdrawn at Mark 24 g05tac
g05mkc Withdrawn at Mark 24 g05tjc
g05mlc Withdrawn at Mark 24 g05tec
g05mrc Withdrawn at Mark 24 g05tgc
g05mzc Withdrawn at Mark 24 g05tdc
g05nac Withdrawn at Mark 24 g05ncc
g05nbc Withdrawn at Mark 24 g05ndc
g05pac Withdrawn at Mark 24 g05phc
g05pcc Withdrawn at Mark 24 g05pjc
g05qac Withdrawn at Mark 24 g05pxc
g05qbc Withdrawn at Mark 24 g05pyc
g05qdc Withdrawn at Mark 24 g05pzc
g05rac Withdrawn at Mark 24 g05rdc
g05rbc Withdrawn at Mark 24 g05rcc
g05yac Withdrawn at Mark 24 g05ylc and g05ymc
g05ybc Withdrawn at Mark 24 g05ylc and g05yjc
## 7References
Banks J (1998) Handbook on Simulation Wiley
Boye E (Unpublished manuscript) Copulas for finance: a reading guide and some applications Financial Econometrics Research Centre, City University Business School, London
Bratley P and Fox B L (1988) Algorithm 659: implementing Sobol's quasirandom sequence generator ACM Trans. Math. Software 14(1) 88–100
Dietrich C R and Newsam G N (1997) Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix SIAM J. Sci. Comput. 18 1088–1107
Faure H and Tezuka S (2000) Another random scrambling of digital (t,s)-sequences Monte Carlo and Quasi-Monte Carlo Methods Springer-Verlag, Berlin, Germany (eds K T Fang, F J Hickernell and H Niederreiter)
Fox B L (1986) Algorithm 647: implementation and relative efficiency of quasirandom sequence generators ACM Trans. Math. Software 12(4) 362–376
Glasserman P (2004) Monte Carlo Methods in Financial Engineering Springer
Haramoto H, Matsumoto M, Nishimura T, Panneton F and L'Ecuyer P (2008) Efficient jump ahead for F2-linear random number generators INFORMS J. on Computing 20(3) 385–390
Hong H S and Hickernell F J (2003) Algorithm 823: implementing scrambled digital sequences ACM Trans. Math. Software 29:2 95–109
Joe S and Kuo F Y (2008) Constructing Sobol sequences with better two-dimensional projections SIAM J. Sci. Comput. 30 2635–2654
Knuth D E (1981) The Art of Computer Programming (Volume 2) (2nd Edition) Addison–Wesley
L'Ecuyer P and Simard R (2002) TestU01: a software library in ANSI C for empirical testing of random number generators Departement d'Informatique et de Recherche Operationnelle, Universite de Montreal https://www.iro.umontreal.ca/~lecuyer
Maclaren N M (1989) The generation of multiple independent sequences of pseudorandom numbers Appl. Statist. 38 351–359
Matsumoto M and Nishimura T (1998) Mersenne twister: a 623-dimensionally equidistributed uniform pseudorandom number generator ACM Transactions on Modelling and Computer Simulations
Morgan B J T (1984) Elements of Simulation Chapman and Hall
Nelsen R B (1998) An Introduction to Copulas. Lecture Notes in Statistics 139 Springer
Owen A B (1995) Randomly permuted (t,m,s)-nets and (t,s)-sequences Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Lecture Notes in Statistics 106 Springer-Verlag, New York, NY 299–317 (eds H Niederreiter and P J-S Shiue)
Revuz D and Yor M (1999) Continuous Martingales and Brownian Motion Springer
Ripley B D (1987) Stochastic Simulation Wiley
Sklar A (1973) Random variables: joint distribution functions and copulas Kybernetika 9 449–460
Wichmann B A and Hill I D (2006) Generating good pseudo-random numbers Computational Statistics and Data Analysis 51 1614–1622
Wikramaratna R S (1989) ACORN - a new method for generating sequences of uniformly distributed pseudo-random numbers Journal of Computational Physics 83 16–31
Wikramaratna R S (1992) Theoretical background for the ACORN random number generator Report AEA-APS-0244 AEA Technology, Winfrith, Dorset, UK
Wikramaratna R S (2007) The additive congruential random number generator a special case of a multiple recursive generator Journal of Computational and Applied Mathematics
https://www.ademcetinkaya.com/2022/12/mrk-merck-company-inc-common-stock-new.html
Outlook: Merck & Company Inc. Common Stock (new) assigned short-term Ba1 & long-term Ba1 estimated rating.
Time series to forecast n: 24 Dec 2022 for (n+6 month)
Methodology : Inductive Learning (ML)
## Abstract
As stock data are characterized by high noise and non-stationarity, stock price prediction is regarded as a knotty problem. In this paper, we propose new two-stage ensemble models by combining empirical mode decomposition (EMD) (or variational mode decomposition (VMD)), extreme learning machine (ELM) and improved harmony search (IHS) algorithm for stock price prediction, which are respectively named EMD–ELM–IHS and VMD–ELM–IHS. (Nelson, D.M., Pereira, A.C. and De Oliveira, R.A., 2017, May. Stock market's price movement prediction with LSTM neural networks. In 2017 International joint conference on neural networks (IJCNN) (pp. 1419-1426). IEEE.) We evaluate Merck & Company Inc. Common Stock (new) prediction models with Inductive Learning (ML) and Polynomial Regression1,2,3,4 and conclude that the MRK stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Buy
## Key Points
1. Can stock prices be predicted?
2. Nash Equilibria
3. Stock Forecast Based On a Predictive Algorithm
## MRK Target Price Prediction Modeling Methodology
We consider Merck & Company Inc. Common Stock (new) Decision Process with Inductive Learning (ML) where A is the set of discrete actions of MRK stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4
F(Polynomial Regression)5,6,7= $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{1n}\\ & ⋮\\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮\\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮\\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ X R(Inductive Learning (ML)) X S(n):→ (n+6 month) $\begin{array}{l}\int {r}^{s}\mathrm{rs}\end{array}$
n:Time series to forecast
p:Price signals of MRK stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
## MRK Stock Forecast (Buy or Sell) for (n+6 month)
Sample Set: Neural Network
Stock/Index: MRK Merck & Company Inc. Common Stock (new)
Time series to forecast n: 24 Dec 2022 for (n+6 month)
According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Buy
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for Merck & Company Inc. Common Stock (new)
1. Hedge effectiveness is the extent to which changes in the fair value or the cash flows of the hedging instrument offset changes in the fair value or the cash flows of the hedged item (for example, when the hedged item is a risk component, the relevant change in fair value or cash flows of an item is the one that is attributable to the hedged risk). Hedge ineffectiveness is the extent to which the changes in the fair value or the cash flows of the hedging instrument are greater or less than those on the hedged item.
2. An entity has not retained control of a transferred asset if the transferee has the practical ability to sell the transferred asset. An entity has retained control of a transferred asset if the transferee does not have the practical ability to sell the transferred asset. A transferee has the practical ability to sell the transferred asset if it is traded in an active market because the transferee could repurchase the transferred asset in the market if it needs to return the asset to the entity. For example, a transferee may have the practical ability to sell a transferred asset if the transferred asset is subject to an option that allows the entity to repurchase it, but the transferee can readily obtain the transferred asset in the market if the option is exercised. A transferee does not have the practical ability to sell the transferred asset if the entity retains such an option and the transferee cannot readily obtain the transferred asset in the market if the entity exercises its option
3. To the extent that a transfer of a financial asset does not qualify for derecognition, the transferor's contractual rights or obligations related to the transfer are not accounted for separately as derivatives if recognising both the derivative and either the transferred asset or the liability arising from the transfer would result in recognising the same rights or obligations twice. For example, a call option retained by the transferor may prevent a transfer of financial assets from being accounted for as a sale. In that case, the call option is not separately recognised as a derivative asset.
4. When an entity separates the foreign currency basis spread from a financial instrument and excludes it from the designation of that financial instrument as the hedging instrument (see paragraph 6.2.4(b)), the application guidance in paragraphs B6.5.34–B6.5.38 applies to the foreign currency basis spread in the same manner as it is applied to the forward element of a forward contract.
*The International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, the neural network makes adjustments to the financial statements to bring them into compliance with the IFRS.
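As promised in point 1 above, here is a brief numeric illustration of hedge effectiveness. The figures are invented for illustration only and have nothing to do with MRK's actual hedging positions.

```python
# Illustration only: invented figures, not MRK data.
# Hedge effectiveness compares the change in value of the hedging instrument
# with the change in value of the hedged item over the same period.
change_in_hedged_item = 100_000         # hedged item gained 100,000 in fair value
change_in_hedging_instrument = -97_000  # hedging instrument lost 97,000

offset_ratio = abs(change_in_hedging_instrument) / abs(change_in_hedged_item)
ineffectiveness = change_in_hedged_item + change_in_hedging_instrument

print(f"Offset ratio: {offset_ratio:.1%}")          # 97.0% of the movement is offset
print(f"Hedge ineffectiveness: {ineffectiveness}")  # residual 3,000 not offset
```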
## Conclusions
Merck & Company Inc. Common Stock (new) is assigned a short-term Ba1 and long-term Ba1 estimated rating. We evaluate the prediction models Inductive Learning (ML) with Polynomial Regression [1, 2, 3, 4] and conclude that the MRK stock is predictable in the short/long term. According to price forecasts for the (n+6 month) period, the dominant strategy among the neural networks is: Buy
### MRK Merck & Company Inc. Common Stock (new) Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | Caa2 | Caa2 |
| Balance Sheet | Baa2 | Baa2 |
| Leverage Ratios | Baa2 | C |
| Cash Flow | B2 | Caa2 |
| Rates of Return and Profitability | Baa2 | Baa2 |
*Financial analysis is the process of evaluating a company's financial performance and position by a neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does a neural network examine financial reports and understand the financial state of a company?
### Prediction Confidence Score
Trust metric by Neural Network: 72 out of 100 with 754 signals.
## References
1. A. Tamar and S. Mannor. Variance adjusted actor critic algorithms. arXiv preprint arXiv:1310.3697, 2013.
2. F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs. SpringerBriefs in Intelligent Systems. Springer, 2016.
3. Hastie T, Tibshirani R, Friedman J. 2009. The Elements of Statistical Learning. Berlin: Springer.
4. Burkov A. 2019. The Hundred-Page Machine Learning Book. Quebec City, Can.: Andriy Burkov.
5. G. Shani, R. Brafman, and D. Heckerman. An MDP-based recommender system. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 453–460. Morgan Kaufmann Publishers Inc., 2002.
6. Belloni A, Chernozhukov V, Hansen C. 2014. High-dimensional methods and inference on structural and treatment effects. J. Econ. Perspect. 28:29–50.
7. K. Tumer and D. Wolpert. A survey of collectives. In K. Tumer and D. Wolpert, editors, Collectives and the Design of Complex Systems, pages 1–42. Springer, 2004.
Frequently Asked Questions
Q: What is the prediction methodology for MRK stock?
A: MRK stock prediction methodology: We evaluate the prediction models Inductive Learning (ML) and Polynomial Regression.
Q: Is MRK stock a buy or sell?
A: The dominant strategy among the neural networks is to Buy MRK Stock.
Q: Is Merck & Company Inc. Common Stock (new) stock a good investment?
A: The consensus rating for Merck & Company Inc. Common Stock (new) is Buy, with an estimated short-term rating of Ba1 and a long-term rating of Ba1.
Q: What is the consensus rating of MRK stock?
A: The consensus rating for MRK is Buy.
Q: What is the prediction period for MRK stock?
A: The prediction period for MRK is (n+6 month).
Open Access Original Research
Frequency of ST-segment elevation myocardial infarction, non-ST-segment myocardial infarction, and unstable angina: results from a Southwest Chinese Registry
1 Department of Cardiology, Affiliated Hospital of Southwest Jiaotong University, The Third People’s Hospital of Chengdu, 82 Qinglong St. Chengdu, 610015 Sichuan, P. R. China
*Correspondence: cailinwm@163.com (Lin Cai)
These authors contributed equally.
Rev. Cardiovasc. Med. 2021, 22(1), 239–245; https://doi.org/10.31083/j.rcm.2021.01.103
Submitted: 25 May 2020 | Revised: 14 October 2020 | Accepted: 21 October 2020 | Published: 30 March 2021
This is an open access article under the CC BY 4.0 license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The burden of cardiovascular disease is predicted to escalate in developing countries. The aim of this study is to assess the characteristics, management strategies and outcomes of patients with acute coronary syndrome (ACS) who were admitted to hospitals under the chest pain center mode in southwest P. R. China. Adults hospitalized with a diagnosis of ACS were enrolled in the retrospective, observational registry between January 2017 and June 2019 at 11 hospitals in Chengdu, P. R. China. The collected data included the patients' baseline characteristics, clinical management and in-hospital outcomes. After statistical analysis, the following results were obtained. (1) A total of 2857 patients with ACS were enrolled in the study, of whom 1482 had ST-segment elevation myocardial infarction (STEMI), 681 had non-STEMI (NSTEMI) and 694 had unstable angina (UA). (2) 61.3% of the ACS patients received reperfusion therapy. More patients with STEMI underwent percutaneous coronary intervention (PCI) compared with NSTEMI/UA patients (80.6% vs. 38.8%, P $<$ 0.001), while thrombolytics were administered in only 1.8% of STEMI patients. (3) The median time from symptoms to hospital was 190 min (IQR 94-468) in STEMI, 283 min (IQR 112-1084) in NSTEMI and 337 min (IQR 97-2220) in UA (P $<$ 0.001), and the door-to-balloon time for primary PCI (pPCI) was 85 min (IQR 55-121) in STEMI. (4) The in-hospital outcomes for STEMI patients included death (8.1%) and acute heart failure (22.6%), while the outcomes for those with NSTEMI and UA were better: death (4.0% and 0.9%, P $<$ 0.001) and acute heart failure (15.3% and 9.9%, P $<$ 0.001). (5) Antiplatelet drugs, lipid-lowering drugs, $\beta$-blockers and angiotensin-converting enzyme inhibitors (ACEI)/angiotensin receptor blockers (ARB) were used in about 98.3%, 95.0%, 67.7% and 54.3% of the ACS patients, respectively. Therefore, the management capacity in Chengdu has improved relative to previous studies, but important gaps still exist compared with developed countries, especially regarding the management of NSTEMI/UA patients.
Keywords
Acute coronary syndrome
Clinical characteristics
Percutaneous coronary intervention
In-hospital outcomes
1. Introduction
The prevalence and mortality of cardiovascular disease (CVD) in P. R. China are steadily increasing [1]. Acute coronary syndrome (ACS) is a common critical illness in CVD that seriously endangers people's lives and health and places a great burden on society.
The hospitals in P. R. China have been facing a series of challenges for many years, including the heavy burden of CVD, the uneven geographical distribution of healthcare resources and weak inter-hospital collaboration. Aiming to improve the treatment of chest pain patients, chest pain centers (standard and grassroots editions) have been actively constructed, together with a collaborative emergency system based on these centers, which contributes to improving compliance with the ACS guidelines, optimizing clinical pathways and improving treatment procedures [2]. With the help of qualified chest pain centers, our study represents a Chinese multicenter registry that established a typical collaborative emergency system, built on the emergency medical system, inter-hospital data transmission and the adoption of an optimal road map for chest pain treatment among the enrolled hospitals.
No representative studies have so far been presented that define the clinical profiles, management and outcomes of ACS patients in southwest P. R. China. In order to improve the current understanding of ACS management, we created an ACS registry covering 2017 to 2019 to evaluate the presentation, management and outcomes of patients with ACS in Chengdu.
2. Methods
2.1 Study participants
We established a retrospective multicenter registry that recruited patients with ACS from 11 tertiary hospitals, among which three are standard chest pain centers certified by the China Chest Pain Center headquarters, and eight are certified grassroots chest centers. An efficient and rapid two-way referral link has been established between the chest pain center and the primary hospital, and a regional collaborative treatment network has been established in Chengdu. This study included 2,857 ACS patients who were admitted to the above-mentioned hospitals from January 2017 to June 2019. The study was approved by the medical ethics committee of the Third People’s Hospital of Chengdu and its clinical trial registration number is ChiCTR1900025138.
Consecutive patients who were admitted to the hospital with ACS were enrolled at the centers, and their data at the time of admission were retrospectively recorded. The inclusion criteria were as follows: (1) a diagnosis of ST-segment elevation myocardial infarction (STEMI), non-STEMI (NSTEMI) or unstable angina (UA) [3]; (2) age 18 years or older. Patients with missing clinical data or whose treatment measures were unknown were excluded.
2.2 Data collection and definitions
This is a retrospective study that collects patient information from the hospital system. The data abstraction quality was monitored by random auditing. The data recorded in the patient's ACS registry included the demographic characteristics, medical history, cardiovascular risk factors, clinical characteristics, mode of transportation to the hospital, time of symptom onset, time of admission, first medical contact time, time of the start of the balloon (defined as the time of the first balloon catheter dilation or thrombus aspiration; for patients who reached TIMI grade 3 flow immediately after guidewire passage, the time of guidewire passage was recorded), treatments at the hospital, findings of the diagnostic tests, length of hospital stay, medical costs, discharge medications and in-hospital outcomes (death and acute heart failure). Acute heart failure was defined as Killip class II and above. Multivessel disease was defined as stenosis ($>$ 50%) in two or more major coronary arteries.
2.3 Statistical analysis
The SPSS statistical software (version 22.0) was used for the statistical analysis. Dichotomous variables were presented as numbers and percentages and compared using the $\chi^{2}$ test. Normally distributed numeric data were presented as means $\pm$ SD, and inter-group comparisons were conducted with the Student's t-test or ANOVA with post hoc analysis. Non-normally distributed numeric data were presented as medians and IQRs, and inter-group comparisons were conducted with the Mann-Whitney U test. In all analyses, P $<$ 0.05 was considered statistically significant. Logistic regression analysis was used to identify independent predictors of in-hospital mortality, based on variables with P $\leq$ 0.05 in the univariate regression analysis, with adjustment for potential confounders.
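For readers who prefer to see these analyses in code, the sketch below runs the same kinds of tests in Python with SciPy and statsmodels. It is only an illustration on synthetic data with invented column names; it is not the authors' SPSS analysis.

```python
# Illustrative sketch only: synthetic data and invented column names, not the
# authors' SPSS analysis. It shows the kinds of tests described above.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": rng.choice(["STEMI", "NSTEMI", "UA"], size=300),
    "male": rng.integers(0, 2, size=300),
    "age": rng.normal(67, 13, size=300),
    "symptom_to_door_min": rng.lognormal(5.5, 0.8, size=300),
    "died": rng.integers(0, 2, size=300),
})

# Dichotomous variable across groups: chi-square test on a contingency table.
chi2, p_chi2, dof, _ = stats.chi2_contingency(pd.crosstab(df["group"], df["male"]))

# Normally distributed variable across three groups: one-way ANOVA.
f_stat, p_anova = stats.f_oneway(*[g["age"] for _, g in df.groupby("group")])

# Non-normally distributed variable, two groups: Mann-Whitney U test.
stemi = df.loc[df["group"] == "STEMI", "symptom_to_door_min"]
nstemi = df.loc[df["group"] == "NSTEMI", "symptom_to_door_min"]
u_stat, p_mwu = stats.mannwhitneyu(stemi, nstemi)

# Logistic regression for independent predictors of in-hospital death.
model = smf.logit("died ~ age + C(group)", data=df).fit(disp=False)

print(p_chi2, p_anova, p_mwu)
print(model.summary())
```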
3. Results
A total of 2857 patients were enrolled in the study and included in the analysis, among which 1482 patients (51.9%) were diagnosed with STEMI, 681 patients (23.8%) with NSTEMI and 694 patients (24.3%) with UA. The baseline characteristics of the participants are listed in Table 1. Among the entire patient group, the mean age was (67 $\pm$ 13) years, 70.7% were males, approximately 56.8% had hypertension, 7.2% had dyslipidemia and 25.4% had diabetes mellitus (DM). About 38% of all patients were current smokers, with higher rates of current smoking in STEMI patients compared with NSTEMI or UA patients. Patients with NSTEMI/UA tended to have more concomitant diseases, including DM, hypertension and history of myocardial infarction (MI) compared with STEMI patients. The median time from symptoms to hospital was 190 min in STEMI, shorter than that of NSTEMI (283 min).
Table 1. Characteristics of patients with acute coronary syndrome (ACS).
Variable | N | STEMI (n = 1482) | NSTEMI (n = 681) | UA (n = 694) | Total (n = 2857) | P value
Demographic
Age (years) | 2854 (99.9%) | 66 ± 14 | 69 ± 13 | 67 ± 12 | 67 ± 13 | $<$ 0.001
Sex (male) | 2857 (100.0%) | 1105 (74.6%) | 463 (68.0%) | 452 (65.1%) | 2020 (70.7%) | $<$ 0.001
Cardiovascular risk factors
Hypertension | 2857 (100.0%) | 754 (50.9%) | 437 (64.2%) | 432 (62.2%) | 1623 (56.8%) | $<$ 0.001
Diabetes mellitus | 2857 (100.0%) | 352 (23.8%) | 188 (27.6%) | 186 (26.8%) | 726 (25.4%) | 0.101
Dyslipidaemia | 2837 (99.3%) | 129 (8.8%) | 26 (3.8%) | 49 (7.1%) | 204 (7.2%) | $<$ 0.001
Current smoker | 2840 (99.4%) | 638 (43.4%) | 237 (35.0%) | 206 (29.8%) | 1081 (38.1%) | $<$ 0.001
Medical history
Myocardial infarction | 2829 (99.0%) | 38 (2.6%) | 33 (4.9%) | 69 (10.0%) | 140 (4.9%) | $<$ 0.001
Coronary heart disease | 2842 (99.5%) | 112 (7.6%) | 121 (17.8%) | 258 (37.3%) | 491 (17.3%) | $<$ 0.001
PCI | 2839 (99.4%) | 41 (2.8%) | 39 (5.8%) | 93 (13.5%) | 173 (6.1%) | $<$ 0.001
CABG | 2843 (99.5%) | 1 (0.1%) | 1 (0.1%) | 3 (0.4%) | 5 (0.2%) | 0.163
Stroke | 2832 (99.1%) | 75 (5.1%) | 34 (5.0%) | 30 (4.4%) | 139 (4.9%) | 0.767
Clinical characteristics
S-to-D (min) | 2852 (99.8%) | 190 (94, 468) | 283 (112, 1084) | 337 (97, 2220) | 226 (97, 758) | $<$ 0.001
Chest discomfort | 2748 (96.2%) | 1362 (94.5%) | 585 (88.8%) | 617 (95.2%) | 2564 (93.3%) | $<$ 0.001
Cardiac arrest | 2810 (98.4%) | 21 (1.4%) | 2 (0.3%) | 1 (0.1%) | 24 (0.9%) | 0.002
Killip class | 2097 (96.9%) | | | | | $<$ 0.001
Killip class I | | 861 (60.2%) | 480 (72.1%) | - | 1341 (63.9%) |
Killip class II | | 313 (21.9%) | 118 (17.7%) | - | 431 (20.6%) |
Killip class III | | 69 (4.8%) | 38 (5.7%) | - | 107 (5.1%) |
Killip class IV | | 188 (13.1%) | 30 (4.5%) | - | 218 (10.4%) |
Heart rate (bpm) | 2813 (98.5%) | 78 (66, 91) | 78 (67, 90) | 74 (65, 85) | 76 (67, 90) | $<$ 0.001
Systolic blood pressure (mmHg) | 2823 (98.8%) | 128 $\pm$ 26 | 137 $\pm$ 25 | 138 $\pm$ 22 | 132 $\pm$ 25 | $<$ 0.001
Multivessel disease | 2052 (71.6%) | 805 (63.7%) | 294 (66.8%) | 162 (46.4%) | 1261 (61.5%) | $<$ 0.001
Tests
Glomerular filtration rate (mL/min) | 1811 (63.4%) | 81.8 $\pm$ 33.0 | 81.5 $\pm$ 32.1 | 89.0 $\pm$ 30.8 | 83.9 $\pm$ 32.2 | 0.182
Troponin (ng/mL) | 1991 (69.7%) | 25.2 (5.1, 710.0) | 8.6 (1.87, 154.7) | 4.79 (0.4, 11.1) | 10.9 (1.9, 121.4) | $<$ 0.001
Hemoglobin (g/L) | 2665 (93.3%) | 133 $\pm$ 22 | 129 $\pm$ 23 | 132 $\pm$ 19 | 131 $\pm$ 21 | $<$ 0.001
Transportation | 2857 (100.0%) | | | | | $<$ 0.001
Ambulances | | 135 (9.1%) | 52 (7.6%) | 41 (5.9%) | 228 (8.0%) |
Taxis/private cars | | 857 (57.8%) | 481 (70.6%) | 598 (86.2%) | 1936 (67.8%) |
Transfer | | 456 (30.8%) | 118 (17.3%) | 45 (6.5%) | 619 (21.7%) |
Hospital stay, days | 2818 (98.6%) | 9 (7, 11) | 8 (6, 11) | 8 (6, 11) | 9.0 (6, 11) | $<$ 0.001
Total costs, ten thousand yuan | 2818 (98.6%) | 3.98 (2.88, 5.14) | 2.38 (0.91, 4.34) | 1.02 (0.68, 1.76) | 3.21 (1.03, 4.58) | $<$ 0.001
P-value for three comparisons. STEMI, ST-segment elevation myocardial infarction. NSTEMI, non-STEMI. UA, unstable angina. PCI, percutaneous coronary intervention. CABG, coronary artery bypass grafting. S-to-D, the time from symptoms to hospital. The laboratory normal range for troponin is 0-5.0 ng/mL.
60.4% of the patients underwent PCI (Table 2). The rates of coronary angiography (CAG) and PCI were higher in patients with STEMI than in those with NSTEMI/UA. However, thrombolytic therapy was used in only 1.8% of the STEMI patients. Overall, almost all patients (98.3%) received antiplatelet drugs at the hospital. Regarding the prescribed medication, lipid-lowering medication, $\beta$ blockers and ACEI/ARB were prescribed for about 95.0%, 67.7% and 54.3% of the patients, respectively.
Table 2. Treatments of patients with ACS.
Variable | STEMI (n = 1482) | NSTEMI (n = 681) | UA (n = 694) | Total (n = 2857) | P value
Coronary angiography | 1301 (87.8%) | 454 (66.7%) | 393 (56.6%) | 2148 (75.2%) | $<$ 0.001
No reperfusion | 262 (17.8%) | 335 (49.2%) | 507 (73.1%) | 1104 (38.6%) | $<$ 0.001
Fibrinolytic therapy | 26 (1.8%) | - | - | 26 (0.9%) | $<$ 0.001
PCI | 1194 (80.6%) | 346 (50.8%) | 187 (26.9%) | 1727 (60.4%) | $<$ 0.001
Primary PCI | 1068 (72.1%) | 190 (27.9%) | 46 (6.6%) | 1304 (45.6%) | $<$ 0.001
Drugs
Single antiplatelet drugs | 75 (5.8%) | 79 (13.0%) | 204 (34.1%) | 358 (14.3%) | $<$ 0.001
Dual antiplatelet drugs | 1215 (93.4%) | 512 (84.3%) | 377 (63.0%) | 2104 (84.0%) | $<$ 0.001
Lipid-lowering drugs | 1207 (95.9%) | 576 (94.3%) | 583 (93.9%) | 2366 (95.0%) | 0.116
β-blockers | 870 (69.3%) | 418 (68.6%) | 394 (63.8%) | 1682 (67.7%) | 0.048
ACEI/ARB | 627 (50.4%) | 361 (59.2%) | 352 (57.1%) | 1340 (54.3%) | 0.001
P-value for three comparisons. STEMI, ST-segment elevation myocardial infarction. NSTEMI, non-STEMI. UA, unstable angina. PCI, percutaneous coronary intervention. ACEI, angiotensin-converting enzyme inhibitors. ARB, angiotensin receptor blockers.
The critical time intervals of patients with STEMI for primary PCI are shown in Fig. 1. PCI was performed within 72 hours in 72.6% of NSTEMI and UA (NSTE-ACS) patients (Table 3).
Fig. 1.
Critical time intervals of patients with STEMI for primary PCI. S-D, the time from symptoms to hospital. S-FMC, the time from symptoms to first medical contact. S-B, the time from symptoms to start of the balloon. D-B, the time from door to start of the balloon. FMC-B, the time from first medical contact to start of the balloon.
Table 3. Time intervals of patients with NSTEMI and UA. CAG, coronary angiography.
Variable | NSTEMI (n = 681) | UA (n = 694) | NSTE-ACS (n = 1375)
Door-to-CAG time (h)
$<$ 2 | 85 (18.8%) | 33 (8.4%) | 118 (14.0%)
$<$ 24 | 244 (54.1%) | 118 (30.2%) | 362 (43.0%)
$<$ 72 | 349 (77.4%) | 266 (68.0%) | 615 (73.0%)
Door-to-PCI time (h)
$<$ 2 | 58 (16.9%) | 4 (2.2%) | 62 (11.7%)
$<$ 24 | 189 (54.9%) | 46 (24.9%) | 235 (44.4%)
$<$ 72 | 267 (77.6%) | 117 (63.2%) | 384 (72.6%)
Proportion of interventional patients among those undergoing angiography (%) | 76.2 | 47.6 | 62.9
NSTE-ACS, NSTEMI and UA.
The in-hospital outcomes for patients with ACS were death (5.4%) and acute heart failure (17.8%), as shown in Fig. 2 and Table 4. After logistic regression analysis, the independent prognosticators of in-hospital mortality in ACS patients were STEMI, cardiac arrest, number of affected vessels $\geq$ 2, advanced age, and high Killip classification, whereas reperfusion therapy was associated with a lower risk of death (Fig. 3).
Fig. 2.
The in-hospital mortality of patients with ACS. The in-hospital mortality for patients with STEMI was 8.1%; the corresponding rates for NSTEMI and UA were 4.0% and 0.9%, respectively.
Fig. 3.
Logistic regression analysis was used to identify independent predictors of hospital mortality. After logistic regression analysis, the independent prognosticators of in-hospital mortality in ACS patients were STEMI, cardiac arrest, number of affected vessels $\geq$ 2, advanced age, and high Killip classification, whereas reperfusion therapy was associated with a lower risk of death.
Table 4. In-hospital outcomes of patients with ACS.
Variable | STEMI (n = 1482) | NSTEMI (n = 681) | UA (n = 694) | Total (n = 2857) | P value
Acute heart failure | 327 (22.6%) | 100 (15.3%) | 67 (9.9%) | 494 (17.8%) | $<$ 0.001
Death | 120 (8.1%) | 27 (4.0%) | 6 (0.9%) | 153 (5.4%) | $<$ 0.001
Death or acute heart failure | 363 (25.0%) | 110 (16.8%) | 68 (10.1%) | 541 (19.4%) | $<$ 0.001
P-value for three comparisons.
4. Discussion
This study was conducted to document the baseline information, management strategies and in-hospital mortality of the patients admitted to hospitals with a diagnosis of ACS in Chengdu, P. R. China. In contrast to the data reported from developed countries [4, 5], we recorded more cases of STEMI than NSTEMI or UA. Patients with STEMI tended to be slightly younger than those with NSTE-ACS, with a higher proportion of men, fewer risk factors and a less frequent history of cardiac disease. However, they were more often smokers and had a higher all-cause mortality rate.
Early revascularization leads to a significant reduction in cardiovascular events [6]. As an important treatment for ACS, PCI contributes to a favorable prognosis. The rate of STEMI patients undergoing PCI was much higher in our registry (79.7%) than in previous studies in P. R. China [7, 8, 9] and the rate of primary PCI (pPCI) in STEMI patients was 72.1%, which is comparable to the percentage of 70-80% in European and American countries [10] and significantly higher than the percentages of pPCI patients in 2017 and 2018 published by the National Center for Cardiovascular Health Quality Control in mainland P. R. China (42.2% and 45.94%, respectively). The high proportion of STEMI patients with pPCI in Chengdu indicates that the construction of chest pain centers contributes to improving the medical level and quality of services provided by the hospitals. As a result, the treatment of STEMI patients has better outcomes, which is conducive to improving the prognosis of the patients. It was found that although the in-hospital mortality of patients with NSTEMI or UA may be lower than that of patients with STEMI, the mortality rate was already comparable to that of STEMI patients at 1 and 2 years of follow-up after being discharged, reflecting the importance of clinical treatment in NSTEMI and UA [11, 12].
In this study, the proportion of patients with NSTE-ACS receiving CAG (60.9%) was comparable to the percentage published by the previous Care for Cardiovascular Disease in China (CCC) study (63.1%) [13], but only 38.8% of the patients received PCI, far lower than the percentage of 58.2% in the CCC study. Besides, the proportion of patients who underwent CAG and PCI was lower than that of the developed countries [4, 14]. The proportion of patients with NSTEMI/UA who underwent PCI was even less than half of that of STEMI patients (79.7%), and the proportion of NSTEMI/UA patients who underwent CAG was also lower than that of STEMI patients. This might be because STEMI is more critical, and it also indicates that doctors pay less attention to patients with NSTEMI/UA and the management in those cases is not as active as in STEMI cases. In addition, clinicians should enhance the awareness of symptoms among NSTEMI/UA patients.
Overall, 61.3% of the ACS patients in this study received revascularization, 60.4% of which was PCI, up from the two-fifths of patients reported in the GRACE registry [15], but there was still room for improvement compared with the PACIFIC registry in Japan [16]. Our thrombolytic rate was the lowest among comparable studies [7, 8, 14]; this might be because PCI is considered the main means of reperfusion and the current national guidelines list pPCI as the preferred treatment for acute MI. Besides, the study included only secondary and tertiary hospitals; hospitals that still rely mainly on intravenous thrombolysis (especially primary hospitals) were not included, which may lead to an underestimate of the proportion of thrombolysis in Chengdu.
The median time from symptom onset to hospital admission in STEMI (190 min) was better than in previous studies conducted in P. R. China [7, 8], which may be due to economic growth and overall development leading to an increase in public awareness of seeking medical treatment. However, patients with STEMI in our study took much longer to reach the hospital than patients in developed countries (145 minutes in Germany [17] and 141 minutes in France [18]), and this could be improved. In this study, patients with NSTEMI/UA took even longer to reach the hospital, which was also demonstrated in several other large-scale studies [14, 18, 19]. Possible explanations include that NSTEMI patients were more likely to present without typical symptoms (chest pain, palpitation and dyspnea) and tend to be older with more comorbidities, including hypertension and DM, so symptoms of cardiac origin may be masked by these chronic diseases, contributing to pre-hospital delays. Our results may help both clinicians and patients gain a deeper understanding of the symptoms of NSTEMI and reduce pre-hospital delays.
The door-to-balloon time (D-B) for primary PCI was 85 min in STEMI patients, which complied with the domestic and international guideline recommendation of a D-B within 90 minutes. However, there was still a gap compared with developed countries (63 minutes in the United States [16], 50 minutes in Germany [17] and 68 minutes in South Korea [20]), and the compliance rate of achieving a D-B within 90 minutes was only 56.8%, far lower than the 75% standard set by the Chinese Chest Pain Center. Common reasons for delays in the D-B include delays in catheter lab access, activation of the catheter lab team and transfer, as well as financial reasons and long patient queues; all of these factors should be addressed [21]. The establishment of the chest pain center aims to optimize the treatment process of ACS and reduce the reperfusion time. In addition, the national medical security system needs to be further improved.
In this study, the in-hospital mortality of the STEMI patients was comparable to that in European and American countries (4%-12%) [4], but showed no significant decline compared with the China PEACE-Retrospective Acute Myocardial Infarction Study (China PEACE study) [7]. This may be due to the higher incidence and worse condition of cardiogenic shock in the STEMI patients in this study, which led to a poor prognosis. Although the median time from symptoms to hospital in this study was shorter than that in the China PEACE study [7], the reperfusion time of some patients was delayed after admission and the total myocardial ischemia time was slightly longer, which may lead to higher mortality. Moreover, there is still room for improvement in the PCI rate of STEMI patients compared with developed countries (95.6% in Japan [22]). The lower proportion of NSTEMI/UA patients undergoing PCI in this study compared with the CCC study may have contributed to the slightly higher in-hospital mortality rate in this study (2.4%) compared with the CCC study (1.7%) [13]. In this study, the incidence of acute heart failure and shock in NSTE-ACS patients was higher than that in the CCC study [13], and the patients had a relatively worse prognosis. Therefore, reducing in-hospital mortality may be achieved by raising awareness of the disease symptoms, reducing pre-hospital patient delay and receiving medical care as soon as possible, i.e., reducing the total myocardial ischemia time of the patients.
The purpose of secondary prevention of ACS is to reduce the recurrence of MI and improve the patients’ quality of life. As a result, the guidelines [23] recommend treating ACS patients with antiplatelet drugs, beta-blockers, ACEI/ARB and statins after being discharged, as well as applying lifestyle and health education interventions. In our study, the utilization rates of the antiplatelet drugs and statins were high, better than those in developed countries such as Japan [22], but the rate of using $\beta$-blockers and ACEI/ARB was lower than other medications [5]. Presumably, clinicians were more cautious about prescribing $\beta$-blockers and ACEI/ARB due to their potential adverse effects on the blood pressure and their negative inotropic effect. In this study, the use of antiplatelet drugs, $\beta$-blockers and statins in STEMI patients was improved compared with the China PEACE study [7]. However, the implementation of secondary prevention guidelines for ACS patients still needs to be further strengthened.
Most of the patients were transferred to hospitals by private vehicles instead of ambulances, which has been associated with pre-hospital delays and may lead to an increased probability of cardiac arrest. Another important consideration is for health care professionals to provide better education to the public, especially to people at high risk of ACS, to help them recognize that ambulances are not only simple transportation vehicles but "mobile hospitals" for early diagnosis and rescue of critical illnesses. In our study, the median hospital stay was 8 days for NSTEMI/UA patients, longer than the reported median of 3 to 4 days in the United States or European countries [24, 25]. This might be because a much smaller proportion of patients received early PCI in our study compared with the United States or European countries, with about 30.0% of the patients receiving PCI $\geq$ 3 days after admission. Besides, the lack of appropriate rehabilitation or secondary prevention measures for patients with NSTE-ACS may have led to prolonged hospital stays.
The findings of this study should be interpreted in view of several limitations. Firstly, the retrospective nature of this study may have caused some degree of bias during data collection. Secondly, our study was an observational, non-randomized registry, which may be subject to the selection bias inherent in this type of clinical investigation. Finally, the 11 participating hospitals were secondary or tertiary hospitals with facilities for advanced interventional therapy, which may overestimate the proportion of performed PCI and underestimate the proportion of thrombolytic therapy applied in ACS patients in Chengdu.
5. Conclusions
Management of ACS has improved significantly in Chengdu, P. R. China owing to the construction of chest pain referral centers with a collaborative emergency system. However, important gaps continue to persist in terms of outcomes when compared to developed countries. Awareness among physicians and patients may help further improve outcomes and efficiency in the management of ACS.
Author contributions
All authors have contributed significantly. Si-Yi Li and Ming-Gang Zhou contributed to the conception of the work, literature search, experimental studies, data acquisition, data analysis, manuscript preparation, manuscript editing. And Si-Yi Li was a major contributor to the writing of the manuscript. Tao Ye and Lian-Chao Cheng contributed to the conception of the work, manuscript review and approval of the final version of the manuscript. Feng Zhu, Cai-Yan Cui and Yu-Mei Zhang contributed to experimental studies, literature search and data acquisition. Lin Cai contributed to the design, literature search, experimental studies, manuscript review, approval of the final version of the manuscript, and agreement of all aspects of the work.
Ethics approval and consent to participate
The study was approved by the Medical Ethics Committee of The Third People's Hospital of Chengdu (Ethics approval number: Chengdusanyuanlun [2019] S-67), which waived the requirement for informed consent from the enrolled patients because of the retrospective study protocol.
Acknowledgment
The authors thank all participants for their help and are grateful for the resources provided by the hospital. Thanks to all the peer reviewers and editors for their opinions and suggestions. We also thank EditSprings (https://www.editsprings.com/) for its language help during the preparation of this manuscript.
Funding
This study was supported by Applied Basic Research Project in Sichuan Province (No. 2018JY0126).
Conflict of interest
The authors have no conflicts of interest to declare.
Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
# Random numbers
You can create vectors and matrices of random numbers drawn from the major distributions. See the package [http://jwork.org/jmathlab/doc/ random_numbers] for a description of the various random-number generators.
For example, let us create a vector and a matrix with random numbers distributed according to a Poisson distribution (assuming a mean of 2):
v = poisson_rnd(2, 10)    % create a vector of 10 Poisson(2) random numbers
m = poisson_rnd(2, 10, 3) % create a 10x3 matrix of Poisson(2) random numbers
printf('%f', m)           % print the matrix values
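For comparison only, and not part of jMathLab, here is a roughly equivalent sketch in Python with NumPy (the generator API shown is NumPy's, not jMathLab's):

```python
# Analogous sketch in Python/NumPy, shown only for comparison with the
# jMathLab calls above; this is not part of jMathLab itself.
import numpy as np

rng = np.random.default_rng()
v = rng.poisson(lam=2, size=10)       # vector of 10 Poisson(2) random numbers
m = rng.poisson(lam=2, size=(10, 3))  # 10x3 matrix of Poisson(2) random numbers
print(m)
```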
Learn how to graph these random numbers using histograms ("density plots") in the next section.
A set is a collection of items (elements). We denote a set using a capital letter and define the items within the set using curly brackets. Duplicates do not contribute anything new to a set, so {1, 2, 3} = {3, 1, 2} = {1, 2, 1, 3, 2}. When doing set operations we often need to fix a universal set U: the set of all possible values we are working with, much like the domain for quantifiers. The universal set is often not explicitly defined but is implicit in the problem at hand. Venn diagrams, invented in 1880 by John Venn, are schematic diagrams that show all possible logical relations between different sets and are a convenient way to picture the operations below. Let A, B and C be sets in a universe U.
Union. The union of A and B, denoted A ∪ B, is the set of elements that are in A, in B, or in both: A ∪ B = { x | x ∈ A or x ∈ B }. For example, if A = { 10, 11, 12, 13 } and B = { 13, 14, 15 }, then A ∪ B = { 10, 11, 12, 13, 14, 15 } (the common element occurs only once).
Intersection. The intersection of A and B, denoted A ∩ B, is the set of elements that are in both A and B: A ∩ B = { x | x ∈ A and x ∈ B }. For example, the intersection of { 1, 2, 3 } and { 2, 3, 4 } is { 2, 3 }.
Set difference (relative complement). The relative complement of A in B, denoted B ∖ A (ISO 31-11), consists of the elements of B that are not in A. It is sometimes written B − A, but this notation is ambiguous, as in some contexts it can be read as the set of all elements b − a, where b is taken from B and a from A. In LaTeX, the variant \smallsetminus is available in the amssymb package.
Complement (absolute complement). The complement of a set A, denoted A′, A^c, ∁A, ∁_U A or Ā, is the set of all elements of the universal set U that are not in A; that is, it is the set difference U − A, the relative complement of A in the universal set. For example, if U = { integers from 1 to 10 } and A = { 3, 6, 9 }, then A′ = { 1, 2, 4, 5, 7, 8, 10 }. Similarly, with the universal set F = { 2, 4, 6, 8, 10, 12 } and O = { 6, 8, 10 }, the complement Ō is { 2, 4, 12 }. The complement laws show that if A is a non-empty, proper subset of U, then { A, A′ } is a partition of U.
Cartesian product. The Cartesian product of sets A1, A2, ..., An, written A1 × A2 × ... × An, is the set of all ordered n-tuples (x1, x2, ..., xn) with x1 ∈ A1, x2 ∈ A2, ..., xn ∈ An. For example, for A = { a, b } and B = { 1, 2 }, A × B = { (a, 1), (a, 2), (b, 1), (b, 2) } and B × A = { (1, a), (1, b), (2, a), (2, b) }.
Complement of a relation. A binary relation R is defined as a subset of a product of sets X × Y, and its complementary relation R̄ is the complement of R in X × Y. The truth of aRb corresponds to a 1 in row a, column b of the logical matrix of R, so producing the complementary relation corresponds to switching all 1s to 0s and 0s to 1s in that matrix.
Sets in programming languages. Some programming languages have sets among their built-in data structures. Such a data structure behaves as a finite set: it consists of a finite number of data items that are not specifically ordered. In some cases the elements are not necessarily distinct, and the data structure codes multisets rather than sets; the operators may also be applied to data structures that are not really mathematical sets, such as ordered lists or arrays. It follows that a language may have a function called set_difference even if it has no dedicated set type. On Unix systems, the comm command is useful for operating on sorted files as sets; for example, the set difference set1 ∖ set2 (and hence the complement, when set1 plays the role of the universal set) can be computed as comm -23 <(sort set1) <(sort set2). (On the original page, the Venn diagram generator was built with Wolfram Alpha widgets, and the set operations were implemented with an AJAX JavaScript library.)
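As noted above, several programming languages ship sets as a built-in data structure. Below is a minimal Python sketch of the operations discussed here, reusing the universal set F = {2, 4, 6, 8, 10, 12} from the example above (the subsets A and B are illustrative choices):

```python
# Complement is always taken relative to an explicit universal set U.
U = {2, 4, 6, 8, 10, 12}   # universal set (F in the example above)
A = {2, 4, 12}             # a subset of U
B = {6, 8, 12}             # another subset of U

print(A | B)   # union: {2, 4, 6, 8, 12}
print(A & B)   # intersection: {12}
print(A - B)   # set difference A \ B: {2, 4}
print(U - A)   # complement of A in U: {6, 8, 10}
```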
# ATLAS Notes
Latest additions:
2015-04-15
17:10
Search for dark matter with the ATLAS detector / ATLAS Collaboration This paper presents the results of several searches for dark matter with the ATLAS experiment at LHC using proton-proton collisions at $\sqrt{s} = 8$~TeV. [...] ATL-PHYS-PROC-2015-013. - 2015. - 6 p. Original Communication (restricted to ATLAS) - Full text
2015-04-13
23:56
ATLAS Inner Detector Alignment Performance with February 2015 Cosmic Rays Data Results of the first alignment of the new insertable B-Layer, which was installed during the first long shutdown, are presented. [...] ATL-PHYS-PUB-2015-009. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-04-13
22:30
An imaging algorithm for vertex reconstruction for ATLAS Run-2 The reconstruction of vertices corresponding to proton--proton collisions in ATLAS is an essential element of event reconstruction used in many performance studies and physics analyses. [...] ATL-PHYS-PUB-2015-008. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-04-13
19:02
Data-driven determination of the energy scale and resolution of jets reconstructed in the ATLAS calorimeters using dijet and multijet events at $\sqrt{s} = 8$ TeV The response of the ATLAS calorimeter to jets is studied using data-driven techniques for proton-proton collisions at $\sqrt{s} = 8$ TeV recorded by ATLAS in 2012. [...] ATLAS-CONF-2015-017. - 2015. - 17 p. Original Communication (restricted to ATLAS) - Full text
2015-04-13
18:59
Jet energy scale and its uncertainty for jets reconstructed using the ATLAS heavy ion jet algorithm The jet energy scale (JES) and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC, reconstructed using techniques developed for jet measurements for heavy ion (HI) collisions. [...] ATLAS-CONF-2015-016. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-04-10
16:39
Looking for a hidden sector in exotic Higgs boson decays with the ATLAS experiment / Coccaro, Andrea (Department of Physics, University of Washington, Seattle) The nature of dark matter is one of the most intriguing questions in particle physics. [...] ATL-PHYS-PROC-2015-012. - 2015. - 7 p. Original Communication (restricted to ATLAS) - Full text
2015-04-06
13:22
Radiation Tolerant Electronics and Digital Processing for the Phase-I Trigger Readout Upgrade of the ATLAS Liquid Argon Calorimeters / Milic, Adriana (European Laboratory for Particle Physics, CERN) The high luminosities of $\mathcal{L} > 10^{34}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}$ at the Large Hadron Collider (LHC) at CERN produce an intense radiation environment that the detectors and their electronics must withstand. [...] ATL-LARG-PROC-2015-001. - 2015. - 6 p. Original Communication (restricted to ATLAS) - Full text
2015-03-30
15:49
Performance of the ATLAS Tile Calorimeter / Heelan, Louise (The University of Texas at Arlington) ; ATLAS Collaboration (The University of Texas at Arlington) The ATLAS Tile hadronic calorimeter (TileCal) provides highly-segmented energy measurements of incoming particles. [...] ATL-TILECAL-PROC-2015-004. - 2015. Original Communication (restricted to ATLAS) - Full text
2015-03-28
07:30
Top quark physics in the ATLAS detector: summary of Run I results / Moreno Llacer, Maria (Georg-August-Universitat Goettingen, II. Physikalisches Institut) An overview of the most recent results on top quark physics obtained using proton-proton collision data collected with the ATLAS detector at the Large Hadron Collider at $\sqrt{s}$ = 7 TeV or $\sqrt{s}$ = 8 TeV centre-of-mass energy is presented. [...] ATL-PHYS-PROC-2015-011. - 2015. - 15 p. Original Communication (restricted to ATLAS) - Full text
2015-03-28
07:02
The Upgrade of the ATLAS Tile Calorimeter Readout Electronics for Phase II / ATLAS Tile Collaboration (School of Physics, University of the Witwatersrand) ; Reed, Robert (School of Physics, University of the Witwatersrand) The Large Hadron Collider at CERN is scheduled to undergo a major upgrade, called the Phase II Upgrade, in 2022. [...] ATL-TILECAL-PROC-2015-003. - 2015. - 5 p. Original Communication (restricted to ATLAS) - Full text
Focus on:
ATLAS PUB Notes (2,592)
|
2015-04-19 21:05:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.865966796875, "perplexity": 4542.750719550315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639482.79/warc/CC-MAIN-20150417045719-00243-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://crypto.stackexchange.com/questions/67486/what-is-chain-voting/67492
|
# What is chain voting?
I read The Three Ballot Voting System by Rivest. This paper-based voting system can be attacked with chain voting, which is mentioned in section 4.9, but I can't find any description of what "chain voting" actually is.
Would anyone explain what it is?
• Welcome to Cryptography. Could you link your resources by editing your question? – kelalaka Feb 21 '19 at 7:53
• What is "paper-based voting"? Just regular voting with physical paper ballots? If yes then I'm not sure the question as posed is on-topic here. – Maeher Feb 21 '19 at 8:05
• Ok. I will include the source – user10842694 Feb 21 '19 at 8:18
After a quick search: it is a vote-buying scheme.
Chain voting, a vote buying scheme in which a crook gives the voter a pre-voted ballot, the voter votes that ballot, and then after leaving the polling place, sells his blank ballot to the crook, who votes it and then gives it to the next willing participant.
This is from Douglas W. Jones's web page, and reference 12 of Rivest's article points to his definition.
This attack is applied in many countries (I'll not give any source). One countermeasure can be a paper-based PUF. Also, some ideas from the University Voting Systems Competition might be interesting to look at.
• Thanks man. I couldn't find anything related to it. I wonder how you got the source. – user10842694 Feb 21 '19 at 8:43
• Google Douglas W. Jones. Chain voting – kelalaka Feb 21 '19 at 9:04
• France has a long history of voting and voting fraud, leading to a well-thought electoral code (except recent changes for ridiculous electronic voting machines, fortunately contained to few cities). Chain voting is next to impossible because ballots are freely available at the entry of the voting place, plus are sent by mail. Thus if a person is asked to participate in such scheme, s/he can accept, take the money, and still vote as s/he wants with a single serious risk of being caught by the villain: marked ballots. These are scrutinized and discounted, + ballots are destroyed after counting. – fgrieu Feb 26 '19 at 21:23
• @fgrieu Thanks for the nice information. Nice to hear that some country has some solutions. Is there any reference that one can read? – kelalaka Feb 26 '19 at 21:25
• @kelalaka: if you can read french, the law is there, or a few clicks away. If you don't, there remains G. translate. Part of what I describe is L58, complemented by other provisions making it cheap for a candidate/party to supply ballots. Note: If you upvoted my earlier comment, that got lost to a late edit. – fgrieu Feb 26 '19 at 21:39
|
2020-01-28 19:03:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4279099404811859, "perplexity": 3112.528670940621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783000.84/warc/CC-MAIN-20200128184745-20200128214745-00328.warc.gz"}
|
https://economics.stackexchange.com/questions/4908/is-the-symmetric-equiblirium-in-congesstion-games-always-inferior-in-terms-of-so
|
# Is the symmetric equilibrium in congestion games always inferior in terms of social welfare?
Let $G$ be a finite, symmetric congestion game. By Nash's theorem, a (mixed) symmetric equilibrium surely exists. Congestion games are also known to admit pure-strategy Nash equilibria, as they are exact potential games.
I was wondering whether a symmetric equilibrium (i.e., one in which all players play each strategy with the same probability) can yield higher social welfare (the sum of the players' payoffs) than any other equilibrium of the game.
The intuition is that if all players are using the same elements, this has to incur higher cost than if they partition across multiple actions, as the payoff per player depends directly on the number of players using the same elements.
Also, is there always a single symmetric equilibrium for congestion games?
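Not an answer to the general question, but a minimal sanity check of the intuition above. The sketch below enumerates a hypothetical two-player, two-resource congestion game in which the cost of a resource equals the number of players using it; the game and the numbers are assumptions made only for the illustration.

```python
from itertools import product

def costs(profile):
    """profile[i] is the resource chosen by player i; return the per-player costs."""
    load = {r: profile.count(r) for r in set(profile)}
    return [load[r] for r in profile]

# Social cost of a pure Nash equilibrium: the players split across the two resources.
pure_ne = (0, 1)
print("pure NE social cost:", sum(costs(pure_ne)))            # 1 + 1 = 2

# Expected social cost of the symmetric mixed equilibrium in which each player
# picks each resource with probability 1/2.
expected = sum(0.25 * sum(costs(p)) for p in product((0, 1), repeat=2))
print("symmetric mixed NE expected social cost:", expected)   # 3.0
```

In this instance the symmetric mixed equilibrium has strictly lower welfare (expected social cost 3 versus 2 for the pure equilibria), which matches the intuition that piling onto the same elements is costly; whether that comparison holds for every finite symmetric congestion game is exactly what is being asked.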
|
2020-01-27 09:15:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6136313676834106, "perplexity": 938.0877123427347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00381.warc.gz"}
|
http://physics.aps.org/synopsis-for/10.1103/PhysRevB.84.161405
|
# Synopsis: Thin-Skinned Insulators
Researchers discover a subtle proximity effect at the interface between a normal metal and an antiferromagnetic insulator.
The so-called proximity effect manifests itself as a mutual induction of physical properties from one material into an adjacent one, across their interface. In the most famous example, superconducting electron pairs are induced in a neighboring normal metal, and conversely, normal electrons in the metal permeate the superconductor. However, at the interface between a metal and an insulator, one would not expect such a behavior. Now, in a Rapid Communication published in Physical Review B, Ko Munakata and collaborators from Stanford University, California, present evidence for a subtle proximity effect that arises between a normal metal and an antiferromagnetic insulator.
The researchers compare bilayers of an ordinary metal, copper ($\text{Cu}$), grown on top of two insulators of distinct types, $\text{MgO}$ and $\text{CuO}$. The former is a conventional band insulator, while the latter is an antiferromagnetic Mott insulator, a material where strong Coulomb interactions prohibit electronic conduction. Low-temperature transport measurements show that the well-understood effects of weak localization due to material disorder are different in the two cases. Their analysis shows that the difference arises from a quenching of spin-flip scattering from trace magnetic impurities (commonly found in pure $\text{Cu}$) in the $\text{Cu}/\text{CuO}$ bilayers, which is not observed in the $\text{Cu}/\text{MgO}$ bilayers. Munakata et al. argue that the freezing of impurity spins is the consequence of an induced alternating spatial spin polarization inside $\text{Cu}$ by the antiferromagnetically ordered spins in $\text{CuO}$. These results not only demonstrate a new phenomenon but may also provide a new mechanism to control spins at solid-state interfaces and possible applications in spintronic devices. – Alex Klironomos
|
2016-09-30 13:34:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38146525621414185, "perplexity": 1567.5474854160966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662197.73/warc/CC-MAIN-20160924173742-00013-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:0746.68056&format=complete
|
# zbMATH — the first resource for mathematics
Formal translation directed by $$LR$$ parsing. (English) Zbl 0746.68056
Summary: The notion of syntax-directed translation was a highly influential idea in the theory of formal translation. Models for describing a formal translation are syntax-directed translation schemes. A special case of syntax-directed translation schemes is the simple syntax-directed translation scheme, which can be written in the form of a translation grammar. For an arbitrary translation described by a translation grammar with an $$LL(k)$$ input grammar, a one-pass translation algorithm can be created by a simple extension of the syntax-analysis algorithm for $$LL(k)$$ grammars. A similar approach for $$LR(k)$$ grammars led to the result that a one-pass formal translation during $$LR(k)$$ analysis is possible only when the translation grammar has the postfix property. The paper studies the construction of an algorithm that can perform a one-pass formal translation for a particular class of translation grammars (called $$LR(k)\;R$$-translation grammars). The basic idea is the following: the algorithm of syntax analysis for $$LR(k)$$ grammars can be extended in such a way that output symbols are emitted not only as part of the reduce operation but also as part of the shift operation.
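The shift-versus-reduce distinction can be illustrated with a small sketch that is not the algorithm of the paper (which concerns general $$LR(k)\;R$$-translation grammars), but shows the same idea on a familiar special case: a one-pass translation of infix expressions to postfix that emits operand symbols at shift time and operator symbols when they are popped ("reduced") off the stack. The token handling and precedence table below are assumptions made only for the illustration.

```python
# Minimal one-pass infix -> postfix translator (shunting-yard style).
PREC = {"+": 1, "*": 2}            # assumed binary operators and their precedences

def infix_to_postfix(tokens):
    output, stack = [], []
    for tok in tokens:
        if tok.isalnum():          # operand: emit immediately (output during "shift")
            output.append(tok)
        elif tok in PREC:          # operator: first emit ("reduce") stronger operators
            while stack and stack[-1] in PREC and PREC[stack[-1]] >= PREC[tok]:
                output.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack and stack[-1] != "(":
                output.append(stack.pop())
            stack.pop()            # discard the "("
    while stack:                   # emit whatever operators remain at end of input
        output.append(stack.pop())
    return output

print(infix_to_postfix(list("a+b*(c+d)")))   # ['a', 'b', 'c', 'd', '+', '*', '+']
```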
##### MSC:
68N20 Theory of compilers and interpreters
68Q42 Grammars and rewriting systems
ALGOL 60
|
2021-03-06 13:17:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4043412506580353, "perplexity": 2909.3138432275973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00190.warc.gz"}
|
https://www.gamedev.net/forums/topic/661439-problems-with-reversing-a-sublist-of-a-linked-list/
|
# Problems with reversing a sublist of a linked list
## Recommended Posts
So I'm trying to implement a doubly linked list again after horribly failing the last time I tried a few years ago, and I was reminded again why I disliked reversing the list through pointer exchange.
// assume ListNode is a valid type
// and head/tail are initialized properly somewhere and filled with data
ListNode *tail;
void reverse(ListNode&* start, ListNode&* end){
ListNode *c = start;
while(c && c != end->next){
std::swap(c->next, c->prev);
c = c->prev;
}
// some extra logic goes here, i'm not sure what
}
// valid function calls on the dll
I can never seem to figure out what that extra bit of logic is to correctly update the references and pointers from other nodes.
std::swap(startPoint->next, endPoint->prev);
std::swap(startPoint, endPoint);
does not seem to be enough to correct the pointers.
##### Share on other sites
EDIT: Sorry, I thought you were just going from head to end, not some positions in-between. Nevermind the below, don't have time to think of a full solution :) GL.
It looks OK to me, assuming you add the std::swap(start, end) after the loop. You also don't need the extra while conditional. I'd do this FWIW.
void reverse(ListNode&* start, Listnode&* end)
{
ListNode* c = start;
while(c != NULL) {
ListNode* next = c->next;
c->next = c->prev; // you could use std::swap here I suppose, but I'm not as familiar with it as just doing it myself.
c->prev = next;
c = next;
}
ListNode* tempStart = start;
start = end;
end = tempStart;
}
I would assume using std::swap would work as well like you have, but I'm not familiar with it so I just do it myself, and I would also assume std::swap (start, end) would be the right call too. You can always add some temps just to check it does what you think.
Edited by BeerNutts
##### Share on other sites
Those kinds of problems are best solved with a pencil and paper - don't annoy your brain with bizarre gymnastics it is not built for.
This is supposed to turn a list of: X-start-B-C-end-Y into X-end-C-B-start-Y.
void reverse(ListNode* start, ListNode* end) {
assert(start && end);
ListNode *at = start, *_start = start->prev, *_end = end->next;
while(at != end) {
std::swap(at->next, at->prev);
at = at->prev;
}
std::swap(end->next, end->prev);
end->prev = _start;
start->next = _end;
if(_start) _start->next = end;
if(_end) _end->prev = start;
}
... my guess. Have fun covering it with tests.
edit:
"ListNode&*" ... wut? Did you mean "ListNode*&"?
edit2:
(what compiler can make sense out of "ListNode&*"? ... and could someone explain that to me :/ ) Anyway, question: what is supposed to happen here "reverse(foo->prev, bar->prev)"? The path you seem to try leads to a dark place.
Wrap your list implementation in a class, so "reverse" can update the list start and end points when needed (else clauses for the example code). Edited by tanzanite7
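For completeness, a minimal sketch of the same stitch-up logic, written here in Python rather than C++ and not taken from any of the posted snippets: swap next/prev inside the run [start, end], then reconnect the two outside neighbours.

```python
class ListNode:
    def __init__(self, value):
        self.value, self.prev, self.next = value, None, None

def reverse_sublist(start, end):
    """Reverse the run [start, end] of a doubly linked list in place."""
    before, after = start.prev, end.next
    node = start
    while node is not after:                 # swap pointers on every node in the run
        node.prev, node.next = node.next, node.prev
        node = node.prev                     # the old 'next' now lives in 'prev'
    # stitch the reversed run back into the surrounding list
    end.prev, start.next = before, after
    if before: before.next = end
    if after:  after.prev = start
```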
|
2018-02-23 01:08:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22855119407176971, "perplexity": 4365.74053004714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814300.52/warc/CC-MAIN-20180222235935-20180223015935-00733.warc.gz"}
|
https://myschool.ng/classroom/mathematics/9
|
### If log 10 to base 8 = X, evaluate log 5 to base 8 in...
If log 10 to base 8 = X, evaluate log 5 to base 8 in terms of X.
• A. $$\frac{1}{2}$$X
• B. X-$$\frac{1}{4}$$
• C. X-$$\frac{1}{3}$$
• D. X-$$\frac{1}{2}$$
##### Explanation
$$\log_8 10 = X = \log_8(2 \times 5)$$
$$\log_8 2 + \log_8 5 = X$$
Base 8 can be written as $$2^3$$
$$\log_8 2 = y$$
therefore $$2 = 8^y$$
$$y = \frac{1}{3}$$
$$\frac{1}{3} = \log_8 2$$
taking $$\frac{1}{3}$$ to the other side of the original equation
$$\log_8 5 = X - \frac{1}{3}$$
explanation courtesy of Oluteyu and Ifechuks
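A quick numerical check of the result (a sketch using Python's math module; it only confirms that $$\log_8 5$$ and $$X - \frac{1}{3}$$ agree):

```python
import math

X = math.log(10, 8)        # log of 10 to base 8
print(math.log(5, 8))      # 0.7739...
print(X - 1/3)             # 0.7739...  -> the same value, i.e. X - 1/3
```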
#### Contributions (91)
cotterell
4 years ago
If log10 to base 8 = X
I.e log5 - base8
Log(5 - 8)
X = log3
Applying the law of indices which if 3^0 = 1, a = 1
So log3 = 1
Applying the rule of zero power law
Log3^-1 = 1/3 I.e X=log1/3
• kareem: I like this method
6 months ago
• Yinkz: dope 1
5 months ago
Shozlee
5 years ago
Woahhhhhh ! All cool..............
Log 10 base 8 = X
log 5 base 8 = log 10/2
" law of log
log 10 base 8 - log 2 base 8
since log 10 is 'X' lets equate log 2 base 8 to 'Y'
log 2 base 8 = Y
2^1 = 8^y ==== 2^1 = 2^3y
equating the powers
1=3y ===== y = 1/3
recall X - Y
log 5 base 8 = X-(1/3)
• Perosky: Gud
1 year ago
• OLAMIPLENTY: U try
8 months ago
• Favour: Whao dat is awesome
7 months ago
• vicent: Woaaa!!
Log10 to base 8 =x and in place of x(log5 to base 8)
Since the are of the same base
therefore..
Log10 to base 8 =log5 to base 8
Log10 dividing by 5 to base 8
Log2 to base 8(breaking down the base to base 2 it 2^3=8)
Log2to base 2^3
Therefore....3log2 to base 2 =1(applying algorithm to its own base)
3*1
3,,
5 months ago
Tahywoh
5 years ago
Hmm, my people, let's just think. If we simplify log 5 base 8 + log 2 base 8, we will get log 10 base 8, and we have been given that log 10 base 8 = X. Now let's move on.
Log 5 base 8 + log 2 base 8 = X. Are u with me? We must be careful here. Can you still remember the rule that says log a^2 base a^3 gives 2/3 log a base a, which is equal to 2/3 x 1 = 2/3?
Can you still understand? Go along with me my brother and sister,
so let's apply the same thing here, log 2^1 base 8. Do not forget that 8 can also be written as 2^3.
You got it? If u are with me, am sure u're understanding a little bit.
So we have log 2^1 base 2^3 to be 1/3 log 2 base 2, which will now give us 1/3 x 1 = 1/3.
U understand me, don't relax pls, keep going along with me.
Recall that we say log 5 base 8 + log 2 base 8 = X.
Now that we know our log 2 base 8 = 1/3, we put it into the equation to have..
Log 5 base 8 + 1/3 = X.
Therefore, log 5 base 8 = X - 1/3.
MUHAMMED SODIQ
6 years ago
since log5 to base 8 = x, and x = log10 to base 8
i.e log5 to base 8 =log10 to base8
then x =log5 to base 8 - log10 to base 8, 5/10=1/2,i.e 2^-1 and 8=2^3,= -1/3=x
Log8 10=X is given.
Log8 5 = log8(10/2).
Then log10 - log2 = X - log2.
(-log8 2 =X) 8(x) =2
2^(3y) = 2^(1)
Eliminate 2, e.g (2^((3x)) =2^(1)) Then you are left with 3x =1.
= 3x =1
x=1/3
.∴ x=-1/3
By bringing down the - negative sign. The answer is x=-(1/3)
peterpan20
2 years ago
we are told that we should calculate the distance of the foot of the ladder from the wall. firstly, go back to the question, which says "a ladder 6m long": our ladder is 6m long and leans against a vertical wall. we all know a line is either a standing line or a slanting line, but in this question we are told it is a vertical wall. to solve it we are given 60° with the wall, which means the angle sits between the hypotenuse and the adjacent side, so the angle faces the opposite side of the triangle. now our hypotenuse is 6m long, while the side opposite the angle is not given, but we can call it "x": sin theta = opposite/hypotenuse
opposite = x, hypotenuse = 6, theta = 60°, and sin 60° = √3/2
sin 60° = x/6, or √3/2 = x/6, so when you cross multiply,
x = 6√3/2, and 6 divided by 2 gives 3, therefore the answer is 3√3.
Smallking
3 years ago
Brilliantly done, keep it up
Ezichi
9 months ago
that answer is wromg
• Endurance: So very wrong.d ans is A
5 months ago
fidel91
3 years ago
pls,were z d 2 coming from,d 2 z confusing me o
pharteemarh
6 years ago
pls we nid an undastandable nd simple solution.
• ebytex: just add me,let give more xplaination
6 years ago
sunday felicia
3 years ago
WOW!!! U ARE GUYZ ARE WONDERFUL NOE I GOT IT
it can also be done this way
log10 to base 8 is the same as log5x2 base 8
therefore log5+log2=x,
so, log2 to base 8 is the same as log 2 to base 2 raise to the power of 3
then, 1/3log2 to base 2 will be 1/3
since log5 to base 8 + 1/3 is x, then log5 to base 8 will be x - 1/3.
Felix Brown
6 years ago
Log8 raise to power 10 minus log8 raise to power 5 equal to X gives log8 raise to power 2 equal to X. Therefore, 8 raise to power x = 2 raise to power one. Then, 2 raise to power 3x= 2 raise to power one . Then, 2 cancel on both sides wht will b left is 3x=1. X=1\3 when X crosses equality sign ,d ansa will b X-1\3. Tanx nd God bless una
• Lexeric: tnks bro now I understand
1 year ago
sammy6829
4 years ago
very wrong sis.
|
2019-08-22 01:51:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4435971677303314, "perplexity": 5164.879434539506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316555.4/warc/CC-MAIN-20190822000659-20190822022659-00540.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?t=44061
|
## Half-Life
$\frac{d[R]}{dt}=-k[R]; \ln [R]=-kt + \ln [R]_{0}; t_{\frac{1}{2}}=\frac{0.693}{k}$
Linh Vo 2J
Posts: 61
Joined: Sat Apr 28, 2018 3:00 am
### Half-Life
When we talk about half-life, does this imply that there is normal life? I'm confused about what half-life is and when we would want to calculate it?
eden tefera 2B
Posts: 39
Joined: Fri Sep 28, 2018 12:21 am
### Re: Half-Life
Half-life is the time that it takes for a substance to decompose to half of its initial amount, no matter the state.
eden tefera 2B
Posts: 39
Joined: Fri Sep 28, 2018 12:21 am
### Re: Half-Life
To add, half-life is important for determining how quickly unstable atoms undergo radioactive decay, or how long stable atoms survive.
Rosha Mamita 2H
Posts: 63
Joined: Fri Sep 28, 2018 12:19 am
### Re: Half-Life
half life refers to the time it takes for a substance to decay by one-half of its initial amount. Half life and its equation are used to determine how old a substance is, the initial amount of a substance, or the final amount of a substance after a period of time
Cole Doolittle 2K
Posts: 30
Joined: Fri Sep 28, 2018 12:20 am
### Re: Half-Life
Typically we would use the half-life formula to determine how long an element has been decaying.
Ashe Chen 2C
Posts: 31
Joined: Mon Jan 07, 2019 8:23 am
### Re: Half-Life
In addition, half-life is often used for carbon dating of samples, for example fossils, to determine their age from the amount of decay the sample has undergone. There is no "normal life" because that would imply it is the time it takes for the entire sample to decay.
Yiting_Gong_4L
Posts: 69
Joined: Fri Sep 28, 2018 12:25 am
### Re: Half-Life
Half life is used to determine the time it takes for a substance to decompose into half its initial amount.
Brian Chang 2H
Posts: 65
Joined: Fri Sep 28, 2018 12:17 am
### Re: Half-Life
Linh Vo 2J wrote:When we talk about half-life, does this imply that there is normal life? I'm confused about what half-life is and when we would want to calculate it?
Half-life is defined as the time taken for the radioactivity of a specified isotope to fall to half its original value.
Simply put, it's the time taken for half of our sample to decay. ($\Delta t$ when x goes to x/2)
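A small numerical illustration of the first-order relations quoted at the top of the thread (the rate constant is a made-up value, chosen only for the example):

```python
import math

k = 0.0462                       # hypothetical first-order rate constant, per year

t_half = math.log(2) / k         # the t_1/2 = 0.693/k relation
print(f"half-life: {t_half:.1f} years")                      # ~15.0 years

# fraction remaining after t years: [R]/[R]0 = exp(-k*t)
for t in (t_half, 2 * t_half, 3 * t_half):
    print(f"after {t:.0f} years: {math.exp(-k * t):.3f} of the sample remains")
```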
|
2020-10-25 10:28:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5447043180465698, "perplexity": 2401.155015846218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107888931.67/warc/CC-MAIN-20201025100059-20201025130059-00649.warc.gz"}
|
https://www.physicsforums.com/threads/telescope-lens-upgrade.83163/
|
#### vincentm
Gold Member
IS it possible for this one?
I was given an older Meade telescope today the specs:
StarQuest
Model: 60AZ-M
60mm (2.4") Altazimuth refratcing telescope
Lens specs:
- 5x24mm viewfinder
- SR4mm H12.5mm and H25mm eyepieces (0.965" barrel diameter)
- 2x Barlow lenses
What can i upgrade on this thing? It's quite old
Related Astronomy and Astrophysics News on Phys.org
#### russ_watters
Mentor
That's an el-cheapo/K-Mart special and not really worth anything. As a beginner scope it'll provide more frustration than anything else.
Those things come with a barlow lens and 4mm eyepiece so they can advertise 450x magnification (probably a 900mm focal length), but in reality, anything over about 100x will be too fuzzy and moving too fast to see.
If you're feeling lucky, try viewing the moon and Jupiter (look south just after sunset for Jupiter) using the 12.5mm eyepiece.
Last edited:
#### vincentm
Gold Member
Okay, that's what i was thinking, thanks for the info. I'll be looking forward to dropping $300-$400 on a new telescope then, next month. Any suggestions?
#### turbo
Gold Member
What would you like to do with the 'scope? Given the humidity and atmospheric turbulence you can experience there (sandwiched between the ocean and the mountains), you might want to attend a few star parties given by local amateur observing groups and see what those folks are concentrating on observing, and what equipment they favor. Someone on the east coast of the US or in Japan might like to see if they can catch a comet (with suitable telescope) and put their name on it. Someone in a very dry area might favor a very different design than someone from an area prone to dewing, fog, etc. You can see what I'm getting at, I hope. Here in Maine, summer is not always that great (haze and pollution from upwind), but there are nights that are very good. If you can "suit up" for those cold (-10 to -20F often) February nights and store the scope in a very cold space so it doesn't need lots of cool-down time, winter skies can be fantastic! Sometimes the Aurorae swamp out the sky, and you have to just take pictures of the colored curtains, sheets, and rays, but that can be fun too - it just sucks when you were planning to take a very long guided photo of a cherished deep-sky object and you glance up from the guiding eyepiece and notice that the sky is so bright you could probably read a newspaper out there. :yuck:
BTW, for anybody who lives near 45 deg latitude or higher in either hemisphere, if you do not have bright aurorae washing out your sky periodically, you do NOT have optimal skies for viewing faint extended objects. You are in an area where light pollution, haze, etc is pervasive, and might want to think about lunar, solar, planetary observations, double stars, etc. There is lots of great stuff to look at, but don't expect to be knocked out by even M31 under crappy seeing conditions. In the northern hemisphere, if you do not easily see M31 naked-eye most times when you observe, you are in a less-than-good location. In northern Maine (especially with a dry Canadian high coming through) M31 is a beacon - you cannot fail to see it. The fact that it made #31 on Messier's list is probably a testament to the poor quality of Paris' air when he made his observations.
Last edited:
#### Chronos
Gold Member
Turbo is the resident expert on scopes here, earned by sheer willpower in the face of winters' fury - or perhaps the cumulative effects of frostbite. I like short tubes, be they cats or low f-ratio light buckets [my personal favorite]. They are portable, do not require bedrock mounts that weigh tons - and chicks dig deep sky views with wide relief angle eyepieces.
#### vincentm
Gold Member
well i got it to my friends condo who lives on the 19th floor of her building, and has a decent view of some stars and set it up. But the viewfinder is inverted. WHY? Also i was only given one eyepiece and i can't possibly see anything through it, the viewing hole on it is too damn small.
#### turbo
Gold Member
vincentm said:
well i got it to my friends condo who lives on the 19th floor of her building, and has a decent view of some stars and set it up. But the viewfinder is inverted. WHY? Also i was only given one eyepiece and i can't possibly see anything through it, the viewing hole on it is too damn small.
The viewfinder is inverted because that's the way simple refractors are. You can buy a Porro prism to correct the view, but that's not worth the money and it degrades the image in what is already a VERY tiny little instrument.
Now to the eyepiece: You have an ocular that gives an unrealistically small exit pupil and probably an unrealistically high power (very common in small cheap scopes!).
#### (Q)
Vincent
What you're asking for would cost 5-10X what you're willing to spend. You can get an entry level scope and a few cheap eyepieces for that price, but that's about it.
A decent cassegrain telescope will cost you about $1000 and is probably the best bang for buck for deep-sky and photography. A newtonian may a little cheaper but will be much larger,heavier and therefore more difficult to mount. You can get a small 80-100mm APO refractor for$400-$900 for the chinese scopes and a lot more money (many thousands) for larger aperture and better quality. The small APO take good pics but are only good for planetary and moon, they don't have the aperture for deep-sky. The most important thing about photography is the mount, and you'll be spending$1000 plus for a decent mount. Add more money for tracking, which you must have for photography, and a lot more money for GOTO capabilities, if you want the mount to find the objects for you.
Then there are all the accessories that you didn't think about, which can run to equal or more than what I've already outlined. Some peoples eyepiece collections cost more than the scopes and mounts they own.
Or, you can go all out and get a 28" truss dob with all the tracking and GOTO that will do all you ask for a mere $20,000. CCD cameras can cost anywhere from$100 (Celestron NexImage) to $10,000 and more. Or, you can get an adapter for your SLT or digital camera that mounts right to an eyepiece. Cheap, and not that effective. Check out Astromart, they are the largest website for buying and selling telescope gear, they also have a forum that you can read and sign up to ask questions. #### turbo Gold Member vincentm said: Thanks alot Turbo-1, I'm just gonna drop$400-$500 on a good telescope. What do you recommend, refractor or a reflecting telescope? I wanna view deep sky objects and photograph the planets. what camera would be good to use? My total budget for both will be around$800
For faint, extended objects like nebulae and galaxies, you will want as much aperture as possible, which should make you consider a Dobsonian, given your budget. For planetary observing, or double stars, a small refractor or catadioptric might be nice. Please be aware that for high magnification viewing or for photography, you will need a decent equatorial mount with a drive and that is not going to happen on your budget unless you stumble into a deal on the used market. You might want to seek out a local astronomy club, attend a meeting or two, and let the other members know that you are in the market for a scope. You might get a good deal that way, and you get to try it out first. Bonus!
#### Chronos
Gold Member
I agree with Turbo. You will get more bang for the buck from a dobson in your price range. Actually, it's not terribly difficult to build your own mount. I buried a 4" pipe in the backyard to mount my first scope [it worked very well]. Portable mounts are a little more costly, but doable. The easiest way is to build a platform on wheels you can haul around in a truck or minivan. A little design ingenuity is all that's required.
#### vincentm
Gold Member
Is it possible for one to make his own telescope? I love making things. Any info on THAT?
#### turbo
Gold Member
vincentm said:
Is it possible for one to make his own telescope? I love making things. Any info on THAT?
It is not only possible but quite do-able. You might want to buy a dobsonian kit that supplies the primary and secondary mirrors and mounts and a focusser and build your own tube, rockerbox mount with lazy-susan base and pvc pivots.
This will get you going pretty quickly. If you enjoy that experience, you might want to grind and figure your own primary mirror for your next telescope. You can buy mirrors to build a short focal-length light bucket, and if you decide to build another newtonian, you can grind and figure a long focal length primary for that. It's easier to figure a long focal length primary for a newtonian than a short focal length primary, and it will give you superior planetary views if you do a good job.
#### vincentm
Gold Member
turbo-1 said:
It is not only possible but quite do-able. You might want to buy a dobsonian kit that supplies the primary and secondary mirrors and mounts and a focusser and build your own tube, rockerbox mount with lazy-susan base and pvc pivots.
This will get you going pretty quickly. If you enjoy that experience, you might want to grind and figure your own primary mirror for your next telescope. You can buy mirrors to build a short focal-length light bucket, and if you decide to build another newtonian, you can grind and figure a long focal length primary for that. It's easier to figure a long focal length primary for a newtonian than a short focal length primary, and it will give you superior planetary views if you do a good job.
Reason why i asked is because there is a guy down the street from me who makes them for a living and takes custom orders. I went to him and asked if he could make me one to my liking (based on that i can provide proper info) as to how i would like my telescope. Thank you turbo-1, you are a great help. i am starting to like PF and thought i'd contribute a bit and donate. :D
#### turbo
Gold Member
What kinds of telescopes does he make, and does he grind and figure his own optics? You may be able to get a really nice scope this way. It would be nice to know what his strengths are, so you can get the best bang for the buck. If he is a whiz at figuring mirrors for newtonians, that gives us a nice starting point. Knowing this, we can figure out whether you want a large-aperture dobsonian, a long-focal length planetary newtonian with curved spider vanes and a small secondary (great for planetary views), or maybe something in between.
If he buys optical components and assembles optical tube assemblies and mounts, that's another level of competence (one which you may be able to approach with a decent Dobsonian kit).
Let me know what he usually builds, and I'll try to help you get a decent scope. By the way, I have some massive copier lenses that might make good objectives for short focal length (but big aperture) finders. Once you figure out what kind of scope you want to build or buy, let's figure out what kind of finderscope (or low-power second scope) you might want to have, and I'll send you an objective to build that with.
#### vincentm
Gold Member
turbo-1 said:
What kinds of telescopes does he make, and does he grind and figure his own optics? You may be able to get a really nice scope this way. It would be nice to know what his strengths are, so you can get the best bang for the buck. If he is a whiz at figuring mirrors for newtonians, that gives us a nice starting point. Knowing this, we can figure out whether you want a large-aperture dobsonian, a long-focal length planetary newtonian with curved spider vanes and a small secondary (great for planetary views), or maybe something in between.
If he buys optical components and assembles optical tube assemblies and mounts, that's another level of competence (one which you may be able to approach with a decent Dobsonian kit).
Let me know what he usually builds, and I'll try to help you get a decent scope. By the way, I have some massive copier lenses that might make good objectives for short focal length (but big aperture) finders. Once you figure out what kind of scope you want to build or buy, let's figure out what kind of finderscope (or low-power second scope) you might want to have, and I'll send you an objective to build that with.
Well i just spoke to him on my lunch hour, and he said he doesn't like to do it too much now, but it can be a possibility. I forgot to ask him what kind of scopes he can build but he does grind and figure his own optics, his shop is quite old and dusty and he is an older man in his 70's i can tell he knows his stuff. anyways he has a bushnell dobsonian telescope (114mm) that he can sell me for $150. It has a bulb-ish base and is moved fairly easy. i suck at words sometimes, so it's hard for me to go into detail on how this scope looks but i'm sure you get the idea #### turbo Gold Member It sounds like the Bushnell might be the same scope that Edmunds Scientific calls the Astro-scan. Is this it? They're$200 new with a couple of eyepieces.
http://www.scientificsonline.com/product.asp_Q_pn_E_3002001
You might want to join a local astronomy club, or at least go to a meeting or two as a prospective member and see what telescopes the other members have outgrown or no longer have the space to store. You might get a REAL deal that way.
#### vincentm
Gold Member
i think so, but i don't know of any links, or clubs here in seattle. Maybe you can provide a linky or two?
#### turbo
Gold Member
vincentm said:
i think so, but i don't know of any links, or clubs here in seattle. Maybe you can provide a linky or two?
OK, but it isn't that hard to type "astronomy club" and Seattle into Google. (teach a man to fish.....)
http://www.seattleastro.org/
There are LOTS of clubs in Washington state.
#### vincentm
Gold Member
well, i know, but...just wasn't sure, i never had been to a star party/club. but thank you, that link rocked and i printed the 16-page pdf.
#### turbo
Gold Member
The club has a library of small telescopes, and if you join and are a member in good standing, you can borrow one for up to a month at a time. That's a nice benefit, but better yet, you will get free advice and equipement references from people who care about amateur astronomy. Do not overlook the chance to make friends with these people and take advantage of their experience - they are willing to share, else they would not belong to the club.
http://www.seattleastro.org/resources.html
#### vincentm
Gold Member
Well, sorry to resurrect this thread, but after months of being in sort of a financial slump, i went ahead and purchased a Dobsonian Bushnell Voyager Telescope. I'm pretty sure that i can say, i'll be happy with this scope although i live in downtown Seattle, from my roof deck of my Condo i can get a pretty good view of Orion's belt and quite a few other stars along with the moon. Thanks turbo-1 for your advice, i got a smoking deal on it too, \$100!
It has a focal length of 500mm and comes with a 25mm and a 5mm eyepiece, it's super easy to setup and carry around, and comes with a shoulder strap. :)
http://www.apogeeinc.com/product.asp?itemid=766 [Broken]
Once again, thank you.
Last edited by a moderator:
#### turbo
Gold Member
You should at least get your "juices flowing" with this little scope. It is portable enough to take out for a quick look-see any time you want, although you should recognize that if you take it outside somewhere and let it cool down to ambient temperature before viewing, you will probably get get crisper views. Good luck.
I have a Celestron Comet Catcher that I bought quite a few years ago to suit the same purpose. It's OK for a quick look-see.
|
2019-11-12 12:50:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24799519777297974, "perplexity": 1930.9242001678347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665573.50/warc/CC-MAIN-20191112124615-20191112152615-00426.warc.gz"}
|
https://www.transtutors.com/questions/ovin-erge-pound-all-unt-costs-to-the-nearest-one-tenth-of-one-cent-b-indicate-where--2805222.htm
|
# OVIn-erge (Round all unit costs to the nearest one-tenth of one cent) (b) Indicate where the inven...
(Round all unit costs to the nearest one-tenth of one cent.) (b) Indicate where the inventory costs that were calculated in this exercise are different from the ones in E8-18 and explain the possible reasons why. (LO 6, 8, 12) E8-20 (Lower of Cost and Net Realizable Value, Periodic Method - Journal Entries) As a result of its annual inventory count, Tarweed Corp. determined its ending inventory at cost and at the lower of cost and net realizable value at December 31, 2016, and December 31, 2017. This information is as follows:
                 Cost       Lower of Cost and NRV
Dec. 31, 2016    $321,000   $283,250
Dec. 31, 2017    $385,000   $351,250
a) Prepare the journal entries required at December 31, 2016 and 2017, assuming that the inventory is recorded directly at the lower of cost and net realizable value and that a periodic inventory system is used. Assume that cost was lower than NRV at December 31, 2015. b) Prepare the same entries assuming that the inventory is recorded at cost and an allowance account is adjusted each year end under a periodic system. c) What would be the effect on income under both methods? d) How would the Statement of Financial Position reflect these transactions?
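Not a solution to the exercise, but the write-downs implied by the figures can be computed directly (a small sketch; the journal entries themselves depend on whether the direct method or the allowance method is used):

```python
cost   = {2016: 321_000, 2017: 385_000}
lc_nrv = {2016: 283_250, 2017: 351_250}
for year in (2016, 2017):
    print(year, cost[year] - lc_nrv[year])   # 2016: 37,750   2017: 33,750
```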
|
2021-12-05 18:14:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35312619805336, "perplexity": 2788.7275507701656}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363215.8/warc/CC-MAIN-20211205160950-20211205190950-00283.warc.gz"}
|
https://math.stackexchange.com/questions/2560930/greens-theorem-properly-parameterizing-in-polar-coordinates
|
# Green's Theorem: Properly parameterizing in polar coordinates
My professor provided our class the folloring example problem demonstrating Green's Theorem:
Evaluate $\int F \cdot dr$ where $C$ is the circle with radius $2$ oriented clockwise and $F$ is the function $$F = (3y+\frac{e^{80/x^2}}{x^{2017}},-4x-\frac{e^{2017}}{e^{2018}})$$
The problem is easy once Green's Theorem is applied: the nasty exponential terms drop out when you take the partial derivatives, and you are left integrating $7$ over the disk ($-7$ if the circle were counter-clockwise). However, what he did next is confusing me.
Since it is a circle, the integral should be in polar coordinates, thus the integral should be $7\iint r dr d\theta$. However, my professor evaluated it as simply $7\iint dA$ and did not add an $r$. Then, when he finished integrating $r$ from $0$ to $2$ and $\theta$ from $0$ to $2\pi$, his answer is $7\cdot 4\pi$, which is $28\pi$.
When I attempted the problem, I did add the $r$ and ended up with $24\pi$.
Am I not supposed to add the extra polar coordinate components in this problem? Or did my professor make a mistake? Below is a picture of his example.
$$7 \int_0^{2\pi} \int_0^2 r \,\,dr d\theta=7 \cdot (2\pi)\frac{2^2}{2}=28\pi$$
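For what it's worth: the $r$ does belong there, since $dA = r\,dr\,d\theta$ in polar coordinates, and the computation above gives $28\pi$. For a disk of radius $2$ the answer happens to come out the same even if the $r$ is dropped, because $\int_0^2 r\,dr = 2 = \int_0^2 dr$, so the professor's $28\pi$ is correct either way; getting $24\pi$ with the $r$ included points to an arithmetic slip rather than a setup error.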
|
2019-07-20 00:50:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262691140174866, "perplexity": 109.51558831521905}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526401.41/warc/CC-MAIN-20190720004131-20190720030131-00104.warc.gz"}
|
https://physics.stackexchange.com/questions/207097/force-on-side-of-pool-from-water
|
Force on side of pool from water
Given a pool with dimensions $$\ell \times w \times h \, ,$$ I am trying to derive an equation that will yield the force by the water on the sides of the pool, namely $$\ell\times h \quad \mathrm{or} \quad w \times h \, .$$ For the side of the pool with dimensions $\ell \times h$, I started by using the familiar equation for pressure $$F = PA \, .$$ Plugging in the expression for hydrostatic pressure for $P$ gives $$F = \rho ghA =\rho gh(\ell \times h) = \boxed{\rho g \ell h^2} \, .$$ Is my reasoning, and corresponding solution correct?
• Hydrostatic pressure changes with height. You have just multiplied by area, which means that you have assumed it to be constant. Instead, you should integrate over the area. You'll get an extra 1/2 term for the force. – Goobs Sep 15 '15 at 4:21
As @Goobs says, the pressure force is $0$ at the top of the water line and increases to $\rho~g~y~dA$ on a surface of area $dA$ at depth $y$. Since the pressure increases linearly from $0$ at the surface to $\rho~g~h$ at the bottom, the average pressure on the wall is half of its maximum value, and the total force is $\frac 12 \rho g h (h \ell).$
• Would this be correct? $\int dF = \int_0^H\rho g \ell h \,\,dh = \rho g\ell\int_0^H h\,\,dh = \boxed{\frac{1}{2}\rho g \ell H^2}$ – rgarci0959 Sep 15 '15 at 4:51
• Yes. For bonus points you would write it as $\int dA~\rho~g~h$ to start with, as that's one of those forces that you "know" is correct (to get the net force in some direction, sum all the little forces in that direction). – CR Drost Sep 15 '15 at 5:03
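The integral in the preceding comment can be verified symbolically; a short SymPy sketch (variable names are illustrative) reproduces the $\frac{1}{2}\rho g \ell H^{2}$ result:

```python
# Hydrostatic force on the (ell x H) pool wall: integrate rho*g*y over horizontal strips.
import sympy as sp

rho, g, ell, H, y = sp.symbols('rho g ell H y', positive=True)

# A strip of width ell and thickness dy at depth y carries dF = rho*g*y * ell dy.
F = sp.integrate(rho * g * y * ell, (y, 0, H))
print(F)  # H**2*ell*g*rho/2, i.e. (1/2) * rho * g * ell * H**2
```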
http://openstudy.com/updates/50f70225e4b007c4a2eb2a75
## Rachel98: Give the domain and range of the relation. Tell whether the relation is a function. How do you find the domain and range of a circle graph? For example, the one in the comments.
1. Rachel98
2. ceb105
The domain would conventionally be defined as $D=\left\{ x \in \mathbb{R} : -3 \le x \le 3 \right\}$. Does this help you to find the range?
3. Rachel98
So the range would be $-2 \le y \le 2$?
4. ceb105
Indeed, yes.
http://mathhelpforum.com/differential-equations/166778-parametric-form-quasi-linear-pde.html
# Parametric form of Quasi Linear PDE
1. ## Parametric form of Quasi Linear PDE
Hi,
Could anyone check the attached and see where I have gone wrong on this problem? I was asked to find the parametric form of the PDE and hence the relationship between u and x, y. The general solution does not match the solution for the initial conditions.
Thanks
2. I believe you changed sign going from
$- \dfrac{1}{x} = t + A \; \text{to}\; x = \dfrac{1}{-t - A}$
Edit. No mistake - my bad.
3. But isn't it true that if
$-\dfrac{1}{x}=t+A,$ then
$\dfrac{1}{x}=-t-A,$ and hence
$x=\dfrac{1}{-t-A}?$ So, I think that's all right.
I think the mistake might be in applying the initial conditions. Are you told that when $u=0,$ it's also true that $t=0?$ I don't see that in the original problem statement. So, you can only say that
$s=\dfrac{1}{-t-A}.$
You've parametrized the equation in terms of $t$ but there's no guarantee that that parametrization has its origin just anywhere you want it, right? I would go through and re-apply the initial conditions after your integrations, and just leave $t$ in there and see what happens.
4. Originally Posted by Ackbeet
But isn't it true that if
$-\dfrac{1}{x}=t+A,$ then
$\dfrac{1}{x}=-t-A,$ and hence
$x=\dfrac{1}{-t-A}?$ So, I think that's all right.
I think the mistake might be in applying the initial conditions. Are you told that when $u=0,$ it's also true that $t=0?$ I don't see that in the original problem statement. So, you can only say that
$s=\dfrac{1}{-t-A}.$
You've parametrized the equation in terms of $t$ but there's no guarantee that that parametrization has its origin just anywhere you want it, right? I would go through and re-apply the initial conditions after your integrations, and just leave $t$ in there and see what happens.
Danny and Ackbeet,
The problem statement is all that was given; however, examples in the lecture notes are based on the idea that we parametrise the initial conditions so that they correspond to t=0. So I am assuming this for the above problem, i.e.,
A characteristic passes through $\Gamma$ at point $P(x=s,\; y=1-s,\; u=0)$ at $t=0$.
This is the only way we have been taught. I spotted an error where s should be equal to $\frac{1}{t+\frac{1}{x}}$, i.e., the minus is changed to a plus. This still doesn't solve the problem....
5. So your final equation there is
$\displaystyle y=\frac{u^{2}}{2}+1-\left[\frac{1}{u+\frac{1}{x}}\right],$ incorporating the sign correction.
Evaluating at $u=0, x=s$ yields
$\displaystyle y=1-\left[\frac{1}{\frac{1}{s}}\right]=1-s,$
as required. Which initial condition is not being satisfied here?
6. Originally Posted by Ackbeet
So your final equation there is
$\displaystyle y=\frac{u^{2}}{2}+1-\left[\frac{1}{u+\frac{1}{x}}\right],$ incorporating the sign correction.
Evaluating at $u=0, x=s$ yields
$\displaystyle y=1-\left[\frac{1}{\frac{1}{s}}\right]=1-s,$
as required. Which initial condition is not being satisfied here?
mmmm... that makes perfect sense, but I'm going the other way, i.e., I want to see if u goes to 0 when I sub x=s and y=1-s into the general solution.
After some manipulation I get
$-s=\frac{u^{2}}{2}-\frac{s}{us+1}.$ I can't see how this gets u=0. Is this to do with the general solution being defined implicitly?
Thanks
7. Your equation is correct. You can keep going, you know. Get a common denominator on the RHS, multiply through by that common denominator, get all the $u$'s over to one side, etc. I get the following:
$u^{3}s+u^{2}+2us^{2}=0,$ which implies either that $u=0,$ or that
$u=\dfrac{-1\pm\sqrt{1-8s^{3}}}{2s}.$
Does $u$ have to be real for all $s?$ If so, then you can definitely rule out both of these solutions, and you're forced to conclude that $u=0.$ Otherwise, I'm not sure you can.
8. Originally Posted by Ackbeet
Your equation is correct. You can keep going, you know. Get a common denominator on the RHS, multiply through by that common denominator, get all the $u$'s over to one side, etc. I get the following:
$u^{3}s+u^{2}+2us^{2}=0,$ which implies either that $u=0,$ or that
$u=\dfrac{-1\pm\sqrt{1-8s^{3}}}{2s}.$
Does $u$ have to be real for all $s?$ If so, then you can definitely rule out both of these solutions, and you're forced to conclude that $u=0.$ Otherwise, I'm not sure you can.
I got that expression but I didn't 'see' how to go on further. I need to remind myself that solutions for u can appear quadratically. To answer your question, it doesn't say whether u has to be real for all s.
9. Hmm. Well then, there's no a priori reason to rule out the nonzero solutions. Incidentally, for those nonzero solutions, you'll get real $u$ if and only if $s \le 1/2$. I don't know how much more can be said on the matter.
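For readers following the algebra, the cubic from post 7 can be checked symbolically (a SymPy sketch, not part of the original thread; variable names are illustrative):

```python
# Roots of u**3*s + u**2 + 2*u*s**2 = 0, as discussed in posts 7-9.
import sympy as sp

u, s = sp.symbols('u s')

roots = sp.solve(u**3 * s + u**2 + 2 * u * s**2, u)
print(roots)
# Up to ordering and simplification: 0 and (-1 +/- sqrt(1 - 8*s**3)) / (2*s);
# the nonzero roots are real only when 1 - 8*s**3 >= 0, i.e. s <= 1/2.
```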
https://www.wind-energ-sci.net/4/479/2019/
Wind Energy Science: the interactive open-access journal of the European Academy of Wind Energy
Wind Energ. Sci., 4, 479–513, 2019
https://doi.org/10.5194/wes-4-479-2019
Research article | 10 Sep 2019
# Sensitivity analysis of the effect of wind characteristics and turbine properties on wind turbine loads
Amy N. Robertson, Kelsey Shaler, Latha Sethuraman, and Jason Jonkman
• National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, CO 80401, USA
Correspondence: Amy N. Robertson (amy.robertson@nrel.gov)
Abstract
Proper wind turbine design relies on the ability to accurately predict ultimate and fatigue loads of turbines. The load analysis process requires precise knowledge of the expected wind-inflow conditions as well as turbine structural and aerodynamic properties. However, uncertainty in most parameters is inevitable. It is therefore important to understand the impact such uncertainties have on the resulting loads. The goal of this work is to assess which input parameters have the greatest influence on turbine power, fatigue loads, and ultimate loads during normal turbine operation. An elementary effects sensitivity analysis is performed to identify the most sensitive parameters. Separate case studies are performed on (1) wind-inflow conditions and (2) turbine structural and aerodynamic properties, both cases using the National Renewable Energy Laboratory 5 MW baseline wind turbine. The Veers model was used to generate synthetic International Electrotechnical Commission (IEC) Kaimal turbulence spectrum inflow. The focus is on individual parameter sensitivity, though interactions between parameters are considered.
The results of this work show that for wind-inflow conditions, turbulence in the primary wind direction and shear are the most sensitive parameters for turbine loads, which is expected. Secondary parameters of importance are identified as veer, u-direction integral length, and u components of the IEC coherence model, as well as the exponent. For the turbine properties, the most sensitive parameters are yaw misalignment and outboard lift coefficient distribution; secondary parameters of importance are inboard lift distribution, blade-twist distribution, and blade mass imbalance. This information can be used to help establish uncertainty bars around the predictions of engineering models during validation efforts, and provide insight to probabilistic design methods and site-suitability analyses.
1 Introduction
Wind turbines are designed using the International Electrotechnical Commission (IEC) 61400-1 standard (IEC, 2005), which prescribes a set of simulations to ascertain the ultimate and fatigue loads that the turbine could encounter under a variety of environmental and operational conditions. The standard applies safety margins to account for the uncertainty in the process, which comes from the procedure used to calculate the loads (involving only a small fraction of the entire lifetime), but also from uncertainty in the properties of the system, variations in the conditions the turbine will encounter from the prescribed values, and modeling uncertainty. As manufacturers move to develop more advanced wind technology, optimize designs further, and reduce the cost of wind turbines, it is important to better understand how uncertainties impact modeling predictions and reduce the uncertainties where possible. Knowledge of where the uncertainties stem from can lead to a better understanding of the cost impacts and design needs of different sites and different turbines.
This paper provides a better understanding of the uncertainty in the ultimate and extreme structural loads and power in a wind turbine. This is done by parameterizing the uncertainty sources, prescribing a procedure to estimate the load sensitivity to each parameter, and identifying which parameters have the largest sensitivities for a conventional utility-scale wind turbine under normal operation. An elementary effects (EE) methodology was employed for estimating the sensitivity of the parameters. This approach was chosen because it provides a reasonable estimate of sensitivity, but with significantly fewer computational requirements compared to calculating the Sobol sensitivity, and does not require increasing the uncertainty in the result through the use of a reduced-order model. Some modifications were needed to the standard EE approach to properly compare loads across different wind speed bins.
This work is a first step in understanding potential design process modifications to move toward a more probabilistic approach or to inform site-suitability analyses. The results of this work can be used to (1) rank the sensitivities of different parameters, (2) help establish uncertainty bars around the predictions of engineering models during validation efforts, and (3) provide insight to probabilistic design methods and site-suitability analyses.
2 Analysis approach
## 2.1 Overview
To identify the most influential sources of uncertainty in the calculation of the structural loads for utility-scale wind turbines, a sensitivity analysis methodology based on EE is employed. The focus is on the sensitivity of the input parameters of wind turbine simulations (used to calculate the loads), not on the modeling software itself, which creates uncertainty based on whether the approach used accurately represents the physics of the wind loading and structural response. The procedure followed is summarized in the following subsections. The caveats of the sensitivity analysis approach employed are given as follows:
• Only the National Renewable Energy Laboratory (NREL) 5 MW reference turbine is used to assess sensitivity (the resulting identification of most sensitive parameters may depend on the turbine).
• Only normal operation under turbulence is considered (gusts, start-ups, shutdowns, and parked or idling events are not considered).
• Minimum and maximum values of the input parameter uncertainty ranges are examined in the analysis (no joint probability density function is considered).
• With the exception of wind speed, each parameter is examined independently across the full range of variation and is not conditioned based on parameters other than wind speed.
## 2.2 Wind turbine model and tools
The sensitivity of the turbine load response to each input parameter is assessed through the use of a simulation model. The NREL 5 MW reference turbine (Jonkman et al., 2009) was used in this study as a representative turbine. This is a three-bladed upwind horizontal-axis turbine with a variable-speed collective pitch controller; it has a hub height of 90 m and a rotor diameter of 126 m. Though not covered in this work, it would also be useful to examine how the sensitivity of the turbine loads to the parameters is affected by the size, type, and control of the considered wind turbine.
The sensitivity of loads to input parameter variation could be influenced by the wind speed and associated wind turbine controller response. Therefore, the EE analysis was performed at three different wind speeds corresponding to mean hub height wind speeds of 8, 12, and 18 m s−1, representing below-, near-, and above-rated wind speeds, respectively. Turbulent wind conditions were generated at each wind speed using TurbSim (Jonkman, 2009), employing an IEC Kaimal turbulence spectrum with exponential spatial coherence. Multiple turbulence seeds were used for each input parameter variation to ensure the variation from input parameter changes is distinguishable from the variation in the selected turbulence seeds. The number of seeds was determined through a convergence study for each of the parameter sets. A 25×25 point square grid of three-component wind vector points that encompasses the turbine rotor plane was used.
Figure 1 Overview of the parametric uncertainty in a wind turbine load analysis. Includes wind-inflow conditions (subset shown in blue), turbine aeroelastic properties (subset shown in black), and the associated load quantities of interest (QoIs) (subset shown in red).
## 2.3 Case studies
Input parameters were identified that could significantly influence the loading of a utility-scale wind turbine. These parameters were organized into two main categories (or case studies): the ambient wind-inflow conditions that will generate the aerodynamic loading on the wind turbine and the aeroelastic properties of the structure that will determine how the wind turbine will react to that loading (see Fig. 1). Within these two categories, a vast number of uncertainty sources can be identified, and Abdallah (2015) provides an exhaustive list of the properties. For this study, the authors selected those parameters believed to have the largest effect for normal operation for a conventional utility-scale wind turbine, which are categorized into the labels shown in Fig. 1.
To understand the sensitivity of a given parameter, a range over which that parameter may vary needed to be defined. For the wind conditions, a literature search was done to identify the reported range for each of the parameters across different potential installation sites within the three wind speed bins. For the aeroelastic properties, the parameters are varied based on an assessed level of potential uncertainty associated with each parameter.
## 2.4 Quantities of interest
To capture the variability in turbine response that results from parameter variation, several QoIs were identified. These QoIs are summarized in Table 1 and include the blade, drivetrain, and tower loads; blade-tip displacement; and turbine power. Ultimate and fatigue loads were considered for all load QoIs, whereas only ultimate values were considered for blade-tip displacements. The ultimate loads were estimated using the average of the global absolute maximums across all turbulence seeds for a given set of parameter values. The fatigue loads were estimated using aggregate short-term damage-equivalent loads (DELs) of the QoI response across all seeds for a given set of parameter values. For the bending moments, the ultimate loads were calculated as the largest vector sum of the first two components listed, rather than considering each individually. The QoI sensitivity of each input parameter is examined using the procedure summarized in the next section.
Table 1 Quantities of interest examined in the sensitivity analyses.
3 Sensitivity analysis procedure
## 3.1 Sensitivity analysis approaches
There are many different approaches to assess the sensitivity of QoIs for a given input parameter. The best choice depends on the number of considered input parameters, simulation computation time, and availability of parameter distributions. Sensitivity is commonly assessed through the Sobol sensitivity (Saltelli et al., 2008), which decomposes the variance of the response into fractions that can be attributed to different input parameters and parameter interactions. The drawback of this method is the large computational expense, which requires a Monte Carlo analysis to calculate the sensitivity. To decrease the computational expense, one approach is to use a metamodel, which is a lower-order surrogate model trained on a subset of simulations to capture the trends of the full-order more computationally expensive model. This approach has been used in the wind energy field (Nelson et al., 2003; Rinker, 2016; Sutherland, 2002; Ziegler and Muskulus, 2016) but was deemed unsuitable for this work given the wind turbine model complexity and associated QoIs. Specifically, it may be difficult for a metamodel to capture the system nonlinearities and interaction of the controller, especially ultimate loads at the tails of the load distribution, limiting metamodel accuracy. Another approach to reduce computational expense is to use a design of experiments approach to identify the fewest simulations needed to capture the variance in the parameters and associated interactions, e.g., Latin hypercube sampling (Matthaus et al., 2017; Saranyasoontorn and Manuel, 2006, 2008) and fractional factorial analysis (Downey, 2006). These methods were considered for this application but such approaches are still too computationally expensive given the large number of considered input parameters. Instead, a screening approach was determined to be the best approach. A screening method provides a sensitivity measure that is not a direct estimate of the variance, but rather supplies a ranking of those parameters with the most influence. One of the most commonly used screening approaches is EE analysis (Campolongo et al., 2007, 2011; Francos et al., 2003; Gan et al., 2014; Huang and Pierson, 2012; Jansen, 1999; Martin et al., 2016; Saint-Geours and Lilburne, 2010; Sohier et al., 2015). Once the EE analysis identifies the input parameters that are most influential to the QoIs, a more targeted analysis can be performed using one of the other sensitivity analyses discussed above.
## 3.2 Overview of elementary effects
EE at its core is a simple methodology for screening parameters. It is based on the one-at-a-time approach in which each input parameter of interest is varied individually while holding all other parameters fixed. A derivative is then calculated based on the level of change in the QoI to the change in the input parameter using first-order finite differencing. Approaches such as these are called local sensitivity approaches because they calculate the influence of a single parameter without considering interaction with other parameters. However, the EE method extends this process by examining the change in response for a given input parameter at different locations (points) in the input parameter hyperspace. In other words, only one parameter is varied at a time, but this variation is performed multiple times using different values for the other input parameters, as shown in Fig. 2. The derivatives calculated from the different points are considered to assess an overall level of sensitivity. Thus, the EE method considers the interactions between different parameters and is therefore considered a global sensitivity analysis method.
Figure 2 Radial EE approach representation for three input parameters. Blue circles indicate starting points in the parameter hyperspace. Red points indicate variation in one parameter at a time. Each variation is performed for 10 % of the range over which the parameter may vary, either in the positive or negative direction.
Each wind turbine QoI, Y, is represented as a function of different characteristics of the wind or model property input parameters, U, as follows:
$$Y=f\left(u_{1},\dots ,u_{i},\dots ,u_{I}\right), \tag{1}$$
where I is the total number of input parameters. In the general EE approach, all input parameters are normalized between 0 (minimum value) and 1 (maximum value). For a given sampling of U, the EE value of the input parameter, i, is found by varying only that parameter by a normalized amount, Δ:
$$\text{EE}_{i}=\frac{f\left(\mathbf{U}+\mathbf{x}_{k}\right)-f\left(\mathbf{U}\right)}{\Delta}, \tag{2}$$
where
$$\mathbf{x}_{k}=\begin{cases}0 & \text{for } k\ne i,\\ \Delta & \text{for } k=i.\end{cases} \tag{3}$$
Because of the normalization of U, the EE value (EEi) can be thought of as the local partial derivative of Y with respect to an input (ui), scaled by the range of that input. Thus, the EE value has the same unit as the output QoI. The EE value is calculated for R starting points in the input parameter hyperspace, creating a set of R different calculations of EE value for each input parameter.
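As a concrete illustration of Eqs. (2)–(3), a single EE value is just a scaled finite difference. The sketch below is not the authors' implementation; the model function and its dimension are placeholders.

```python
import numpy as np

def elementary_effect(model, U, i, delta=0.1):
    """One EE value (Eqs. 2-3): perturb only the i-th normalized input of U by delta.

    `model` maps a vector of inputs normalized to [0, 1] onto a scalar QoI.
    """
    U_pert = U.copy()
    U_pert[i] += delta                      # x_k is delta for k == i, zero otherwise
    return (model(U_pert) - model(U)) / delta

# Toy usage with a placeholder model of I = 3 normalized inputs.
toy_model = lambda u: 2.0 * u[0] + u[1] ** 2 + 0.1 * u[0] * u[2]
U0 = np.array([0.3, 0.5, 0.2])
print([elementary_effect(toy_model, U0, i) for i in range(3)])
```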
The basic approach for performing an EE analysis has been modified over the years to ensure that the input hyperspace is being adequately sampled and to eliminate issues that might confound the sensitivity assessment. In this work, the following modifications to the standard approach were made (a small sketch of the resulting sampling scheme follows this list):
1. A radial approach, where the EE values were calculated by varying each parameter one at a time from a starting point (see Fig. 2), was used rather than the traditional trajectories for varying all of the parameters, which has been shown to improve the efficiency of the method (Campolongo et al., 2011).
2. Sobol numbers were used to determine the initial points at which the derivatives will be calculated (blue circles in Fig. 2), which ensures a wide sampling of the input hyperspace (Campolongo et al., 2011; Robertson et al., 2018).
3. A set delta value equal to 10 % of the input parameter range ($\Delta =\pm 0.1$ normalized or $\Delta_{ib}=\pm 0.1\,u_{ib,\mathrm{range}}$ dimensional) was used to ensure the calculation of the finite difference occurred over an appropriate range to better meet the assumption of linearity.
4. A modified EE formula – different for ultimate and fatigue loads – was used to examine the sensitivity of the parameters across multiple wind speed bins. EE modifications are detailed in Sect. 3.3.1 and 3.3.2.
5. The most sensitive inputs were identified via thresholding of EE values rather than the classical method involving mean and standard deviation of EE values, as detailed in Sect. 3.4 and Appendix A.
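A minimal sketch of the radial sampling scheme described in items 1–3, assuming SciPy's quasi-Monte Carlo module is available for the Sobol points (an illustration, not the code used in the study):

```python
import numpy as np
from scipy.stats import qmc  # Sobol sequence generator (SciPy >= 1.7 assumed)

def radial_ee_design(n_inputs, n_start_points, delta=0.1, seed=0):
    """Sobol starting points plus one-at-a-time perturbations of 10 % of the normalized range."""
    sobol = qmc.Sobol(d=n_inputs, scramble=True, seed=seed)
    starts = sobol.random(n_start_points)   # R points in the unit hypercube
                                            # (SciPy warns if R is not a power of two; harmless here)
    runs = []
    for U in starts:
        runs.append(U.copy())               # baseline run at the starting point
        for i in range(n_inputs):
            U_pert = U.copy()
            # Step +delta, or -delta if that would leave the normalized [0, 1] range.
            U_pert[i] += delta if U_pert[i] + delta <= 1.0 else -delta
            runs.append(U_pert)
    return np.array(starts), np.array(runs)

starts, runs = radial_ee_design(n_inputs=18, n_start_points=30)
print(runs.shape)  # (30 * (18 + 1), 18) = (570, 18) runs per seed and wind speed bin
```

Looping this design over 30 seeds and 3 wind speed bins gives 570 × 30 × 3 = 51 300 model runs, matching the count reported in Sect. 4.1.3.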
## 3.3 Elementary effects formulas
This section provides the detailed formulas used to calculate the EE values for the ultimate and fatigue loads.
When considering the ultimate loads, only the single highest ultimate load is of concern, regardless of the wind speed bin. Therefore, the standard EE formula is modified so that the sensitivity of the parameters can be examined consistently across different wind speed bins. This is accomplished by keeping U and Δ dimensional (i.e., not making U dimensionless between 0 and 1), multiplying the derivative – approximated with a finite difference – by the total range of the input for a given wind speed bin, and adding the nominal value of the QoI associated with IEC turbine class I and turbulence category B (IEC Class IB) for the given wind speed bin. The EE of input parameter ${U}_{ib}^{r}$ for a certain QoI, Y, at starting point r in wind speed bin b is then given by
$$\text{EE}_{ib}^{r}=\left|\frac{Y\left(\mathbf{U}^{r}+\mathbf{x}_{k}\right)-Y\left(\mathbf{U}^{r}\right)}{\Delta_{ib}}\,u_{ib,\mathrm{range}}\right|+\overline{Y}_{b}, \tag{4}$$
where $\overline{Y}_{b}$ represents the IEC Class IB nominal value for the given wind speed bin. The ultimate load, $Y(\mathbf{U})$, is defined as the mean of the absolute maximum of the temporal response load in bin $b$ across $S$ seeds for a certain input parameter $i$ and starting point $r$:
$$Y\left(\mathbf{U}^{r}\right)=\frac{1}{S}\sum_{s=1}^{S}\text{MAX}\left(\left|Y\left(\mathbf{U}^{r}\right)\right|\right). \tag{5}$$
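A sketch of Eqs. (4)–(5) for a single input, wind speed bin, and starting point, assuming the response time series for each seed are already available (the data below are placeholders):

```python
import numpy as np

def ultimate_load(series_per_seed):
    """Eq. (5): mean over seeds of the absolute maximum of the response time series."""
    return np.mean([np.max(np.abs(ts)) for ts in series_per_seed])

def ee_ultimate(Y_pert, Y_base, delta_ib, u_range_ib, Y_nominal_b):
    """Eq. (4): dimensional ultimate-load EE, offset by the IEC Class IB nominal value."""
    return abs((Y_pert - Y_base) / delta_ib * u_range_ib) + Y_nominal_b

# Placeholder 'load' series for three seeds, baseline and perturbed.
rng = np.random.default_rng(0)
base = [rng.normal(size=1000) for _ in range(3)]
pert = [1.1 * rng.normal(size=1000) for _ in range(3)]
print(ee_ultimate(ultimate_load(pert), ultimate_load(base),
                  delta_ib=0.1 * 2.0, u_range_ib=2.0, Y_nominal_b=5.0))
```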
To compute the fatigue loads, the same basic formulation is used as for the ultimate loads but the DEL of the temporal response is considered in place of the mean of the absolute maximums:
$$\text{EE}_{ib}^{r}=P\left(v_{b}\right)\left|\frac{\text{DEL}\left(\mathbf{U}^{r}+\mathbf{x}_{k}\right)-\text{DEL}\left(\mathbf{U}^{r}\right)}{\Delta_{ib}}\,u_{ib,\mathrm{range}}\right|, \tag{6}$$
where DEL(Ur) is the aggregate of the short-term DEL of a given QoI across all seeds computed using the NREL post-processing tool, MLife (Hayman and Buhl, 2012). DELs are computed without the Goodman correction and with load ranges about a zero fixed mean. The fatigue EE value is scaled by P(vb), which is the Rayleigh probability at the wind speed vb (assuming IEC Class IB turbulence) associated with bin b to compare the fatigue loads consistently across wind speed bins.
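The fatigue weighting of Eq. (6) can be sketched as below. Rainflow counting and the MLife DEL calculation are treated as given (the DEL values are inputs here); the Rayleigh bin probability uses the IEC Class I average wind speed of 10 m/s and a 2 m/s bin width, which are assumptions made for illustration.

```python
import numpy as np

def rayleigh_bin_probability(v, v_ave=10.0, dv=2.0):
    """Probability of the mean wind speed falling in a bin of width dv centred on v.

    Rayleigh CDF: P(V <= v) = 1 - exp(-pi * (v / (2 * v_ave))**2).
    """
    cdf = lambda x: 1.0 - np.exp(-np.pi * (x / (2.0 * v_ave)) ** 2)
    return cdf(v + dv / 2.0) - cdf(v - dv / 2.0)

def ee_fatigue(del_pert, del_base, delta_ib, u_range_ib, v_b):
    """Eq. (6): Rayleigh-weighted dimensional EE for fatigue loads (DELs computed elsewhere)."""
    return rayleigh_bin_probability(v_b) * abs((del_pert - del_base) / delta_ib * u_range_ib)

print(ee_fatigue(del_pert=5.2e3, del_base=5.0e3, delta_ib=0.2, u_range_ib=2.0, v_b=12.0))
```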
Table 2 Wind-inflow condition parameters (18 total).
## 3.4 Identification of most sensitive inputs
The EE value is a surrogate for a sensitivity level. Therefore, a higher EE value for a given input parameter indicates more sensitivity. Here, the most sensitive parameters are identified by defining a threshold over which an individual EE value would be considered significant, indicating the sensitivity of the associated parameter. This approach differs from the classical method of determining parameter sensitivity, as discussed in Appendix A. The threshold is set individually for each QoI. For the wind parameter study, the threshold is defined as $\overline{\text{EE}^{r}}+2\sigma$, where $\overline{\text{EE}^{r}}$ is the mean of all EE values across all starting points R, inputs I, and wind speed bins B for each QoI and σ is the standard deviation of these EE values. For the turbine parameter study, the results are stratified based on wind speed bin. Therefore, the threshold for this study is modified to $\overline{\text{EE}^{r}}+1.7\sigma$. Additionally, the ultimate load thresholds for the turbine parameter study are computed using only near- and above-rated results because of the separation of EE values between the below-, near-, and above-rated wind speed bins. For both studies, fatigue load EE values are not clearly separated by wind speed; therefore, all wind speeds are used to compute the fatigue load parameter thresholds.
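A sketch of this thresholding and outlier tally for one QoI (the EE array shape and the random placeholder values are purely illustrative):

```python
import numpy as np

def significant_parameter_tally(ee_values, n_sigma=2.0):
    """Per-QoI tally of EE values exceeding mean(EE) + n_sigma * std(EE).

    `ee_values` has shape (n_inputs, n_starting_points, n_wind_bins) for a single QoI.
    """
    threshold = ee_values.mean() + n_sigma * ee_values.std()
    return (ee_values > threshold).sum(axis=(1, 2)), threshold

# Placeholder EE values: 18 inputs, 30 starting points, 3 wind speed bins.
rng = np.random.default_rng(1)
ee = rng.lognormal(mean=0.0, sigma=1.0, size=(18, 30, 3))
counts, thr = significant_parameter_tally(ee, n_sigma=2.0)  # n_sigma=1.7 for the turbine study
print(thr, counts)
```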
4 Results
Two separate case studies were performed to assess the sensitivity of input parameters to the resulting ultimate and fatigue loads of the NREL 5 MW wind turbine. The categories of input parameters analyzed were the wind-inflow conditions and the aeroelastic turbine properties. In both of the case studies, loads were analyzed for three wind speed bins, using mean wind speed bins of 8, 12, and 18 m s−1, representing below-, near-, and above-rated wind speed bins, respectively. Turbulent wind conditions were generated using IEC Kaimal turbulence spectra with exponential spatial coherence functions. For the turbine parameter study, turbulence was based on IEC Class IB turbulence. Correlations and joint distributions of the parameters were not considered because developing this relationship for so many parameters would be difficult or impossible. In addition, the correlation could be very different for different wind sites. The impact of not considering the correlation was limited by choosing parameters that were fairly independent of one another when possible, and by binning the results by wind speed.
## 4.1 Wind-inflow characteristics
Many researchers have examined the influence of wind characteristics on turbine load response, considering differing wind parameters and turbulence models, and using different methods to assess their sensitivity. The most common parameter considered is the influence of turbulence intensity variability, which past work has shown to have significant variability and a large impact on the turbine response (Dimitrov et al., 2015; Downey, 2006; Eggers et al., 2003; Ernst and Seume, 2012; Holtslag et al., 2016; Kelly et al., 2014; Matthaus et al., 2017; Moriarty et al., 2002; Rinker, 2016; Saranyasoontorn and Manuel, 2008; Sathe et al., 2012; Sutherland, 2002; Wagner et al., 2010; Walter et al., 2009). The shear exponent, or wind profile, is the next most common parameter examined, concluded to have similar or slightly less importance to the turbulence intensity (Bulaevskaya et al., 2015; Dimitrov et al., 2015; Downey, 2006; Eggers et al., 2006; Ernst and Seume, 2012; Kelly et al., 2014; Matthaus et al., 2017; Sathe et al., 2012). Other parameters investigated include the turbulence length scale, standard deviation of different directional wind components, Richardson number, spatial coherence, component correlation, and veer. Mixed conclusions are drawn on the importance of these secondary parameters, which are influenced by the range of variability considered (based on the conditions examined), the turbine control system, and the turbine size and hub height under consideration. The effects of considering the secondary wind parameters are also mixed, sometimes increasing and sometimes decreasing the loads in the turbine; however, most agree that the use of site-specific measurements of the wind parameters will lead to a more accurate assessment of the turbine loads, resulting in designs that are either further optimized or lower risk.
The focus of this case study is to obtain a thorough assessment of which wind characteristics influence wind turbine structural loads when considering the variability of these parameters over a wide sampling of site conditions.
### 4.1.1 Parameters
A total of 18 input parameters were chosen to represent the wind-inflow conditions, considering the mean wind profile, velocity spectrum, spatial coherence, and component correlation, as summarized in Table 2. The parameters used were identified considering a Veers model for describing and generating the wind characteristics because it provides a quantitative description with a known and limited set of inputs. Each of these parameters is described in the following subsections. Note that the Veers model differs from the other commonly used Mann turbulence model.1 Regardless, the Veers model is used here because it is more tailorable than the Mann model, i.e., there are more input parameters that can be varied.
### Mean wind profile
A standard power-law shear model is used to describe the vertical wind speed profile and a linear wind direction veer model is used. The sensitivity of these characteristics is captured through variation in the exponent of the shear, α, and the total veer across the turbine, β (centered around the hub, following the right-hand rule about the vertical axis of the turbine). The IEC 61400-1 standard (IEC, 2005) uses α=0.2 and β=0 under normal turbulence.
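One plausible reading of this shear-and-veer model, using the NREL 5 MW hub height (90 m) and rotor diameter (126 m) from Sect. 2.2 (a sketch; the function and its arguments are illustrative):

```python
def wind_profile(z, v_hub, z_hub=90.0, alpha=0.2, beta_total=0.0, rotor_diameter=126.0):
    """Power-law shear and linear veer across the rotor, centred on the hub.

    Returns the wind speed (m/s) and direction offset (deg) at height z; the direction
    changes by beta_total degrees from the bottom of the rotor to the top.
    """
    speed = v_hub * (z / z_hub) ** alpha
    direction = beta_total * (z - z_hub) / rotor_diameter
    return speed, direction

print(wind_profile(z=153.0, v_hub=8.0, alpha=0.2, beta_total=10.0))  # rotor top
print(wind_profile(z=27.0,  v_hub=8.0, alpha=0.2, beta_total=10.0))  # rotor bottom
```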
### Velocity spectrum
The Veers model uses a Kaimal spectrum to represent the turbulence. The Kaimal spectrum is defined as (IEC, 2005):
$$\frac{fS_{q}\left(f\right)}{\sigma_{q}^{2}}=\frac{4fL_{q}/V_{\mathrm{hub}}}{\left(1+6fL_{q}/V_{\mathrm{hub}}\right)^{5/3}}, \tag{7}$$
where f is the frequency, q is the index of the velocity component direction (u, v, w), Sq is the single-sided velocity spectrum, Vhub is the mean wind speed at hub height, σq is the velocity component standard deviation, and Lq is the velocity component integral scale parameter. The IEC 61400-1 standard (IEC, 2005) uses a wind-speed-dependent standard deviation, i.e., $\sigma_{u}=0.14\times(0.75\,V_{\mathrm{hub}}+5.6~\mathrm{m\,s^{-1}})$, and a set scaling between the direction components of the standard deviation and scale parameters, i.e., $\sigma_{v}=0.8\,\sigma_{u}$; $\sigma_{w}=0.5\,\sigma_{u}$; $L_{u}=8.1\times$ (42 m) = 340.2 m; $L_{v}=2.7\times$ (42 m) = 113.4 m; and $L_{w}=0.66\times$ (42 m) = 27.72 m. However, in this study each parameter in velocity component direction (u, v, w) is varied independently. An inverse Fourier transform is applied to the Kaimal spectrum and random phases derived from the turbulence seed to determine a turbulent time series for each of the wind components independently.
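Evaluating Eq. (7) with the IEC defaults quoted above gives the u-component spectrum directly; a short sketch (parameter values are taken from the text, the frequency grid is arbitrary):

```python
import numpy as np

def kaimal_spectrum(f, v_hub, sigma_q, L_q):
    """Single-sided Kaimal velocity spectrum S_q(f) from Eq. (7)."""
    return sigma_q**2 * (4.0 * L_q / v_hub) / (1.0 + 6.0 * f * L_q / v_hub) ** (5.0 / 3.0)

# IEC category B defaults for the u component at V_hub = 12 m/s.
v_hub = 12.0
sigma_u = 0.14 * (0.75 * v_hub + 5.6)   # wind-speed-dependent standard deviation
L_u = 8.1 * 42.0                        # 340.2 m integral scale
f = np.logspace(-3, 0, 5)               # a few frequencies in Hz
print(kaimal_spectrum(f, v_hub, sigma_u, L_u))
```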
### Spatial coherence model
The point-to-point spatial coherence (Coh) quantifies the frequency-dependent cross-correlation of a single turbulence component at different transverse points in the wind inflow grid. The general coherence model used in TurbSim is defined as
$$\text{Coh}_{q,f}=\exp\left(-a_{q}\left(\frac{d}{z_{\mathrm{m}}}\right)^{\gamma}\sqrt{\left(\frac{fd}{V_{\mathrm{hub}}}\right)^{2}+\left(b_{q}d\right)^{2}}\right), \tag{8}$$
where d is the distance between points i and j, zm is the mean height of the two points (IEC, 2005), and Vhub is the mean wind speed at hub height. The variables aq and bq are the input coherence decrement and offset parameter, respectively. Note that the use of Vhub in the general coherence model is a modification to the standard TurbSim method. The model is based on the IEC coherence model with the added factor $(d/z_{\mathrm{m}})^{\gamma}$ – introduced by Solari (1987) – where γ can vary between 0 and 1. The IEC 61400-1 standard (IEC, 2005) does not use the $(d/z_{\mathrm{m}})^{\gamma}$ factor and uses $a_{u}=12$ and $b_{u}=0.12/L_{u}$. Spatial coherence is not defined in the standard (IEC, 2005) for the transverse wind components v and w.
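Eq. (8) translates directly into a small function; with $\gamma=0$, $a_u=12$, and $b_u=0.12/L_u$ it reduces to the IEC exponential coherence model (the numbers in the example call are illustrative):

```python
import numpy as np

def spatial_coherence(f, d, z_m, v_hub, a_q, b_q, gamma=0.0):
    """Point-to-point coherence of Eq. (8); gamma = 0 recovers the IEC form."""
    return np.exp(-a_q * (d / z_m) ** gamma
                  * np.sqrt((f * d / v_hub) ** 2 + (b_q * d) ** 2))

# IEC defaults for the u component: a_u = 12, b_u = 0.12 / L_u.
L_u, v_hub = 340.2, 12.0
print(spatial_coherence(f=0.1, d=30.0, z_m=90.0, v_hub=v_hub, a_q=12.0, b_q=0.12 / L_u))
```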
### Component correlation model
The component-to-component correlation (PC) quantifies the cross-correlation between directional turbulence components at a single point in space. For example, PCuw quantifies the correlation between the u- and w-turbulence components at a given point. TurbSim modifies the v- and w-component wind speeds by computing a linear combination of the time series of the three independent wind speed components to obtain the mean Reynolds stresses (PCuw, PCuv, and PCvw) at the hub. Note that because this calculation occurs in the time domain, the velocity spectra of the v and w components are somewhat affected by the enforced component correlations. The IEC 61400-1 standard (IEC, 2005) does not specify Reynolds stresses.
Table 3 Included wind-inflow parameter ranges separated by wind speed bin.
* This value was changed to −0.75 due to simulation issues.
Figure 3 Identification of significant parameters using ultimate (a) and fatigue (b) loads. Significant events are defined by the number of outliers identified across each of the QoIs for all wind speed bins, input parameters, and simulation points.
### 4.1.2 Parameter ranges
To assess the sensitivity of each of the parameters on the load response, a range over which the parameters could vary was defined. The variation level was assessed through a literature search seeking the range over which the parameters could realistically vary for different installation sites around the world (Berg et al., 2013; Bulaevskaya et al., 2015; Clifton, 2014; Dimitrov, 2018; Dimitrov et al., 2015; Eggers et al., 2003; Ernst et al., 2012; Holtslag et al., 2016; Jonkman, 2009; Kalverla et al., 2017; Kelley, 2011; Lindelöw-Marsden, 2009; Matthaus et al., 2017; Moriarty et al., 2002; Moroz, 2017; Nelson et al., 2003; Park et al., 2015; Rinker, 2016; Saint-Geours and Lilburne, 2010; Saranyasontoorn et al., 2004; Saranyasontoorn and Manuel, 2008; Sathe et al., 2012; Solari, 1987; Sutherland, 2002; Teunissen, 1970; Wagner et al., 2010; Walter et al., 2009; Wharton et al., 2015; Ziegler et al., 2016). When possible, parameter ranges were set based on wind speed bins. If no information on wind speed dependence was found, the same values were used in all bins. The ranges, summarized in Table 3, were taken from multiple sources (references cited below the values), based on measurements across a variety of different locations and conditions. For comparison, the nominal value prescribed by IEC for category B turbulence is specified in the “Nom.” row.
To simplify the screening of the most influential parameters, all parameters were considered independent of one another. This was done because of the difficulty of considering correlations between a large number of parameters. Such correlations should be studied in future work once parameter importance has been established. Because each parameter was considered independently, except for the conditioning on wind speed bin, some nonphysical parameter combinations may arise. This was considered acceptable for the screening process.
### 4.1.3 Elementary effects
The EE value was calculated for each of the 18 input parameters (I) at 30 different starting points (R) in the input-parameter hyperspace. The number of points was determined through a convergence study on the average of the EE value. At each of the points examined, S different turbulent wind files (i.e., S independent time-domain realizations from S seeds) were run. Thirty seeds were needed based on a convergence study of the ultimate and fatigue load metrics for all QoIs. Based on these turbine parameters, the total number of simulations performed for the wind-inflow case study was $R\times(I+1)\times S\times B = 30\times 19\times 30\times 3 = 51\,300$, where B is the number of wind speed bins considered.
The EE values across all input parameters, input hyperspace points, and wind speed bins were examined for each of the QoIs for ultimate and fatigue loads. To identify the most sensitive parameters, a tally was made of the number of times an EE value exceeded the threshold for a given QoI. The resulting tallies are shown in Fig. 3, with the ultimate load tally on the left and the fatigue load tally on the right. As expected, based on the parameters of importance in IEC design standards, these plots show an overwhelming level of sensitivity of the u-direction turbulence standard deviation (σu) and also the vertical wind shear (α). However, focusing on the lower tally values in this plot highlights the secondary level of importance of veer (β), u-direction integral length (Lu), and components of the IEC coherence model (au and bu) as well as the exponent (γ).
Figure 4 Stacked histogram of the ultimate load EE values for each of the QoIs across all wind speed bins, input parameters, and simulation points. Black line represents the defined threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Table 4 Primary input parameters contributing to ultimate load sensitivity of each QoI. Values indicate the number of times the variable contributes to the sensitivity count.
Histograms of the EE values for each of the QoIs are plotted in Figs. 4 and 5 for the ultimate and fatigue load metrics (associated exceedance probability plots are shown in Appendix B, Figs. B1 and B2). Each plot contains all calculated EE values for a given QoI colored by wind speed bin. The threshold used to identify significant EE values is shown in each plot as a solid black line. All points above the threshold line indicate a significant event and are included in the outlier tally for each QoI. Note that although electric power is shown, it is not used in the outlier tally because its variation is strictly limited by the turbine controller rather than other wind parameters. Highlighted in these figures is that most of the outliers come from the below-rated wind speed bin.
Figure 5 Stacked histogram of the fatigue load EE values for each of the QoIs across all wind speed bins, input parameters, and simulation points. Black line represents the defined threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Table 5 Primary input parameters contributing to fatigue load sensitivity of each QoI. Values indicate the number of times the variable contributes to the sensitivity count.
Figure 6 Exceedance probability plot of ultimate (a) and fatigue (b) load EE values for blade-root bending moments. Each line represents a different input parameter and wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
To understand why the below-rated wind speed bin would be creating the most outliers, a more thorough examination was made for one of the QoIs. Exceedance probability plots of blade-root loads are shown in Fig. 6. Here, all input parameters are plotted independently of each other to compare the behavior between parameters. Each line represents a different input parameter with each point representing a different location in the hyperspace. These plots show how the shear and u-component standard deviation for the lower wind speed bin stand out compared to all other parameters; likewise, the u-component standard deviation stands out across different wind speed bins for the ultimate load. One of the reasons that the shear value shows such a large sensitivity in the lowest wind speed bin is the large range over which the parameter is varied. A smaller range is used for the near- and above-rated bins, resulting in less sensitivity to shear for those wind speeds. The impact of the range on the sensitivity of the parameter indicates that for sites with extreme conditions, such as an extreme shear, using appropriate parameter values in a load analysis can be important in accurately assessing the ultimate and fatigue loading on the turbine. The effect of shear could also be diminished by employing independent blade-pitch control, whereas the reference NREL 5 MW turbine controller used here employs collective blade-pitch control.
Histogram plots of blade-root bending moment EE values are shown in Figs. 7 and 8. In each figure, wind speed bins are displayed in different plots and EE value histograms showing the contribution from all input parameters are shown in each histogram. Ultimate load EE values are shown in Fig. 7 and fatigue load EE values are shown in Fig. 8. Highlighted in these plots is the large sensitivity of the shear parameter and, to a lesser extent, u-component standard deviation in the far extremes.
Figure 7 Histograms of ultimate load EE values for the blade-root bending moment. Each graph in the left column shows one wind speed bin and includes all input parameters. Right column is a zoomed-in view of the left.
Figure 8 Histograms of fatigue load EE values for the blade-root out-of-plane bending moment. Each graph in the left column shows one wind speed bin and includes all input parameters. Right column is a zoomed-in view of the left.
To summarize which parameters are important for which QoIs, the number of times each input parameter contributed to the significant event count for a given QoI was tallied. The top most-sensitive parameters are shown in Tables 4 and 5 for ultimate and fatigue loads, respectively. Overall, 46 % of the outliers for both ultimate and fatigue loads are due to u-direction turbulence standard deviation (σu) and 26 % for vertical shear (α); for all but two QoIs, these are the most sensitive parameters. The two exceptions are blade-root pitching moment and tower-base bending moment, which show u-direction turbulence standard deviation as the most important parameter, but show coherence properties and integral scale parameter as more important than shear. This is understandable because shear will have little effect on collective blade pitching and rotor thrust. The remaining parameters have far less significance, with only components of the IEC coherence model, au (5 %) and bu (8 %), having a value greater than 1 %. These results can be used in future sensitivity analysis work to focus on perturbation of specific input parameters based on desired turbine loads.
## 4.2 Aeroelastic turbine properties
The second case study focuses on which aeroelastic turbine parameters have the greatest influence on turbine ultimate and fatigue loads during normal turbine operation. These parameters are categorized into four main property categories: support structure, blade structure, blade aerodynamics, and controller.
Beyond the blade aerodynamic properties, other turbine properties also contribute to the uncertainty of the load response characteristics. Abdallah et al. (2015) provides a comprehensive assessment of the sources of uncertainty affecting the prediction of loads in a wind turbine. Researchers have not focused on these other parameters as significantly as the aerodynamic ones, but they could make a significant contribution to the uncertainty. Witcher (2017) examined uncertainty in properties such as the tower and blade mass/stiffness properties within the context of defining a probabilistic approach to designing wind turbines by examining distributions of the load from propagated input parameter uncertainties versus resistance distributions. Prediction of the reliability of the wind turbine has been studied through examination of the damping in the structure by Koukoura (2014) and a better understanding of the uncertainty in the properties of the drivetrain by Holierhoek et al. (2010). Limited information is available on what the actual ranges of uncertainty are for these different characteristics. For most studies, expert opinion is used to set a realistic bound. A better assessment of these bounds will be needed in future work to understand the relative importance of the physical parameters and to provide a more precise assessment of the uncertainty bounds in the load response of wind turbines.
### 4.2.1 Parameters
For the turbine aeroelastic properties, 39 input parameters (I) were identified covering support structure properties, blade structural properties, blade aerodynamic properties (both steady and unsteady characteristics), and controller properties. These parameters are summarized in Table 6 (acronyms are defined in the following subsections).
Table 6 Turbine aeroelastic parameters (39 total).
### 4.2.2 Parameter ranges
The level of variation was based on the perceived level of uncertainty in the parameter values. Some of these levels of uncertainty are proposed within the literature, but when no other information was available expert opinion was used. The source for the information is provided below the values in each table summarizing the parameter ranges. “Exp.” is used to identify where expert opinion was used. The uncertainty levels are largely percentage based, but in some instances an exact value was used. The following subsections define the ranges of the parameters introduced in Table 6. All parameters were considered independent of one another, as was done for the wind parameter sensitivity analysis.
### Support structure properties
For the support structure, nine parameters were varied and summarized in Table 6. These parameters included mass and center of mass (CM) of the tower and nacelle, tower and drivetrain stiffness factors, tower and drivetrain damping ratio, and shaft angle. To manipulate the tower structural response, the frequency of the corresponding tower mode shapes was changed by ±15 % of 0.32 Hz by uniformly scaling the associated stiffness. Although tower stiffness is specified as a factor by which mode shapes are scaled, the drivetrain stiffness is entered directly. Note that the mode shapes themselves (which are specified independently of the mass and stiffness in ElastoDyn) were not changed in this process. The tower mass was changed by varying the distributed tower mass density factor. The tower CM location was changed by varying the tower-base and -top density such that density increased at one end and decreased at the other (with a linear scaling variation in between) without changing the overall tower mass. The drivetrain damping term represents the combined effect of structural damping and drivetrain damping from active control not directly accounted for in the baseline controller of the NREL 5 MW turbine.
Table 7 Parameter value ranges of turbine support structure parameters.
Table 8 Parameter value ranges of turbine blade structure parameters.
The blade aerodynamic properties were represented using 18 parameters: 3 associated with the blade twist and chord distribution; 10 associated with the static aerodynamic component; and 5 associated with the unsteady aerodynamic properties. Blade twist and chord distributions were manipulated by specifying a change in the distributions along the blade. Three parameters were defined, associated with changing the chord at the blade tip and root and twist at the blade tip. For each of these parameters, the associated distribution along the blade was modified linearly such that there was zero change at the opposite end. The root twist was not changed because the blade-pitch angle uncertainties are considered in the controller parameter section.
For the steady aerodynamic component, the lift and drag versus angle-of-attack (AoA) curves were modified to examine the sensitivity of the resulting loads throughout the wind turbine. The turbine operated in normal operating conditions, and therefore only relevant regions of the curves were modified. The curves were modified by parameterization using an approach based on one introduced by Abdallah et al. (2015). The approach used here parameterized the Cl and Cd curves using five points; these points were perturbed and a spline was fit to the points. The points of interest are
• beginning of linear Cl region – determines the lower limit of the AoA range of interest and was kept constant (αlin,Cl,lin);
• Cd value at AoA = 0 (0, Cd,0);
• trailing edge separation (TES) point – AoA location at which Cl curve is no longer linear (αTES,Cl,TES);
• maximum (max) point – AoA location at which Cl reaches a maximum (αmax,Cl,max);
• separation reattachment (SR) point – AoA location at which slope of Cl curve is no longer negative (αSR,Cl,SR).
The selected points of interest are similar to those selected by Abdallah et al. (2015). A notable difference is the consideration of Cd,0 as opposed to Cd,90, which is the Cd value at $\alpha=90^{\circ}$; Cd,0 was chosen for this study because of the focus on the normal operational region, as opposed to the extreme conditions considered by Abdallah et al. (2015). The three variable points of interest were perturbed by a percentage of the default value. The perturbations and correlations are depicted in Fig. 9 and parameter ranges are detailed in Table 9. From Abdallah et al. (2015), the TES, max, and SR Cl values for an individual airfoil have a correlation to one another of 0.9. Thus, all Cl values are perturbed collectively, using the same percentage (δ4). The AoA values are less correlated and are therefore perturbed independently of one another. However, to ensure that nonphysical relative values are not reached, all AoA values are perturbed by the same base percentage (δ1), and then an additional independent variation of a smaller value was added (δ2 and δ3) for αmax and αSR, respectively. The Cd,0 value was also perturbed (δ5).
Figure 9 Perturbation of points of interest in representative Cl and Cd curves.
Table 9 Parameter value ranges of turbine blade aerodynamic parameters.
Cl and Cd curves were altered for each airfoil. However, instead of specifying δ values for each airfoil, these values were specified at the root and tip airfoils, excluding the cylindrical airfoils at the base, which were not modified. Perturbation values for the interior airfoils were computed from a linear fit of the end point values. The method of developing the new curves for each airfoil is detailed here, with a short illustrative sketch following the numbered steps:
1. AoA deltas are applied to the original AoA values via the following equations.
$$\alpha_{\mathrm{TES,new}}=\alpha_{\mathrm{TES,orig}}+\alpha_{\mathrm{TES,orig}}\,\delta_{1}, \tag{9}$$
$$\alpha_{\mathrm{max,new}}=\alpha_{\mathrm{max,orig}}+\alpha_{\mathrm{max,orig}}\left(\delta_{1}+\delta_{2}\right), \tag{10}$$
$$\alpha_{\mathrm{SR,new}}=\alpha_{\mathrm{SR,orig}}+\alpha_{\mathrm{SR,orig}}\left(\delta_{1}+\delta_{3}\right). \tag{11}$$
2. The new AoA values are fit to the nearest existing AoA value on the curve. The AoA resolution is fine enough that all perturbations are captured, though not precisely. This approach may need to be adjusted if the perturbations were to decrease.
3. For all new AoA values, the change in Cl between the original Cl value (Cl,TES) and the Cl curve value at the new AoA (Cl,orig+) is computed via
$$\epsilon = C_{\mathrm{l,orig+}} - C_{\mathrm{l,orig}}. \tag{12}$$
4. The total change in Cl is then computed via
$$C_{\mathrm{l,diff}} = \delta_{4}\,C_{\mathrm{l,orig}} - \epsilon. \tag{13}$$
This ensures that if δ4=0, the final Cl,new value is equivalent to that of the original curve.
5. For the Cl perturbation, the end points are located at the AoA associated with the beginning of the linear Cl region (Cl,lin) and $\alpha = 90^{\circ}$; as these are fixed points, they have Cl,diff = 0. The Cl curve is replaced by a line between (αlin, Cl,lin) and (αTES, δ4 Cl,lin). A piece-wise linear spline – representing perturbations about the original curve – is constructed between the points (αTES,new, δ4 Cl,TES), (αmax,new, δ4 Cl,max), (αSR,new, δ4 Cl,SR), and (90, 0).
6. The Cl,diff values calculated from the spline fit are added to the original Cl curve via
$$C_{\mathrm{l,new}} = C_{\mathrm{l,orig+}} + C_{\mathrm{l,diff}}. \tag{14}$$
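Below is a compact numerical sketch of steps 1–6 for a single airfoil, assuming the polar is given as arrays of AoA (deg) and Cl values. It simplifies the handling of the region below αTES (the explicit line replacement in step 5 is folded into the difference spline) and is meant only to illustrate the logic, not to reproduce the authors' implementation.

```python
# Simplified sketch of the Cl-curve perturbation procedure (steps 1-6 above).
# Array names, and the assumption a_lin < a_tes_new < a_max_new < a_sr_new < 90 deg,
# are illustrative; this is not the code used in the study.
import numpy as np

def perturb_cl_curve(aoa, cl, a_lin, a_tes, a_max, a_sr, d1, d2, d3, d4):
    aoa = np.asarray(aoa)
    cl = np.asarray(cl)

    # Step 1: perturb the AoA locations (Eqs. 9-11)
    a_tes_new = a_tes * (1 + d1)
    a_max_new = a_max * (1 + d1 + d2)
    a_sr_new = a_sr * (1 + d1 + d3)

    # Step 2: snap each new location to the nearest tabulated AoA value
    def snap(a):
        return aoa[np.argmin(np.abs(aoa - a))]
    a_tes_new, a_max_new, a_sr_new = snap(a_tes_new), snap(a_max_new), snap(a_sr_new)

    # Steps 3-4: intended total change at each point of interest, Eqs. (12)-(13)
    def cl_diff(a_old, a_new):
        cl_old = np.interp(a_old, aoa, cl)    # original Cl value at the old location
        cl_plus = np.interp(a_new, aoa, cl)   # Cl the original curve already has at the new AoA
        eps = cl_plus - cl_old                # Eq. (12)
        return d4 * cl_old - eps              # Eq. (13)

    # Step 5: piecewise-linear difference spline, zero at the fixed end points
    knots = [a_lin, a_tes_new, a_max_new, a_sr_new, 90.0]
    diffs = [0.0,
             cl_diff(a_tes, a_tes_new),
             cl_diff(a_max, a_max_new),
             cl_diff(a_sr, a_sr_new),
             0.0]
    cl_diff_curve = np.interp(aoa, knots, diffs)

    # Step 6: add the differences to the original curve, Eq. (14)
    return cl + cl_diff_curve
```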
Figure 10 Sample original and perturbed Cl and Cd curves for each airfoil section used in the NREL 5 MW reference turbine. Perturbed values represent ±10 % of the specified range for each parameter.
Table 10 Parameter value ranges of turbine controller parameters.
Table 11 Percentage of contribution to total number of significant events for ultimate and fatigue loads.
Figure 11 Identification of significant parameters using (a) ultimate and (b) fatigue loads. Significant events are defined by the number of outliers identified across each of the QoIs for all wind speed bins, input parameters, and simulation points.
A similar process was followed for the Cd curves, wherein the Cd value corresponding to $\alpha = 0^{\circ}$ (Cd,0) is perturbed by a specified value (δ5) in the same manner as the Cl values. A piece-wise linear spline is then fit between (−90, Cd,−90), (0, Cd,0), and (90, Cd,90) and added to the original Cd curve. Cd,0 is constrained to not go below 0. Several modified Cl and Cd curves for each airfoil section are shown in Fig. 10. Note that the Cd curves are also perturbed, but by an amount too small to be visible in the plots. These perturbations result in modified Cl and Cd curves that maintain the primary characteristics of the original curve but differ in both magnitude and feature location.
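A matching sketch for the Cd modification is given below. It reads the spline as a difference curve that vanishes at ±90°, in the spirit of the Cl procedure; the function and array names are placeholders.

```python
# Sketch of the Cd perturbation described above; not the authors' implementation.
import numpy as np

def perturb_cd_curve(aoa, cd, d5):
    aoa = np.asarray(aoa)
    cd = np.asarray(cd)
    cd_0 = np.interp(0.0, aoa, cd)                 # original Cd at AoA = 0
    delta_0 = d5 * cd_0                            # change applied at AoA = 0
    # Piecewise-linear difference curve anchored at (-90, 0), (0, delta_0), (90, 0);
    # np.interp clamps to the end values, so the curve is unchanged beyond +/-90 deg
    diff = np.interp(aoa, [-90.0, 0.0, 90.0], [0.0, delta_0, 0.0])
    return np.maximum(cd + diff, 0.0)              # keep the perturbed Cd non-negative
```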
Figure 12 Stacked EE-value histograms of ultimate loads across all wind speed bins, input parameters, and simulation points for all QoIs. The black line represents the threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
There are several unsteady airfoil aerodynamic parameters that can be modified in OpenFAST. By expert opinion (Rick Damiani, personal communication, May 2018), several of these parameters have been identified as having the largest potential variability or impact on turbine response and are therefore included in this study. Several of the parameters in the Beddoes–Leishman-type unsteady airfoil aerodynamics model used here are derivable from the (perturbed) static lift and drag polars, i.e., when the lift and drag polars are perturbed, the associated Beddoes–Leishman unsteady airfoil aerodynamic parameters are perturbed as well. Additionally, there are several other parameters associated with unsteady aerodynamics that are included in OpenFAST. These parameters are
• Tf0 – time constant connected to leading-edge separation of the airfoil,
• TV0 – time constant connected to vortex shedding,
• TVL – time constant connected to the vortex advection process,
• Stsh – Strouhal number associated with the vortex shedding frequency.
These quantities were varied over the ranges detailed in Table 9 and are constant across the blade.
Figure 13 Zoomed-in stacked EE-value histograms of ultimate loads across all wind speed bins, input parameters, and simulation points for all QoIs. The black line represents the threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
### Controller properties
Turbine yaw error was incorporated by directly changing the yaw angle of the turbine (see Table 10). For the collective blade-pitch error, the twist distribution of each blade was shifted uniformly along the blade by the same amount, independent of the twist change in Table 9. For the pitch imbalance error, modified twist distributions were applied to two of the three blades: one blade was given a higher-than-nominal tip twist, one a lower-than-nominal tip twist, and the third was left unchanged.
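As an illustration of how these pitch-error parameters act on the blade description, the sketch below shifts a twist distribution uniformly for the collective error and applies a linearly tapered tip-twist change (zero at the root, as for the twist parameter in Table 9) for the imbalance. All numbers are placeholders; the actual ranges are listed in Table 10.

```python
# Illustrative sketch of the collective pitch error and pitch imbalance applied as
# twist modifications; the span grid, twist values, and error magnitudes are placeholders.
import numpy as np

r = np.linspace(0.0, 1.0, 8)                                  # nondimensional span positions
twist = np.array([13.3, 11.5, 9.0, 6.5, 4.2, 2.3, 0.9, 0.1])  # nominal twist (deg), illustrative

collective_error = 0.3   # deg, identical uniform shift applied to all three blades
tip_imbalance = 0.2      # deg, change at the tip, tapering linearly to zero at the root

blade_high = twist + collective_error + tip_imbalance * r  # higher-than-nominal tip twist
blade_low = twist + collective_error - tip_imbalance * r   # lower-than-nominal tip twist
blade_nominal = twist + collective_error                   # third blade left unchanged
```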
### 4.2.3 Elementary effects
The EE value calculation and analysis process are the same as those used for the wind parameter analysis. Sixty wind file seeds (S) were needed based on a convergence study of the ultimate and fatigue load metrics for all QoIs. This increase in the number of required wind file seeds compared with the inflow study is likely due to some turbine input parameter combinations causing resonance. Based on these numbers, the total number of simulations performed for the aeroelastic turbine parameter case study was $R \times (I + 1) \times S \times B = 30 \times 40 \times 60 \times 3 = 216\,000$.
Figure 14 Stacked EE-value histograms of fatigue loads across all wind speed bins, input parameters, and simulation points for all QoIs. The black line represents the threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Figure 15 Zoomed-in stacked EE-value histograms of fatigue loads across all wind speed bins, input parameters, and simulation points for all QoIs. The black line represents the threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Histograms of the EE values for each of the QoIs are plotted in Figs. 12–15 for the ultimate and fatigue load metrics (associated exceedance probability plots are shown in Appendix B, Figs. B3 and B4). Here, EE values are colored by wind speed and the black vertical line represents the threshold for each QoI. The sharp separation of ultimate load EE values between wind speed bins is evident in Fig. 12. A zoomed-in view of the lower count values is shown in Fig. 13. The more evenly distributed nature of the fatigue load EE values is further highlighted in the histogram plots depicted in Fig. 14 and zoomed-in views in Fig. 15. Unlike ultimate load EE values, all wind speed bins contribute to the outlier count for each QoI. Histogram plots of blade-root ultimate and fatigue bending moment EE values are shown in Figs. 16 and 17, respectively. The sharp separation of ultimate load EE values between wind speed bins is again evident. Highlighted in the fatigue load plots is the more even distribution of threshold-exceeding EE values across wind speed bins.
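Conceptually, the outlier counts behind these histograms can be obtained from an array of |EE| values and the QoI-specific threshold. In the sketch below, the EE array, its dimensions, and the threshold are placeholders for the quantities computed in the study.

```python
# Conceptual sketch of the outlier ("significant event") counting for one QoI;
# the EE values, array sizes, and threshold are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_params, n_bins, n_points = 10, 3, 30                       # placeholder dimensions
ee = np.abs(rng.normal(size=(n_params, n_bins, n_points)))   # |EE| values for one QoI

threshold = np.quantile(ee, 0.98)      # stand-in for the QoI-specific threshold

outliers = ee > threshold
count_per_param = outliers.sum(axis=(1, 2))   # how often each input parameter exceeds the threshold
count_per_bin = outliers.sum(axis=(0, 2))     # contribution of each wind speed bin to the count
```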
Figure 16 EE-value histograms of blade-root bending ultimate moment. Each graph shows one wind speed bin and includes all input parameters. Right column is a zoomed-in view of the left column.
Figure 17 EE-value histograms of blade-root OoP bending fatigue moment. Each graph shows one wind speed bin and includes all input parameters. Right column is a zoomed-in view of the left column.
Table 12 Primary input parameters contributing to ultimate load sensitivity of each QoI. Values indicate how many times the variable contributes to the sensitivity count.
Table 13 Primary input parameters contributing to fatigue load sensitivity of each QoI. Values indicate how many times the variable contributes to the sensitivity count.
Figure 18 EE-value exceedance probability plots for the blade-root bending ultimate moment (a) and blade-root OoP bending fatigue moment (b). Each line represents a different input parameter and wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
The behavior of the blade-root loads is examined in more detail by plotting the exceedance probability separately for each input parameter in Fig. 18. Highlighted in these plots is the contribution of the individual input parameters to the outlier counts. For blade-root bending ultimate moment EE values, blade twist and Cl,t EE values in the near-rated wind speed bin are beyond the threshold for every point in the hyperspace. Yaw error and Cl,b EE values from the near-rated wind speed bin and yaw error from the above-rated wind speed bin also cross the threshold. For blade-root OoP bending fatigue moment EE values, the threshold is exceeded by blade twist and Cl,t EE values from the below- and above-rated wind speeds for every point in the hyperspace. However, for all other relevant input parameters, only certain points in the hyperspace result in threshold exceedance. This indicates that, for certain loads and input parameters, the sensitivity of the turbine is dependent on the combination of turbine parameter values. These results can be used in future studies to more thoroughly investigate the hyperspace to determine how input parameter value combinations contribute to turbine sensitivity.
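For reference, the exceedance probability curves of Fig. 18 (and Appendix B) can be formed by sorting the EE values for a given input parameter and wind speed bin; a minimal sketch, with placeholder inputs, follows.

```python
# Minimal sketch of an empirical exceedance probability curve for a set of EE values.
import numpy as np

def exceedance_curve(ee_values):
    x = np.sort(np.asarray(ee_values))
    p = 1.0 - np.arange(1, x.size + 1) / x.size   # fraction of samples exceeding each value
    return x, p
```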
5 Conclusions
A screening analysis was performed to identify the turbulent wind and aeroelastic parameters to which the structural load and power QoIs of the representative NREL 5 MW wind turbine are most sensitive under normal operating conditions. The purpose of the study was to assess and rank the sensitivity of the resulting turbine loads to the different turbulent wind and turbine parameters. The study did not consider specific site conditions but rather focused on understanding the most sensitive parameters across the range of possible values for a variety of sites.
To limit the number of simulations required, a screening analysis using the EE method was performed instead of a more computationally intensive sensitivity analysis. The EE method assesses the local sensitivity of a parameter at a given location in the parameter hyperspace by varying only that parameter; repeating this assessment at multiple points throughout the hyperspace makes it a global sensitivity analysis. This work modified the general EE formula to examine the sensitivity of parameters across multiple wind speed bins. A radial version of the method was employed, using Sobol numbers as starting points and a fixed delta value of 10 % for the parameter variations. The most sensitive input parameters were identified using the EE value threshold.
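A schematic of the radial EE sampling described here is sketched below for a generic model f over unit-scaled parameters. The Sobol starting points, the one-at-a-time perturbation, and the fixed 10 % delta follow the text; the model, the parameter scaling, and the handling of the study's wind speed bins and load post-processing are omitted or replaced by placeholders.

```python
# Schematic of radial elementary-effects sampling (one baseline plus I perturbed runs
# per starting point); f and the unit-scaled parameter space are placeholders.
import numpy as np
from scipy.stats import qmc

def radial_elementary_effects(f, n_params, n_points, delta=0.10, seed=0):
    sobol = qmc.Sobol(d=n_params, seed=seed)
    starts = sobol.random(n_points)          # starting points in the unit hypercube
    ee = np.empty((n_points, n_params))
    for r, x in enumerate(starts):
        f0 = f(x)                            # baseline evaluation at the starting point
        for i in range(n_params):
            x_pert = x.copy()
            x_pert[i] = x[i] + delta         # perturb one parameter (boundary handling omitted)
            ee[r, i] = (f(x_pert) - f0) / delta
    return ee
```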
Two independent case studies were performed. For the wind parameter case study, it was found that the loads and power are highly sensitive to the shear and turbulence levels in the u direction. To a lesser extent, turbine loads are sensitive to the wind veer and the integral length scale and coherence parameters in the u direction. The combinations of parameters in this study spanned the ranges of several different locations. The parameters were considered independent of one another (conditioned only on wind speed bin), which likely resulted in some nonphysical wind scenarios. However, the screening analysis has shown which parameters are most important to examine in more detail in future work.
The aeroelastic parameter case study showed that the loads and power are highly sensitive to the yaw error and the lift distribution at the outboard section of the blade. To a lesser extent, turbine loads are sensitive to blade twist distribution, lift distribution at the inboard section of the blade, and blade mass imbalance. Additionally, ultimate load EE values are typically separated by wind speed bin, whereas fatigue load EE values are more evenly distributed across wind speed bins.
Through the implemented EE method, many different combinations of input parameters were evaluated. When one or more turbine loads are shown to be sensitive to a specific input parameter, it is possible that only certain combinations of the input parameters produce this sensitivity. This creates opportunities for future work to further investigate which parameter combinations lead to higher turbine sensitivity. In future work, this ranking of the most sensitive parameters could be used to help establish uncertainty bars around predictions of engineering models during validation efforts and to provide insight into probabilistic design methods and site-suitability analysis. Although the ranking of the most sensitive parameters may depend on the turbine size or configuration, the analysis process developed here could be applied to other turbines. This work could also be expanded in the future to include load cases other than normal operation.
Data availability
While this study sought to minimize computational expense, hundreds of thousands of simulations were run to perform the analysis. The models that the work is based on are publicly available through the National Wind Technology Computer-Aided Tools website (https://nwtc.nrel.gov/CAE-Tools, last access: 10 September 2019). The large amount of data produced made it impractical to save and to share publicly. The statistics presented in the plots in this paper serve as the best means to share the information developed.
Appendix A: Mean and standard deviation of elementary effects
To identify which parameters are the most sensitive, some researchers compare the average of the EE values for the different parameters across all input starting points. Additionally, some look at the standard deviation of the EE values for a given parameter across the different starting points, which helps to identify large sensitivity variation between points and thus strong interaction with the values of other parameters. As is common in the EE literature, the most sensitive parameters are typically identified from a plot of the standard deviation versus the mean of the EE values. However, it is difficult to systematically identify the most sensitive parameters using this approach.
The mean of the absolute EE value for the ultimate loads for each QoI with input parameter i and bin b is calculated as
$$\mu_{ib}^{*} = \frac{1}{R}\sum_{r=1}^{R}\left|\mathrm{EE}_{ib}^{r}\right|, \tag{A1}$$
where R is the number of points at which the EE value is calculated. The standard deviation of the EE is then calculated as
$$\sigma_{ib} = \sqrt{\frac{1}{R-1}\sum_{r=1}^{R}\left(\mathrm{EE}_{ib}^{r} - \mu_{ib}\right)^{2}} \tag{A2}$$
and μib is defined as
$$\mu_{ib} = \frac{1}{R}\sum_{r=1}^{R}\mathrm{EE}_{ib}^{r}. \tag{A3}$$
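Assuming the EE values for one QoI are stored as an array with shape (R, number of input parameters, number of wind speed bins), Eqs. (A1)–(A3) can be evaluated as follows; the array name is a placeholder.

```python
# Eqs. (A1)-(A3) for an EE array of shape (R, n_params, n_bins); 'ee' is a placeholder name.
import numpy as np

def ee_statistics(ee):
    mu_star = np.mean(np.abs(ee), axis=0)   # Eq. (A1): mean absolute EE
    mu = np.mean(ee, axis=0)                # Eq. (A3): mean EE
    sigma = np.std(ee, axis=0, ddof=1)      # Eq. (A2): standard deviation with 1/(R-1)
    return mu_star, mu, sigma
```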
This is shown in Figs. A1 and A2 for the blade-root bending ultimate moment and the blade-root bending OoP fatigue moment metrics for both the wind parameter and turbine parameter case studies, respectively. Shown in Fig. A1 is the large sensitivity of shear in the lowest wind speed bin and the large sensitivity of the u turbulence across all wind speed bins. Shown in Fig. A2 is the large sensitivity of yaw error in the below-rated wind speed bin and the large sensitivity of the lift distribution at the outboard section of the blade in the below- and near-rated wind speed bins.
Figure A1 EE standard deviation versus EE mean for blade-root bending moment ultimate load (a) and blade-root OoP bending moment fatigue load (b) at all wind speed bins, for the wind parameter case study (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Figure A2 EE standard deviation versus EE mean for blade-root bending moment ultimate load (a) and blade-root OoP bending moment fatigue load (b) at all wind speed bins, for the turbine parameter case study (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Appendix B: Exceedance probability plots of elementary effects
Figure B1 Exceedance probability plot of ultimate load EE values for each of the wind-inflow parameter QoIs across all wind speed bins, input parameters, and simulation points. Black line represents the defined threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Figure B2 Exceedance probability plot of fatigue load EE values for each of the wind-inflow parameters across all wind speed bins, input parameters, and simulation points. Black line represents the defined threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Figure B3 EE-value exceedance probability plots of ultimate loads for aeroelastic turbine parameters across all wind speed bins, input parameters, and simulation points for all QoIs. The black line represents the defined threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Figure B4 EE-value exceedance probability plots of fatigue loads of aeroelastic turbine parameters across all wind speed bins, input parameters, and simulation points for all QoIs. The black line represents the defined threshold by which outliers are counted for each QoI. Color indicates wind speed bin (blue is below-rated speed, red is near-rated speed, green is above-rated speed).
Author contributions
JJ provided the conceptualization and supervision for this project. ANR developed the EE methodology approach used within both parameter studies. KS led the investigation of the wind turbine property sensitivity study, and LS led the investigation of the wind characteristics study. ANR and KS prepared the article, with support from JJ and LS.
Competing interests
The authors declare that they have no conflict of interest.
Disclaimer
The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government.
Acknowledgements
This work was authored by Alliance for Sustainable Energy, LLC, the manager and operator of the National Renewable Energy Laboratory for the U.S. Department of Energy (DOE) under contract no. DE-AC36-08GO28308.
Financial support
This research has been supported by the U.S. Department of Energy, National Renewable Energy Laboratory (contract no. DE-AC36-08GO28308). Funding was provided by the Department of Energy Office of Energy Efficiency and Renewable Energy, Wind Energy Technologies Office.
Review statement
This paper was edited by Michael Muskulus and reviewed by Imad Abdallah and two anonymous referees.
References
Abdallah, I.: Assessment of extreme design loads for modern wind turbines using the probabilistic approach, PhD Dissertation, DTU Wind Energy, Denmark, No. 0048(EN), 2015.
Abdallah, I., Natarajan, A., and Sorensen, J.: Impact of Uncertainty in Airfoil Characteristics on Wind Turbine Extreme Loads, Renew. Energ., 75, 283–300, https://doi.org/10.1016/j.renene.2014.10.009, 2015.
Berg, J., Mann, J., and Patten, E. G.: Lidar-Observed Stress Vectors and Veer in the Atmospheric Boundary Layer, J. Atmos. Ocean. Tech., 30, 1961–1969, 2013.
Bulaevskaya, V., Wharton, S., Clifton, A., Qualley, G., and Miller, W. O.: Wind power curve modeling in complex terrain using statistical models, J. Renew. Sustain. Energ., 7, 013103, https://doi.org/10.1063/1.4904430, 2015.
Campolongo, F., Cariboni, J., and Saltelli, A.: An Effective Screening Design for Sensitivity Analysis of Large Models, Environ. Model. Softw., 22, 1509–1518, 2007.
Campolongo, F., Saltelli, A., and Cariboni, J.: From Screening to Quantitative Sensitivity Analysis – A Unified Approach, Comput. Phys. Commun., 182, 978–988, 2011.
Clifton, A.: 135-m Meteorological Towers at the NWTC: Instrumentation, Data Acquisition and Processing, Draft, available at: https://wind.nrel.gov/MetData/Publications/ (last access: 5 August 2019), 2014.
Damiani, R. R., Hayman, G. J., and Jonkman, J.: Development and Validation of a New Unsteady Airfoil Aerodynamics Model Within AeroDyn, AIAA SciTech Forum, San Diego, CA, 1–21, https://doi.org/10.2514/6.2016-1007, 2016.
Dimitrov, N., Natarajan, A., and Kelly, M.: Model of Wind Shear Conditional on Turbulence and Its Impact on Wind Turbine Loads, Wind Energy, 18, 1917–1931, 2015.
Dimitrov, N., Kelly, M. C., Vignaroli, A., and Berg, J.: From wind to loads: wind turbine site-specific load estimation with surrogate models trained on high-fidelity load databases, Wind Energ. Sci., 3, 767–790, https://doi.org/10.5194/wes-3-767-2018, 2018.
Downey, R.: Uncertainty in Wind Turbine Life Equivalent Load due to Variation of Site Conditions, MSc Thesis Project, DTU Wind Energy, Denmark, 2006.
Eggers, A., Digumarthi, R., and Chaney, K.: Wind Shear and Turbulence Effects on Rotor Fatigue and Loads Control, T. ASME, 125, 402–409, 2003.
Ehrmann, R. S., Wilcox, B., White, E. B., and Maniaci, D. C.: Effect of Surface Roughness on Wind Turbine Performance, Tech. Rep. SAND2017-10669, Sandia National Laboratory, Albuquerque, NM, October 2017.
Ernst, B., and Seume, J.: Investigation of Site-Specific Wind Field Parameters and Their Effect on Loads of Offshore Wind Turbines, Energies, 5, 3835–3855, 2012.
Francos, A., Elorza, F., Bouraoui, F., Bidoglio, G., and Galbiati, L.: Sensitivity Analysis of Distributed Environmental Simulation Models: Understanding the Model Behavior in Hydrological Studies at the Catchment Scale, Reliab. Eng. Syst. Safe., 79, 205–218, 2003.
Gan, Y., Duan, Q., Gong, W., Tong, C., Sun, Y., Chu, W., Ye, A., Miao, C., and Di, Z.: A Comprehensive Evaluation of Various Sensitivity Analysis Methods: A Case Study with a Hydrological Model, Environ. Model. Softw., 51, 269–285, 2014.
Hayman, G. and Buhl, J.: MLife User's Guide for Version 1.00, NREL Technical Report, 2012.
Holierhoek, J. G., Korterink, H., van de Pieterman, Braam, H., Rademakers, L. W. M. M., Lekou, D. J., and Hecquet, T.: PROTEST: Recommended Practices for Measuring in Situ the Loads on Drive Train, Pitch System, and Yaw System, ECN, PROTEST project deliverable D6, D7, D8, 2010.
Holtslag, M., Bierbooms, W., and Bussel, G.: Wind Turbine Fatigue Loads as a Function of Atmospheric Conditions, Wind Energy, 19, 1917–1932, 2016.
Huang, Y. and Pierson, D.: Identifying Parameter Sensitivity in Water Quality Model of a Reservoir, Water Qual. Res. J. Can., 47, 51–462, 2012.
IEC: IEC 61400–1 Ed. 3, Wind Turbines – Part 1: Design Requirements, 2005.
IEC: IEC 61400–5 Ed. 3, Wind Turbines – Part 5: Rotor Blades Wind Turbines, 2010.
Jansen, M.: Analysis of Variance Designs for Model Output, Comput. Phys. Commun., 117, 35–43, 1999.
Jonkman, B.: TurbSim User's Guide v1.50, NREL Technical Report, NREL/TP-500-46198, 2009.
Jonkman, J., Butterfield, S., Musial, W., and Scott, G.: Definition of a 5 MW Reference Wind Turbine for Offshore System Development, NREL Technical Report, NREL/TP-500-38060, February 2009.
Kalverla, P., Steeneveld, G., Ronda, R., and Holtslag, A. A. M.: An Observational Climatology of Anomalous Wind Events at Offshore Meteomast Ijmuiden (North Sea), J. Wind Eng. Ind. Aerod., 165, 86–99, 2017.
Kelley, N.: Turbulence-Turbine Interaction: The Bases for the Development of the TurbSim Stochastic Simulator, NREL Technical Report, NREL/TP-5000-52353, 2011.
Kelly, M., Larsen, G., Dimitrov, N., and Natarajan, A.: Probabilistic Meteorological Characterization for Turbine Loads, The Science of Making Torque from Wind 2014 (TORQUE 2014), J. Phys. Conf. Ser., 524, 012076, https://doi.org/10.1088/1742-6596/524/1/012076, 2014.
Koukoura, C.: Validated Loads Prediction Models for Offshore Wind Turbines for Enhanced Component Reliability, PhD Dissertation, DTU Wind Energy, Denmark, No. 0026 (EN), 2014.
Lindelöw-Marsden, P.: Uncertainties in Wind Assessment with LIDAR, Risø-R-1681UpWind Deliverable D1, Risø National Laboratory for Sustainable Energy, DTU, Denmark, January 2009.
Loeven, G. J. A. and Bijl, H.: Airfoil Analysis with Uncertain Geometry Using the Probabilistic Collocation Method, AIAA Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL, 7–10 April, 2008.
Madsen, H. A., Bak, C., Paulsen, U. S., Gaunaa, M., Fuglsang, P., Romblad, J., Olesen, N. A., Enevoldsen, P., Laursen, J., and Jensen, L.: The DAN-AERO MW Experiments Final Report, Tech. Rep. Risø-R-1726 (EN), Risø DTU, Roskilde, Denmark, September 2010.
Martin, R., Lazakis, I., Barbouci, S., and Johanning, L.: Sensitivity Analysis of Offshore Wind Farm Operation and Maintenance Cost and Availability, Renew. Energ., 85, 1226–1236, 2016.
Matthaus, D., Bortolotti, P., Loganathan, J., and Bottasso, C. L.: A Study of the Propagation of Aero and Wind Uncertainties and their Effect on the Dynamic Loads of a Wind Turbine, AIAA SciTech Forum, Grapevine, TX, 9–13 January, 2017.
Moriarty, P., Holley, W., and Butterfield, S.: Effect of Turbulence Variation on Extreme Loads Prediction of Wind Turbines, J. Sol. Energ.-T. ASME, 124, 387–395, 2002.
Moroz, E.: Time to Upgrade the Wind Turbine Suitability Process, Presentation at AWEA WindPower, Anaheim, CA, 22-25 May, 2017.
Nelson, L. D., Manuel, L., Sutherland, H. J., and Veers, P. S.: Statistical Analysis of Wind Turbine Inflow and Structural Response Data from the LIST Program, J. Sol. Energ.-T. ASME, 125, 541–550, 2003.
OpenFAST: OpenFAST Documentation, available at: http://openfast.readthedocs.io/en/master (last access: 5 August 2019), November 2017.
Park, J., Manuel, L., and Basu, S.: Toward Isolation of Salient Features in Stable Boundary Layer Wind Fields that Influence Loads on Wind Turbines, Energies, 8, 2977–3012, 2015.
Petrone, G., de Nicola, C., Quagliarella, D., Witeveen, J., and Iaccarino, G.: Wind Turbine Performance Analysis Under Uncertainty, AIAA Aerospace Sciences Meeting, Orlando, FL, 4–7 January, 2011.
Quick, J., Annoni, J., King, R., Dykes, K., Flemming, P., and Ning, A.: Optimization Under Uncertainty for Wake Steering Strategies, Wake Conference, Visby, Sweden, 30 May–1 June 2017, p. 854, 2017.
Rinker, J.: Calculating the Sensitivity of Wind Turbine Loads to Wind Inputs Using Response Surfaces, The Science of Making Torque from Wind (TORQUE 2016), J. Phys. Conf. Ser., 753, 032057, https://doi.org/10.1088/1742-6596/753/3/032057, 2016.
Robertson, A., Sethuraman, L., Jonkman, J., and Quick, J.: Assessment of Wind Parameter Sensitivity on Ultimate and Fatigue Wind Turbine Loads, Presented at the American Institute of Aeronautics and Astronautics SciTech Forum, Orlando, Florida, 8–12 January, 2018.
Saint-Geours, N. and Lilburne, L.: Comparison of Three Spatial Sensitivity Analysis Techniques, Accuracy 2010 Symposium, Leicester, UK, 20–23 July, 2010.
Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., and Tarantola, S.: Global Sensitivity Analysis, The Primer, John Wiley & Sons Ltd., 2008.
Santos, R. and van Dam, J.: Mechanical Loads Test Report for the U.S. Department of Energy 1.5-Megawatt Wind Turbine, NREL Technical Report, NREL/TP-5000-63679, 2015.
Saranyasoontorn, K. and Manuel, L.: On the Study of Uncertainty in Inflow Turbulence Model Parameters in Wind Turbine Applications, 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, 9–12 January 2006.
Saranyasoontorn, K. and Manuel, L.: On the Propagation of Uncertainty in Inflow Turbulence to Wind Turbine Loads, J. Wind Eng. Ind. Aerod., 96, 508–523, 2008.
Saranyasoontorn, K., Manuel, L., and Veers, P. S.: A Comparison of Standard Coherence Models for Inflow Turbulence with Estimates from Field Measurements, J. Sol. Energ.-T. ASME, 126, 1069–1082, 2004.
Sathe, A., Mann, J., Barlas, T., Bierbooms, W. A. A. M., and van Bussel, G. J. W.: Influence of atmospheric stability on wind turbine loads, Wind Energy, 16, 1013–1032, 2012.
Simms, D., Schreck, S., Hand, M., and Fingersh, L. J.: NREL Unsteady Aerodynamics Experiment in the NASA-Ames Wind Tunnel: A Comparison of Predictions to Measurements, NREL Technical Report, NREL/TP-500-29494, National Renewable Energy Laboratory, Golden, CO, June 2001.
Sohier, H., Piet-Lahanier, H., and Farges, J.: Analysis and Optimization of an Air-Launch-to-Orbit Separation, Acta Astronaut., 108, 18–29, 2015.
Solari, G.: Turbulence Modelling for Gust Loading, J. Struct. Eng.-ASCE, 113, 1550–1569, 1987.
Solari, G. and Piccardo, G.: Probabilistic 3-D Turbulence Modeling for Gust Buffeting of Structures, Probabilist. Eng. Mech., 16, 73–86, 2001.
Sutherland, H.: Analysis of the Structural and Inflow Data from the LIST Turbine, Sandia National Laboratories Report, SAND2002-1838J, 2002.
Teunissen, H.: Characteristics of the Mean Wind and Turbulence in the Planetary Boundary Layer, UTIAS 32, Institute for Aerospace Studies, University of Toronto, 1970.
Wagner, R., Courtney, M., and Larsen, T.: Simulation of Shear and Turbulence Impact on Wind Turbine Performance, Risø-R-Report 1722, DTU, Denmark, January 2010.
Walter, K., Weiss, C., Swift, A., Chapman, J., and Kelley, N. D.: Speed and Direction Shear in the Stable Nocturnal Boundary Layer, J. Sol. Energ.-T. ASME, 131, 011013, https://doi.org/10.1115/1.3035818, 2009.
Wharton, S., Newman, J. F., Qualley, G., and Miller, W. O.: Measuring Turbine Inflow with Vertically-Profiling Lidar in Complex Terrain, J. Wind Eng. Ind. Aerodyn., 142, 217–231, 2015.
Witcher, D.: Uncertainty Quantification Techniques in Wind Turbine Design, Presentation, Systems Engineering Workshop 2017, Roskilde, Denmark, 13 September, 2017.
Ziegler, L. and Muskulus, M.: Fatigue Reassessment for Lifetime Extension of Offshore Wind Monopile Substructures, The Science of Making Torque from Wind (TORQUE 2016), J. Phys. Conf. Ser., 753, 092010, https://doi.org/10.1088/1742-6596/753/9/092010, 2016.
The Mann turbulence model (also considered in the IEC 61400-1 standard) is based on a three-dimensional tensor representation of the turbulence derived from rapid distortion of isotropic turbulence using a uniform mean velocity shear (Jonkman, 2009). The Mann model considers the three turbulence components as dependent, representing the correlation between the longitudinal and vertical components resulting from the Reynolds stresses. In the IEC 61400-1 standard, the two spectra (Mann and Kaimal) are equated, resulting in three parameters that may be set for the Mann model. However, there is uncertainty in whether the loads resulting from these two different turbulence spectra are truly consistent.
https://support.bioconductor.org/p/66572/#66636
Normalization factors in metagenomeSeq
jovel_juan:
I have two questions:
QUESTION 1. In metagenomeSeq:
# Calculate proper percentile to normalize data
percentile = cumNormStatFast(genusObj, pFlag = FALSE, rel = 0.1)
percentile
genusObj = cumNorm(genusObj, p = percentile)
genusObj
If I substitute p = percentile with p = 1, will metagenomeSeq perform total sum normalization?
QUESTION 2:
Is it possible to conduct DESeq and TMM normalization using metagenomeSeq?
Answer from Joseph Nathaniel Paulson:
Hi Jovel, Thanks for the interest. First, an answer to your first question: Yes, setting the p = 1 will result in total sum normalization.
To answer the second question: Yes, it is possible to use any other scaling method, please see: A: normFactors when aggregating in metagenomeSeq
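# 'values' below stands for a numeric vector of per-sample normalization factors
# (for example, TMM factors from edgeR or size factors from DESeq); the name is a placeholder.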
pData(obj@expSummary$expSummary)$normFactors = values
jovel_juan:
Thanks for the prompt response. It was really helpful!
https://opentextbc.ca/pressbooks/chapter/how-do-i-write-long-division/
|
# 47 How do I create a long division equation in LaTeX?
Last update: Jan 16/23
Long division requires a LaTeX extension. To load it, use the `\require` command to load the `enclose` extension, which provides the `\enclose` command for MathML notations, including long division (`longdiv`). An extension only needs to be loaded once, after which it is available for the entire book in Pressbooks.
The first time you use it, write `\require{enclose}5\enclose{longdiv}{25}`. It looks like:
$\require{enclose}5\enclose{longdiv}{25}$
After that, write `divisor\enclose{longdiv}{dividend}`. For example, for 20 ÷ 5, write `5\enclose{longdiv}{20}`. It looks like:
$5\enclose{longdiv}{20}$
If you need to include the quotient and the steps of the long division, combine it with the `\begin{array}` or `\begin{align}` environment (for more information, see How do I create an array using LaTeX?).
$\begin{array}{r}\text{quotient}\\ \text{divisor}\enclose{longdiv}{\text{dividend}}\end{array}$
The process of 20 ÷ 5 = 4 with a remainder of 0 would be written as `\begin{array}{r}4\\ 5\enclose{longdiv}{20}\\-20\,\\ \hline0\,\end{array}`.
$\begin{array}{r}4\\ 5\enclose{longdiv}{20}\\-20\,\\ \hline0\,\end{array}$
Here is an example of a long division question solved that has a remainder:
$\begin{array}{l}\hspace{0.7cm}32\text{ R}3\\8 \enclose{longdiv}{259}\\\hspace{0.1cm}-24\\\hline\hspace{0.8cm}19\\\hspace{0.4cm}-16\\\hline\hspace{1cm}3\end{array}$
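For reference, the LaTeX source of this remainder example (copied from the rendered equation above) is `\begin{array}{l}\hspace{0.7cm}32\text{ R}3\\8 \enclose{longdiv}{259}\\\hspace{0.1cm}-24\\\hline\hspace{0.8cm}19\\\hspace{0.4cm}-16\\\hline\hspace{1cm}3\end{array}`.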
|
2023-02-02 17:42:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8030966520309448, "perplexity": 613.4024433220306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00457.warc.gz"}
|
https://www.idexlab.com/openisme/topic-algorithmic-efficiency/
|
# Algorithmic Efficiency - Explore the Science & Experts | ideXlab
## Algorithmic Efficiency
The Experts below are selected from a list of 324 Experts worldwide ranked by ideXlab platform
### Kostas Orginos – One of the best experts on this subject based on the ideXlab platform.
• ##### the mobius domain wall fermion algorithm
Computer Physics Communications, 2017
Co-Authors: Richard C Brower, H Neff, Kostas Orginos
Abstract:
We present a review of the properties of generalized domain wall Fermions, based on a (real) Mobius transformation on the Wilson overlap kernel, discussing their Algorithmic Efficiency, the degree of explicit chiral violations measured by the residual mass ($m_{res}$) and the Ward–Takahashi identities. The Mobius class interpolates between Shamir’s domain wall operator and Borici’s domain wall implementation of Neuberger’s overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter ($\alpha$) reduces chiral violations at finite fifth dimension ($L_s$) but yields exactly the same overlap action in the limit $L_s \rightarrow \infty$. Through the use of 4d Red/Black preconditioning and optimal tuning for the scaling $\alpha(L_s)$, we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed $L_s$. We argue that the residual mass for a tuned Mobius algorithm with $\alpha = O(1/L_s^{\gamma})$ for $\gamma \leq 1$ will eventually fall asymptotically as $m_{res} = O(1/L_s^{1+\gamma})$ in the case of a 5D Hamiltonian without a spectral gap.
##### the mobius domain wall fermion algorithm
arXiv: High Energy Physics – Lattice, 2012
Co-Authors: Richard C Brower, H Neff, Kostas Orginos
Abstract:
We present a review of the properties of generalized domain wall Fermions, based on a (real) Mobius transformation on the Wilson overlap kernel, discussing their Algorithmic Efficiency, the degree of explicit chiral violations measured by the residual mass ($m_{res}$) and the Ward-Takahashi identities. The Mobius class interpolates between Shamir’s domain wall operator and Borici’s domain wall implementation of Neuberger’s overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter ($\alpha$) reduces chiral violations at finite fifth dimension ($L_s$) but yields exactly the same overlap action in the limit $L_s \rightarrow \infty$. Through the use of 4d Red/Black preconditioning and optimal tuning for the scaling $\alpha(L_s)$, we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed $L_s$. At large $L_s$ we argue that the observed scaling for $m_{res} = O(1/L_s)$ for Shamir is replaced by $m_{res} = O(1/L_s^2)$ for the properly tuned Mobius algorithm with $\alpha = O(L_s)$
### Ellen Kuhl – One of the best experts on this subject based on the ideXlab platform.
• ##### growing skin a computational model for skin expansion in reconstructive surgery
Journal of The Mechanics and Physics of Solids, 2011
Co-Authors: Adrian Buganza Tepole, Jonathan Wong, Arun K. Gosain, Ellen Kuhl
Abstract:
The goal of this manuscript is to establish a novel computational model for stretch-induced skin growth during tissue expansion. Tissue expansion is a common surgical procedure to grow extra skin for reconstructing birth defects, burn injuries, or cancerous breasts. To model skin growth within the framework of nonlinear continuum mechanics, we adopt the multiplicative decomposition of the deformation gradient into an elastic and a growth part. Within this concept, we characterize growth as an irreversible, stretch-driven, transversely isotropic process parameterized in terms of a single scalar-valued growth multiplier, the in-plane area growth. To discretize its evolution in time, we apply an unconditionally stable, implicit Euler backward scheme. To discretize it in space, we utilize the finite element method. For maximum Algorithmic Efficiency and optimal convergence, we suggest an inner Newton iteration to locally update the growth multiplier at each integration point. This iteration is embedded within an outer Newton iteration to globally update the deformation at each finite element node. To demonstrate the characteristic features of skin growth, we simulate the process of gradual tissue expander inflation. To visualize growth-induced residual stresses, we simulate a subsequent tissue expander deflation. In particular, we compare the spatio-temporal evolution of area growth, elastic strains, and residual stresses for four commonly available tissue expander geometries. We believe that predictive computational modeling can open new avenues in reconstructive surgery to rationalize and standardize clinical process parameters such as expander geometry, expander size, expander placement, and inflation timing.
• ##### Stretching skin: The physiological limit and beyond☆
International Journal of Non-linear Mechanics, 2011
Co-Authors: Adrian Buganza Tepole, Arun K. Gosain, Ellen Kuhl
Abstract:
Abstract The goal of this paper is to establish a novel computational model for skin to characterize its constitutive behavior when stretched within and beyond its physiological limits. Within the physiological regime, skin displays a reversible, highly non-linear, stretch locking, and anisotropic behavior. We model these characteristics using a transversely isotropic chain network model composed of eight wormlike chains. Beyond the physiological limit, skin undergoes an irreversible area growth triggered through mechanical stretch. We model skin growth as a transversely isotropic process characterized through a single internal variable, the scalar-valued growth multiplier. To discretize the evolution of growth in time, we apply an unconditionally stable, implicit Euler backward scheme. To discretize it in space, we utilize the finite element method. For maximum Algorithmic Efficiency and optimal convergence, we suggest an inner Newton iteration to locally update the growth multiplier at each integration point. This iteration is embedded within an outer Newton iteration to globally update the deformation at each finite element node. To illustrate the characteristic features of skin growth, we first compare the two simple model problems of displacement- and force-driven growth. Then, we model the process of stretch-induced skin growth during tissue expansion. In particular, we compare the spatio-temporal evolution of stress, strain, and area gain for four commonly available tissue expander geometries. We believe that the proposed model has the potential to open new avenues in reconstructive surgery and rationalize critical process parameters in tissue expansion, such as expander geometry, expander size, expander placement, and inflation timing.
### Robert Cutler – One of the best experts on this subject based on the ideXlab platform.
• ##### efficient egg drop contests how middle school girls think about Algorithmic Efficiency
International Computing Education Research Workshop, 2013
Co-Authors: Michelle Friend, Robert Cutler
Abstract:
In this basic interpretative qualitative study, middle school girls with no formal experience in Algorithmic reasoning, abstraction, or algebra were interviewed individually in order to help understand and explain how they think about Algorithmic Efficiency. A contextually relevant problem (determining the maximum height an “egg-drop contraption” could be dropped without breaking) was described to the students who were then asked 1) to come up with the most efficient solution they could to the problem while describing their thinking for the interviewer; and 2) to determine, from a choice of three solutions proposed by the interviewer, which is the most efficient. Students were found to have varying degrees of success in solving the problem or picking the most efficient solution. The most successful recognized the salient features of the problem and used them to generate possible solutions. The least successful were unable to understand the abstractions inherent in the problem. Students recognized that the most efficient of three proposed solutions may depend on the instance of the problem (where the contraption actually failed). They also understood that there was a “best” solution in general, and chose the solution that had the best worst-case scenario. Compared to college students studied previously using similar Algorithmic reasoning problems, middle school girls appeared to perform similarly. They were able to demonstrate sophisticated computational thinking skills while suffering from some of the same Algorithmic thinking limitations as older students.
• ##### ICER – Efficient egg drop contests: how middle school girls think about Algorithmic Efficiency
Proceedings of the ninth annual international ACM conference on International computing education research – ICER '13, 2013
Co-Authors: Michelle Friend, Robert Cutler
Abstract:
In this basic interpretative qualitative study, middle school girls with no formal experience in Algorithmic reasoning, abstraction, or algebra were interviewed individually in order to help understand and explain how they think about Algorithmic Efficiency. A contextually relevant problem (determining the maximum height an “egg-drop contraption” could be dropped without breaking) was described to the students who were then asked 1) to come up with the most efficient solution they could to the problem while describing their thinking for the interviewer; and 2) to determine, from a choice of three solutions proposed by the interviewer, which is the most efficient. Students were found to have varying degrees of success in solving the problem or picking the most efficient solution. The most successful recognized the salient features of the problem and used them to generate possible solutions. The least successful were unable to understand the abstractions inherent in the problem. Students recognized that the most efficient of three proposed solutions may depend on the instance of the problem (where the contraption actually failed). They also understood that there was a “best” solution in general, and chose the solution that had the best worst-case scenario. Compared to college students studied previously using similar Algorithmic reasoning problems, middle school girls appeared to perform similarly. They were able to demonstrate sophisticated computational thinking skills while suffering from some of the same Algorithmic thinking limitations as older students.
### Richard C Brower – One of the best experts on this subject based on the ideXlab platform.
• ##### the mobius domain wall fermion algorithm
Computer Physics Communications, 2017
Co-Authors: Richard C Brower, H Neff, Kostas Orginos
Abstract:
We present a review of the properties of generalized domain wall Fermions, based on a (real) Mobius transformation on the Wilson overlap kernel, discussing their Algorithmic Efficiency, the degree of explicit chiral violations measured by the residual mass ($m_{res}$) and the Ward–Takahashi identities. The Mobius class interpolates between Shamir’s domain wall operator and Borici’s domain wall implementation of Neuberger’s overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter ($\alpha$) reduces chiral violations at finite fifth dimension ($L_s$) but yields exactly the same overlap action in the limit $L_s \rightarrow \infty$. Through the use of 4d Red/Black preconditioning and optimal tuning for the scaling $\alpha(L_s)$, we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed $L_s$. We argue that the residual mass for a tuned Mobius algorithm with $\alpha = O(1/L_s^{\gamma})$ for $\gamma \leq 1$ will eventually fall asymptotically as $m_{res} = O(1/L_s^{1+\gamma})$ in the case of a 5D Hamiltonian without a spectral gap.
##### the mobius domain wall fermion algorithm
arXiv: High Energy Physics – Lattice, 2012
Co-Authors: Richard C Brower, H Neff, Kostas Orginos
Abstract:
We present a review of the properties of generalized domain wall Fermions, based on a (real) Mobius transformation on the Wilson overlap kernel, discussing their Algorithmic Efficiency, the degree of explicit chiral violations measured by the residual mass ($m_{res}$) and the Ward-Takahashi identities. The Mobius class interpolates between Shamir’s domain wall operator and Borici’s domain wall implementation of Neuberger’s overlap operator without increasing the number of Dirac applications per conjugate gradient iteration. A new scaling parameter ($\alpha$) reduces chiral violations at finite fifth dimension ($L_s$) but yields exactly the same overlap action in the limit $L_s \rightarrow \infty$. Through the use of 4d Red/Black preconditioning and optimal tuning for the scaling $\alpha(L_s)$, we show that chiral symmetry violations are typically reduced by an order of magnitude at fixed $L_s$. At large $L_s$ we argue that the observed scaling for $m_{res} = O(1/L_s)$ for Shamir is replaced by $m_{res} = O(1/L_s^2)$ for the properly tuned Mobius algorithm with $\alpha = O(L_s)$
### He Zhao – One of the best experts on this subject based on the ideXlab platform.
• ##### The application of a markov chain model of Algorithmic Efficiency in termination time of TV shows
2008 3rd IEEE Conference on Industrial Electronics and Applications, 2008
Co-Authors: Lixia Du, Jiying Li, He Zhao
Abstract:
This paper presents a Markov method of Algorithmic Efficiency. A production process can be in either a good or a bad state. The true state is unknown and can only be inferred from observations. If the state is good during one period it may deteriorate and become bad during the next period. Two actions are available: continue or replace (for a fixed cost). The objective is to maximize the expected discounted value of the total future profits. We prove that “dominance in expectation” (the expected profit is larger in the good state than in the bad state) suffices for the optimal policy to be of a control limit (CLT) type: continue if and only if the good state probability exceeds the CLT. This condition is weaker than “stochastic dominance”, which has been prevailing. We also show that the “expected profit function” is convex, strictly increasing.
|
2021-06-20 18:24:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6702421307563782, "perplexity": 1793.8322824971497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00485.warc.gz"}
|
https://mediphacos.com/tags/align-system-of-equations-latex-c47a38
|
### Align a system of equations in LaTeX
The standard LaTeX tools for displaying equations can lack flexibility, causing overlapping or even trimming part of an equation when it is too long, and they offer no easy way to align several equations at a common point. The amsmath package overcomes these challenges and lets you choose the layout that best suits your document, even if the equations are really long or several equations have to share the same line. Including it is straightforward: add \usepackage{amsmath} to the preamble.
To display a single equation, use the equation environment if you want it numbered, or equation* (with an asterisk) otherwise; inside the equation environment you can only write a single equation. To reference an equation anywhere in the document, add a \label{...} command to it.
For equations longer than a line, use the multline environment. Insert a double backslash (\\) to set the point where the equation is to be broken; by default the first part is aligned to the left and the last part is displayed on the next line, aligned to the right.
To split one equation into smaller aligned pieces, use the split environment inside an equation environment. The ampersand character (&) specifies the point at which the pieces are aligned, just as if the parts of the equation were in a table.
If there are several equations that you need to align vertically, the align environment will do it (align* suppresses the numbering; when numbering is allowed, you can label each row individually). Again, the ampersand determines where the equations align and a double backslash ends each row; usually the binary relations (=, < and >) are the ones aligned for a nice-looking document. A typical fragment, with the alignment points placed at the start of each line, looks like this:
\begin{align*}
& \vdots \\
& = 12 + 7\int_0^2 \left( -\frac{1}{4}\left(e^{-4t_1} + e^{4t_1-8}\right) \right) \, dt_1 \displaybreak[3] \\
& = 12 - \frac{7}{4}\int_0^2 \left( e^{-4t_1} + e^{4t_1-8} \right) \, dt_1
\end{align*}
If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment (gathered is the corresponding form for use inside another environment). For instance, a small system can be displayed as
$\begin{gathered}5x-y=4\\ x+6y=2\end{gathered}$
If you want the consecutive equations of a group to be numbered (2a), (2b) and so on, wrap them in a subequations environment; note that you then have to increment the equation counter manually right after the subequations environment to get a correct numbering for all following equations.
Piecewise definitions can be written with the cases (or dcases) environment nested inside align, for example
\begin{align}
a_i &= \begin{dcases} b_i & i \geq 0 \\ c_i & i < 0 \end{dcases}
\end{align}
but cases inside align does not, by itself, align the domains of several such equations at the same horizontal position. One remedy is \eqmakebox from the eqparbox package: all elements that share a tag are placed in a box of maximum width, with individual alignment as needed, so \eqmakebox[LHS][r] ensures that every element tagged LHS is right-aligned.
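As a minimal end-to-end sketch of the title topic (an illustration rather than part of the original text, reusing the small system from above and assuming only the amsmath package discussed earlier), the following complete document aligns a linear system at the equals signs:
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% & marks the alignment point (placed just before =); \\ ends each equation.
\begin{align*}
  5x - y &= 4 \\
  x + 6y &= 2
\end{align*}
\end{document}
Replacing align* with align gives each equation its own number.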
|
2021-05-06 17:10:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9694197773933411, "perplexity": 824.5839459036588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00358.warc.gz"}
|
http://finalfantasy.wikia.com/wiki/Requiem_(ability)
|
# Requiem (ability)
Requiem in Final Fantasy X
Requiem (レクイエム, Rekuiemu?) is a recurring ability in the series. It often appears as an ability of the Sing skillset, though it has also appeared as an ability apart from the skillset. It generally deals damage to Undead enemies.
## Appearances
### Final Fantasy III
III Requiem is an ability of the Sing command, accessible only by the Bard while equipped with the Lamia Harp.
It inflicts damage to an enemy equal to a percentage of their HP, which uses the following formula:
$\text{Damage inflicted} = \text{Enemy's current HP} \times (10 + \text{JobLv} / 11)\%$
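For example (illustrative numbers, applying the formula directly): a Bard at job level 33, so that JobLv / 11 = 3, singing to an enemy with 1,000 HP remaining would deal $1000 \times (10 + 3)\% = 130$ damage.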
### Final Fantasy IV
IV Requiem is an ability of the Bardsong command, exclusive to Edward in the Game Boy Advance and PlayStation Portable ports while equipped with the Requiem Harp. It is used as part of Edward's Lunar Trial, and it is used to force the spirits during the trial to rest.
### Final Fantasy V
V Requiem is an ability of the Sing skillset, usable initially by the Bard job until the character learns the Sing ability. It inflicts massive damage and the Sap status effect to all undead enemies, but will do nothing to any other type of enemy.
### Final Fantasy X
Damages all enemies.
—Description
Requiem is Seymour's Overdrive. It can only be seen during the second battle against Sinspawn Gui, when Seymour is a playable guest.
#### Final Fantasy X-2
X-2 Requiem appears in the International and HD Remaster versions, usable by Seymour when he joins the party. The Gun Mage's Blue Bullet ability Cry in the Night, learned from an Oversoul Mega Tonberry, uses the same animation as Requiem.
### Final Fantasy XI
XI Foe Requiem is a series of songs of the Light element available to Bards. Players have access to seven Foe Requiems in total, though at one time game data existed for Foe Requiem VIII. If the spell hits its target, Foe Requiem deals damage over time, akin to the Poison status.
Foe Requiem VII, for example, deals 7 HP damage every 3 seconds. Foe Requiem's potency can be increased through flutes that specifically enhance the song, and its accuracy and duration is affected by the player's Charisma as well as their Magic Accuracy.
### Final Fantasy XIII
XIII Requiem is an enemy ability used only by Orphan in the first battle against him.
This article or section is a stub about an ability in Final Fantasy XIII. You can help the Final Fantasy Wiki by expanding it.
### Final Fantasy XIV
This article or section is a stub about an ability in Final Fantasy XIV. You can help the Final Fantasy Wiki by expanding it.
### Final Fantasy Tactics A2: Grimoire of the Rift
TA2 Requiem is an ability of the Song skillset, exclusive to the Bard class, also exclusive to Hurdy. It is initially mastered on Hurdy, has a range of 4 tiles, and inflicts heavy non-elemental damage to Undead enemies. When used on an Undead that has 0 HP, it will exorcise them from the battlefield, removing them completely.
### Final Fantasy Dimensions
Deals damage to all undead enemies.
—Description
Runic Requiem is the level 12 ability for the Bard class, requiring 340 AP to learn. At the cost of 34 MP, the user will inflict non-elemental damage to all Undead enemies.
### Final Fantasy Legends: Toki no Suishō
This article or section is a stub about an ability in Final Fantasy Legends: Toki no Suishō. You can help the Final Fantasy Wiki by expanding it.
### Final Fantasy Airborne Brigade
This article or section is a stub about an ability in Final Fantasy Airborne Brigade. You can help the Final Fantasy Wiki by expanding it.
### Final Fantasy All the Bravest
ATB Requiem is the ability that is used by the Bard during battle.
### Final Fantasy Record Keeper
This article or section is a stub about an ability in Final Fantasy Record Keeper. You can help the Final Fantasy Wiki by expanding it.
### Final Fantasy Brave Exvius
This article or section is a stub about an ability in Final Fantasy Brave Exvius. You can help the Final Fantasy Wiki by expanding it.
https://www.nature.com/articles/512020a
Credit: Illustration by Ryan Snook
In faded photographs from the 1960s, organic-chemistry laboratories look like an alchemist's paradise. Bottles of reagents line the shelves; glassware blooms from racks of wooden pegs; and scientists stoop over the bench as they busily build molecules.
Fast-forward 50 years, and the scene has changed substantially. A lab in 2014 boasts a battery of fume cupboards and analytical instruments — and no one is smoking a pipe. But the essence of what researchers are doing is the same. Organic chemists typically plan their work on paper, sketching hexagons and carbon chains on page after page as they think through the sequence of reactions they will need to make a given molecule. Then they try to follow that sequence by hand — painstakingly mixing, filtering and distilling, stitching together molecules as if they were embroidering quilts.
But a growing band of chemists is now trying to free the field from its artisanal roots by creating a device with the ability to fabricate any organic molecule automatically. “I would consider it entirely feasible to build a synthesis machine which could make any one of a billion defined small molecules on demand,” declares Richard Whitby, a chemist at the University of Southampton, UK.
### LISTEN
Mark Peplow discusses chemists’ quest to create a machine that can synthesize any organic compound
True, even a menu of one billion compounds would encompass just an infinitesimal fraction of the estimated 10^60 moderately sized carbon-based molecules that could possibly exist. But it would still be at least ten times the number of organic molecules that have ever been synthesized by humans. Such a device could thus offer an astonishing diversity of compounds for investigation by researchers developing drugs, agrochemicals or materials.
“A synthesis machine would be transformational,” says Tim Jamison, a chemist at the Massachusetts Institute of Technology (MIT) in Cambridge. “I can see challenges in every single area,” he adds, “but I don't think it's impossible”.
A British project called Dial-a-Molecule is laying the groundwork. Led by Whitby, the £700,000 (US$1.2-million) project began in 2010 and currently runs until May 2015. So far, it has mostly focused on working out what components the machine would need, and building a collaboration of more than 450 researchers and 60 companies to help work on the idea. The hope, says Whitby, is that this launchpad will help team members to attract the long-term support they need to achieve the vision.

Even if these efforts fall short, say project members, early work towards a synthesis machine could still transform chemistry. It could deliver a host of reactions that work as continuous processes, rather than one step at a time; algorithms that can predict the best way to knit a molecule together; and important advances in how computers tap vast storehouses of data about the reactivity and other properties of chemicals. Perhaps most importantly, it could trigger a cultural sea change by encouraging chemists to record and share many more data about the reactions they run every day.

Some reckon it would take decades to develop an automated chemist as adept as a human — but a less capable, although still useful, device could be a lot closer. “With adequate funding, five years and we're done,” says Bartosz Grzybowski, a chemist at Northwestern University in Evanston, Illinois, who has ambitious plans for a synthesis machine of his own.

### Electric dreams

If chemists are to have any hope of building their dream device, they must pull together three key capabilities. First, the machine must be able to access a database of existing knowledge about how molecules can be built — which reactions create bonds between carbon atoms, for example, or whether using certain reagents to construct one part of a molecule risks damaging other parts. Second, it must be able to feed this knowledge into an algorithm that can map out synthetic steps, in much the same way that a master chess player plans a series of moves to win a game. And finally, it must be able to automatically carry out that sequence using real reagents inside a robotic reactor.

Set-up of the MIT Integrated Continuous Manufacturing Process. Credit: Novartis-MIT Center for Continuous Manufacturing, MIT

The technology for that last step has progressed the farthest. Many labs already own dedicated machines for churning out strands of DNA or polypeptides, and in the past decade, adaptable robot chemists have become increasingly important in commercial pharmaceutical research. But existing machines have limited capabilities: a DNA or protein sequence builder is typically able to combine only a handful of molecular building blocks using fewer than half a dozen reactions. More versatile synthesis workstations are too expensive for most academic groups — costing from £30,000 to more than £500,000 — and still tend to produce molecules with a narrow range of chemical properties.

These workstations also do most of their reactions in the same batch-by-batch manner as humans. But some chemists are trying to develop continuous-flow synthesis, in which reactions occur as the chemicals move through the machine. This can improve speed and yields, and is a lot more amenable to automation.
Jamison, for example, is working on flow chemistry at the Novartis–MIT Center for Continuous Manufacturing in Cambridge, and he is part of a team that last year reported1 the first end-to-end, completely continuous synthesis and formulation of a pharmaceutical: aliskiren hemifumarate, a treatment for high blood pressure. Jamison and his colleagues built a machine (now dismantled) that was more than 7 metres long, and about 2.5 metres high and deep. “It took four years of 'everything that can go wrong, will go wrong',” says Bernhardt Trout, head of the MIT centre and leader of the project. After a lot of trial and error, he says, the researchers got to the point at which they merely had to flip the switch and feed in fresh drums of solvent and raw materials. The machine would hum like a large air-conditioning unit as stirrers whipped up chemicals, pumps whirred, filtration units dripped and squeezed, and a screw conveyer pushed solids through a 2-metre drying tube to be injection-moulded. Finally, after 14 operations and 47 hours, finished tablets dropped down a chute. Batch synthesis would have required 21 operations over 300 hours.

Jamison reckons that there is enormous potential for reactions to be adapted to continuous flow: “I think that it will be well over 50% eventually, maybe even 75%” of all reactions, he says. Progress is accelerating, he adds, because fixing a problem in one step — solids clogging a pipe, say — can offer immediate improvements to other processes.

### A chemical brain

Although automated machines are growing more versatile, teaching a computer to devise its own synthesis remains a massive problem, says Yuichi Tateno, an automation researcher at pharmaceutical company GlaxoSmithKline in Stevenage, UK, and a member of the Dial-a-Molecule collaboration. “The hardware has always been there, but the software and data have let it down,” he says.

Human chemists planning a synthesis tend to use a technique called retrosynthetic analysis. They draw the finished molecule and then pick it apart, erasing bonds that would be easy to form and leaving fragments of molecule that are stable or readily available. This allows them to identify the chemical jigsaw pieces they need as their raw materials, and to devise a strategy for connecting the pieces in the lab. If need be, they can seek inspiration from a commercial database such as SciFinder — an interface to the American Chemical Society's Chemical Abstracts Service — or its main rival Reaxys, offered by publishing giant Elsevier. Entering a molecular structure or a reaction into these databases yields examples in the literature.

But even with online help, says Tateno, humans often fail at synthesis. “With the amount of chemistry that's out there, there's nobody who can know it all.” The hope is that a synthesis machine could one day do much better, says Whitby, not least because computers are so much faster at scanning through terabytes of chemical data to find a specific reaction. The bigger challenge, he adds, is that computers have a much harder time figuring out whether that reaction will actually work in a synthesis, particularly if the target has never been made before.

Chematica is a program that looks for synthetic pathways leading from starter chemicals (red), through sequences of intermediates (blue), to a target compound (yellow). The target in this example is camptothecin: a naturally occurring compound that is the basis of several cancer drugs. Credit: Bartosz Grzybowski

That problem bedevilled Elias Corey, a chemist at Harvard University in Cambridge, Massachusetts, who formalized the rules of retrosynthesis in the 1960s. The following decade, Corey created software called LHASA (Logic and Heuristics Applied to Synthetic Analysis), which could use these rules to suggest sequences of steps towards a synthesis2. But LHASA and its successors have never taken off, says Grzybowski: either the databases have included too few reactions and too many errors, or the algorithms have not properly assessed whether proposed reactions are compatible with all functional groups in the molecule. “If we could just make one chemical bond at a time, in isolation, chemistry would be trivial,” he says.

Grzybowski has spent the past decade building a system called Chematica to address those problems. He started by creating a searchable network of about 6 million organic compounds, connected by a similar number of reactions, drawn from one of the main databases behind Reaxys. His team then spent years cleaning up the data — identifying entries that lack crucial information about reagent compatibility or reaction conditions. Without that kind of clean-up, Chematica would be like a computer chef surveying a gigantic recipe book for dishes that use ice cream, stumbling on baked Alaska, and concluding that ice cream can withstand very high temperatures — missing the fact that cooking ice cream in an oven only works with an insulating shield of meringue.

Chematica includes such crucial information, so its proposed syntheses of novel molecules — based on about 30,000 retrosynthetic rules — can be much more trustworthy. The team also designed Chematica to take a holistic view of synthesis: it not only hunts for the best reaction to use at each step, but also considers the efficiency of every possible synthetic route as a whole. This means that a poor yield in one step can be counterbalanced by a succession of high-yielding reactions elsewhere in the sequence. “In 5 seconds we can screen 2 billion possible synthetic routes,” says Grzybowski.

### Stronger, faster, cheaper

When Grzybowski first unveiled the network behind Chematica in 2005 (ref. 3), “people said it was bullshit”, he laughs. But that changed in 2012, when he and his team published a trio of landmark papers4,5,6 showing Chematica in action. For example, the program discovered4 a slew of 'one pot' syntheses in which reagents could be thrown into a vessel one after the other, without all the troublesome separation and purification of products after each step. The group tested Chematica's suggestions for making a range of quinolines — structures commonly found in drugs and dyes — and showed that many were more efficient than conventional approaches.

Chematica can also look up information about the cost of starting materials and estimate the labour involved in each reaction, allowing it to predict the cheapest route to a particular molecule. When Grzybowski's lab tested 51 cut-price syntheses suggested by Chematica5, it collectively trimmed costs by more than 45%.

These demonstrations have impressed synthetic chemists, although few have had a chance to test Chematica.
That is because Grzybowski is hoping to commercialize the system: he is negotiating with Elsevier to incorporate the program into Reaxys, and is working with the pharmaceutical industry to test Chematica's synthesis suggestions for biologically active, naturally occurring molecules. Grzybowski is also bidding for a grant from the Polish government, worth up to 7 million złoty (US$2.3 million), to use Chematica as the brain of a synthesis machine that can prove itself by automatically planning and executing syntheses of at least three important drug molecules.
Others are doubtful that will happen — at least any time soon. For the foreseeable future, “there will always be a significant need for human intervention”, says Simon Tyler, commercial director of CatScI, a contract-research company in Cardiff, UK, that is involved in Dial-a-Molecule. “We won't have RoboCops wandering around in the lab.”
And as long as programmes like Chematica rely on databases of published studies, says Whitby, they will struggle to design reliable synthetic routes to unknown compounds. To build a synthesis machine, “we need to be able to predict when a reaction is going to work — but more importantly we need to be able to predict when it's going to fail”.
Unfortunately, those failures are rarely recorded in the literature. “We only publish the successes, a cleaned-up version of what happens in the lab,” says Whitby. “We also lose a lot of information: what really was the temperature, what was the stirring speed, how much solvent did you use?”
One solution is to record those successes and failures using electronic laboratory notebooks (ELNs), computer systems for logging raw experimental data that are widely used in industry but still rare in academia (see Nature 481, 430–431; 2012). “A lot of people ask, 'Who reads all these data?' The point is that machines use them — they can search the data,” explains Mat Todd, a chemist at the University of Sydney in Australia.
In principle, automated workstations and instruments could send information to an ELN, which would upload the details to an open-access database where they could help a synthesis machine to predict how reliable a reaction might be. “If we really did know the history of every chemical reaction that had ever been done, we'd have amazing predictive capabilities,” says Todd.
Dial-a-Molecule researchers have coordinated trials of ELNs in academic labs; started to devise a standardized, machine-readable format for ELN records; and developed software that can push those data into open databases such as ChemSpider. Others in the network have developed prototype software called PatentEye, which could pull in extra data by scraping and cataloguing chemical information from patents.
Many of those dreaming of a synthesis machine agree that widespread data harvesting will require a huge cultural shift. “That's absolutely the biggest barrier,” says Todd. “In chemistry, we don't have that culture of sharing, and I think it's got to change.”
Money is also a significant hurdle. The expense of automated workstations means that few academics are familiar with them or their potential for capturing data. And with a large workforce of graduate students to draw on, academic labs often have little incentive to automate. Whitby is lobbying for a national centre that would host state-of-the-art automated synthesis equipment and software, to encourage their development and use. Until that materializes, he hopes that Dial-a-Molecule will inspire a new generation of chemists to embrace data sharing and automation.
Grzybowski, for one, is convinced that the synthesis machine can become a reality: “The only thing that can kill it is scepticism.”
http://www.ni.com/documentation/en/labview-comms/1.0/node-ref/cholesky-factorization/
Performs Cholesky factorization on a symmetric or Hermitian positive definite matrix.
a: A symmetric or Hermitian positive definite matrix. If a is not symmetric or Hermitian, this node uses only the upper triangular portion of a. If a is not positive definite, this node returns an error. Default: empty array.
cholesky: The factored, upper triangular matrix. This matrix is $R$ such that $A={R}^{T}R$ for real inputs and $A={R}^{H}R$ for complex inputs.
error: A value that represents any error or warning that occurs when this node executes.
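The same convention is easy to reproduce with NumPy for comparison; a minimal sketch (not the LabVIEW node itself), where the upper-triangular factor is obtained by transposing NumPy's lower-triangular result:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])       # symmetric positive definite example

# np.linalg.cholesky raises LinAlgError when A is not positive definite,
# mirroring this node's error behaviour.
L = np.linalg.cholesky(A)        # lower triangular, A = L @ L.T
R = L.T                          # upper triangular, A = R.T @ R, the convention used here

assert np.allclose(R.T @ R, A)
```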
|
2019-08-18 17:25:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.645037055015564, "perplexity": 767.3167704437212}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313987.32/warc/CC-MAIN-20190818165510-20190818191510-00089.warc.gz"}
|
https://planet.lisp.org/
|
#### Lispers.de — Berlin Lispers Meetup, Monday, 24th June 2019
· 5 days ago
We meet again on Monday 8pm, 24th June. Our host this time is James Anderson (www.dydra.com).
Berlin Lispers is about all flavors of Lisp including Common Lisp and Emacs Lisp, Clojure and Scheme.
We will have one talk by James Anderson this time.
The title of the talk is "Using the compiler to walk code".
We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.
#### Nicolas Hafner — Daily Game Development Streams - Confession 85
· 7 days ago
Starting tomorrow I will be doing daily live game development streams for my current game project, Leaf. The streams will happen from 20:00 to 22:00 CEST, at https://stream.shinmera.com. There's also a short summary and countdown page here.
The streams will first primarily focus on rewriting some of the internals of the game and discussing viable designs for the level and tilemap representation. After that it'll probably fan out into some art design and rendering techniques. The streams won't be contiguous, so I'll be working on things off-stream as well, but I'll try to work in brief recaps at the beginning to ensure everyone is up to speed on the current progress.
If you would like to study the source code yourself, the two primary projects are Leaf itself, and the game engine Trial. Both are open source, and I push changes regularly. Feedback or discussion on the code and design is of course very much welcome.
I'm doing this regular stream mostly to force myself out of a terrible depressive episode I've been having, and this should provide enough external pressure to get me out of it and back into my regular, productive schedule. Of course, the more people are interested in this, the greater the pressure, so I hope I'll see at least some people in chat during the streams. I would really appreciate it tremendously, even if you don't have much gamedev or technical expertise, or are just there to shitpost. Really, I genuinely mean it!
That said, I hope to see you there!
#### Nicolas Hafner — Improving Portability - Confession 84
· 19 days ago
Common Lisp has a big standard, but sometimes that standard still doesn't cover everything one might need to be able to do. And so, various implementations offer various extensions in various ways to help people get their work done. A lot of these extensions are similar and compatible with each other, just with different interfaces. Thus, to make libraries work on as many implementations as possible, portability libraries have been made, and are still being made.
Unfortunately, writing one of these libraries to work across all common implementations, let alone all existing implementations, is a tremendous amount of work, especially since the implementations that are still maintained change over time as well to add, remove, and improve their extensions. Thus, quite a few of the portability libraries that exist do not support all implementations that they could, or not as well as they could. Similarly, not all implementations provide all extensions that they could, and probably should (like Package-Local Nicknames).
In hopes of aiding this process, I have decided to create a small project that documents the state of the various portability libraries and implementations in a way that shows the information at a glance. You can see the state here and help improve it if you see a missing or incorrectly labelled library on GitHub.
It would make me overjoyed to see people help out the various projects to support more implementations, and to lobby their favourite implementation to take up support for new extensions*. I'm the maintainer of a few of the libraries listed on the page as well (atomics, definitions, dissect, float-features, trivial-arguments), and naturally I would very much welcome improvements to them. It would be especially great if the support for LispWorks in specific could be added, as I can't seem to get it to run on my system.
In any case, I hope that this project will nudge people towards caring a bit more about portability in the Lisp ecosystem. Maybe one day we'll be able to just pick and choose our implementations and libraries without having to worry about whether that specific combination will work or not. I think that would be fantastic.
*I'm looking at you, Allegro and LispWorks. Please support Package-Local Nicknames!
On a side note, I know I haven't been writing a lot of stuff here lately. I hope to correct that with a couple of entries coming in the next few days. As always, thanks for reading!
#### Lispers.de — Lisp-Meetup in Hamburg on Monday, 3rd June 2019
· 26 days ago
We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CEST on 3rd June 2019.
This is an informal gathering of Lispers of all experience levels.
#### Lispers.de — Berlin Lispers Meetup, Monday, 27th May 2019
· 30 days ago
We meet again on Monday 8pm, 27th May. Our host this time is James Anderson (www.dydra.com).
Berlin Lispers is about all flavors of Lisp including Scheme, Common Lisp, Emacs Lisp, Clojure.
We will have one talk this time.
Ingo Mohr concludes his series of talks telling us about "Das Ende von Lisp und Lisp-Maschine in Ostdeutschland" ("The End of Lisp and the Lisp Machine in East Germany").
We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.
#### Quicklisp news — May 2019 Quicklisp dist update now available
· 35 days ago
New projects:
• arrival — Classical planning plan validator written in modern Common Lisp — LLGPL
• assert-p — A library of assertions written in Common Lisp. — GPLv3
• assertion-error — Error pattern for assertion libraries in Common Lisp. — GNU General Public License v3.0
• cl-cxx — Common Lisp Cxx Interoperation — MIT
• cl-digikar-utilities — A utility library, primarily intended to provide an easy interface to vectors and hash-tables. — MIT
• cl-just-getopt-parser — Getopt-like parser for command-line options and arguments — Creative Commons CC0 (public domain dedication)
• cl-postgres-datetime — Date/time integration for cl-postgres that uses LOCAL-TIME for types that use time zones and SIMPLE-DATE for those that don't — BSD-3-Clause
• cl-rdkafka — CFFI bindings for librdkafka to enable interaction with a Kafka cluster. — GPLv3
• cl-stream — Stream classes for Common Lisp — MIT
• numpy-file-format — Read and write Numpy .npy and .npz files. — MIT
• py4cl — Call Python libraries from Common Lisp — MIT
• sealable-metaobjects — A CLOSsy way to trade genericity for performance. — MIT
• simple-actors — Port of banker.scm from Racket — BSD
• trivial-left-pad — Ports the functionality of the very popular left-pad from npm. — MIT
Removed projects: cffi-objects, cl-blapack, common-lisp-stat, fnv, lisp-matrix, qtools-commons, simple-gui, trivia.balland2006.
A number of projects stopped working because of internal updates to SBCL; the projects that relied on those internals have not been updated to use supported interfaces instead.
To get this update, use (ql:update-dist "quicklisp"). Enjoy!
#### Lispers.de — Lisp-Meetup in Hamburg on Monday, 6th May 2019
· 56 days ago
We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 6th May 2019.
Christian was at ELS and will report, and we will talk about our attempts at a little informal language benchmark.
This is an informal gathering of Lispers of all experience levels.
Update: the fine folks from stk-hamburg.de will be there and talk about their Lisp-based work!
#### Paul Khuong — Fractional Set Covering With Experts
· 62 days ago
Last winter break, I played with one of the annual capacitated vehicle routing problem (CVRP) “Santa Claus” contests. Real world family stuff took precedence, so, after the obvious LKH with Concorde polishing for individual tours, I only had enough time for one diversification moonshot. I decided to treat the high level problem of assembling prefabricated routes as a set covering problem: I would solve the linear programming (LP) relaxation for the min-cost set cover, and use randomised rounding to feed new starting points to LKH. Add a lot of luck, and that might just strike the right balance between solution quality and diversity.
Unsurprisingly, luck failed to show up, but I had ulterior motives: I’m much more interested in exploring first order methods for relaxations of combinatorial problems than in solving CVRPs. The routes I had accumulated after a couple days turned into a set covering LP with 1.1M decision variables, 10K constraints, and 20M nonzeros. That’s maybe denser than most combinatorial LPs (the aspect ratio is definitely atypical), but 0.2% non-zeros is in the right ballpark.
As soon as I had that fractional set cover instance, I tried to solve it with a simplex solver. Like any good Googler, I used Glop... and stared at a blank terminal for more than one hour.
Having observed that lack of progress, I implemented the toy I really wanted to try out: first order online “learning with experts” (specifically, AdaHedge) applied to LP optimisation. I let this not-particularly-optimised serial CL code run on my 1.6 GHz laptop for 21 hours, at which point the first order method had found a 4.5% infeasible solution (i.e., all the constraints were satisfied with $$\ldots \geq 0.955$$ instead of $$\ldots \geq 1$$). I left Glop running long after the contest was over, and finally stopped it with no solution after more than 40 days on my 2.9 GHz E5.
Given the shape of the constraint matrix, I would have loved to try an interior point method, but all my licenses had expired, and I didn’t want to risk OOMing my workstation. Erling Andersen was later kind enough to test Mosek’s interior point solver on it. The runtime was much more reasonable: 10 minutes on 1 core, and 4 on 12 cores, with the sublinear speed-up mostly caused by the serial crossover to a simplex basis.
At 21 hours for a naïve implementation, the “learning with experts” first order method isn’t practical yet, but also not obviously uninteresting, so I’ll write it up here.
Using online learning algorithms for the “experts problem” (e.g., Freund and Schapire’s Hedge algorithm) to solve linear programming feasibility is now a classic result; Jeremy Kun has a good explanation on his blog. What’s new here is:
1. Directly solving the optimisation problem.
2. Confirming that the parameter-free nature of AdaHedge helps.
The first item is particularly important to me because it’s a simple modification to the LP feasibility meta-algorithm, and might make the difference between a tool that’s only suitable for theoretical analysis and a practical approach.
I’ll start by reviewing the experts problem, and how LP feasibility is usually reduced to the former problem. After that, I’ll cast the reduction as a surrogate relaxation method, rather than a Lagrangian relaxation; optimisation should flow naturally from that point of view. Finally, I’ll guess why I had more success with AdaHedge this time than with Multiplicative Weight Update eight years ago.1
## The experts problem and LP feasibility
I first heard about the experts problem while researching dynamic sorted set data structures: Igal Galperin’s PhD dissertation describes scapegoat trees, but is really about online learning with experts. Arora, Hazan, and Kale’s 2012 survey of multiplicative weight update methods is probably a better introduction to the topic ;)
The experts problem comes in many variations. The simplest form sounds like the following. Assume you’re playing a binary prediction game over a predetermined number of turns, and have access to a fixed finite set of experts at each turn. At the beginning of every turn, each expert offers their binary prediction (e.g., yes it will rain today, or it will not rain today). You then have to make a prediction yourself, with no additional input. The actual result (e.g., it didn’t rain today) is revealed at the end of the turn. In general, you can’t expect to be right more often than the best expert at the end of the game. Is there a strategy that bounds the “regret,” how many more wrong predictions you’ll make compared to the expert(s) with the highest number of correct predictions, and in what circumstances?
Amazingly enough, even with an omniscient adversary that has access to your strategy and determines both the experts’ predictions and the actual result at the end of each turn, a stream of random bits (hidden from the adversary) suffice to bound our expected regret in $$\mathcal{O}(\sqrt{T}\,\lg n)$$, where $$T$$ is the number of turns and $$n$$ the number of experts.
I long had trouble with that claim: it just seems too good of a magic trick to be true. The key realisation for me was that we’re only comparing against individual experts. If each expert is a move in a matrix game, that’s the same as claiming you’ll never do much worse than any pure strategy. One example of a pure strategy is always playing rock in Rock-Paper-Scissors; pure strategies are really bad! The trick is actually in making that regret bound useful.
We need a more continuous version of the experts problem for LP feasibility. We’re still playing a turn-based game, but, this time, instead of outputting a prediction, we get to “play” a mixture of the experts (with non-negative weights that sum to 1). At the beginning of each turn, we describe what weight we’d like to give to each expert (e.g., 60% rock, 40% paper, 0% scissors). The cost (equivalently, payoff) for each expert is then revealed (e.g., $$\mathrm{rock} = -0.5$$, $$\mathrm{paper} = 0.5$$, $$\mathrm{scissors} = 0$$), and we incur the weighted average from our play (e.g., $$60\% \cdot -0.5 + 40\% \cdot 0.5 = -0.1$$) before playing the next round.2 The goal is to minimise our worst-case regret, the additive difference between the total cost incurred by our mixtures of experts and that of the a posteriori best single expert. In this case as well, online learning algorithms guarantee regret in $$\mathcal{O}(\sqrt{T} \, \lg n)$$.
This line of research is interesting because simple algorithms achieve that bound, with explicit constant factors on the order of 1,3 and those bounds are known to be non-asymptotically tight for a large class of algorithms. Like dense linear algebra or fast Fourier transforms, where algorithms are often compared by counting individual floating point operations, online learning has matured into such tight bounds that worst-case regret is routinely presented without Landau notation. Advances improve constant factors in the worst case, or adapt to easier inputs in order to achieve “better than worst case” performance.
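For concreteness, the simplest such algorithm, Freund and Schapire's Hedge, keeps one cumulative loss per expert and plays weights proportional to exp(-eta × cumulative loss). A minimal Python sketch (not code from the post; the fixed learning rate eta is exactly the knob that AdaHedge, discussed below, tunes on the fly):

```python
import numpy as np

def hedge_weights(cum_loss, eta):
    """Hedge / exponential weights: each expert gets weight exp(-eta * its cumulative loss)."""
    z = -eta * (cum_loss - cum_loss.min())   # shift by the minimum for numerical stability
    w = np.exp(z)
    return w / w.sum()
```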
The reduction below lets us take any learning algorithm with an additive regret bound, and convert it to an algorithm with a corresponding worst-case iteration complexity bound for $$\varepsilon$$-approximate LP feasibility. An algorithm that promises low worst-case regret in $$\mathcal{O}(\sqrt{T})$$ gives us an algorithm that needs at most $$\mathcal{O}(1/\varepsilon\sp{2})$$ iterations to return a solution that almost satisfies every constraint in the linear program, where each constraint is violated by $$\varepsilon$$ or less (e.g., $$x \leq 1$$ is actually $$x \leq 1 + \varepsilon$$).
We first split the linear program into two components, a simple domain (e.g., the non-negative orthant or the $$[0, 1]\sp{d}$$ box) and the actual linear constraints. We then map each of the latter constraints to an expert, and use an arbitrary algorithm that solves our continuous version of the experts problem as a black box. At each turn, the black box will output a set of non-negative weights for the constraints (experts). We will average the constraints using these weights, and attempt to find a solution in the intersection of our simple domain and the weighted average of the linear constraints. We can do so in the “experts problem” setting by considering each linear constraint’s violation as a payoff, or, equivalently, satisfaction as a loss.
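Spelled out as a loop, the reduction looks something like the following sketch, assuming every constraint is written as a row of an A x ≤ b system; the names weights_fn and solve_subproblem are placeholders for the online learner and for whatever finds a point of the simple domain satisfying the single averaged constraint (or reports that none exists):

```python
import numpy as np

def experts_feasibility(A, b, weights_fn, solve_subproblem, iters):
    """Approximate feasibility for {x in domain : A @ x <= b} via learning with experts.
    Each row of A is one constraint, i.e., one expert; satisfaction is that expert's loss."""
    cum_loss = np.zeros(A.shape[0])
    acc = []
    for _ in range(iters):
        w = weights_fn(cum_loss)               # non-negative weights that sum to 1
        x = solve_subproblem(w @ A, w @ b)     # point of the domain satisfying the averaged constraint
        if x is None:
            return None                        # the original LP is infeasible
        acc.append(x)
        cum_loss += b - A @ x                  # satisfaction is the loss; violation is the payoff
    return np.mean(acc, axis=0)                # epsilon-feasible on average, by the regret bound
```

A weights_fn can be as simple as `lambda L: hedge_weights(L, eta)` using the sketch above.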
Let’s use Stigler’s Diet Problem with three foods and two constraints as a small example, and further simplify it by disregarding the minimum value for calories, and the maximum value for vitamin A. Our simple domain here is at least the non-negative orthant: we can’t ingest negative food. We’ll make things more interesting by also making sure we don’t eat more than 10 servings of any food per day.
The first constraint says we mustn’t get too many calories
$72 x\sb{\mathrm{corn}} + 121 x\sb{\mathrm{milk}} + 65 x\sb{\mathrm{bread}} \leq 2250,$
and the second constraint (tweaked to improve this example) ensures we get enough vitamin A
$107 x\sb{\mathrm{corn}} + 400 x\sb{\mathrm{milk}} \geq 5000,$
or, equivalently,
$-107 x\sb{\mathrm{corn}} - 400 x\sb{\mathrm{milk}} \leq -5000,$
Given weights $$[¾, ¼]$$, the weighted average of the two constraints is
$27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,$
where the coefficients for each variable and for the right-hand side were averaged independently.
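A quick NumPy check of that averaging, with the two constraints written as rows of an A x ≤ b system:

```python
import numpy as np

A = np.array([[  72.0,  121.0, 65.0],    # calories <= 2250
              [-107.0, -400.0,  0.0]])   # vitamin A >= 5000, flipped into a <= constraint
b = np.array([2250.0, -5000.0])
w = np.array([0.75, 0.25])

print(w @ A)   # [ 27.25  -9.25  48.75]
print(w @ b)   # 437.5
```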
The subproblem asks us to find a feasible point in the intersection of these two constraints: $27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,$ $0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.$
Classically, we claim that this is just Lagrangian relaxation, and find a solution to
$\min 27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}}$ subject to $0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.$
In the next section, I’ll explain why I think this analogy is wrong and worse than useless. For now, we can easily find the minimum one variable at a time, and find the solution $$x\sb{\mathrm{corn}} = 0$$, $$x\sb{\mathrm{milk}} = 10$$, $$x\sb{\mathrm{bread}} = 0$$, with objective value $$-92.5$$ (which is $$530$$ less than $$437.5$$).
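Minimising a linear function over a box decomposes per variable: push a variable to its upper bound when its coefficient is negative, and leave it at its lower bound otherwise. A one-line check of the numbers above:

```python
import numpy as np

c = np.array([27.25, -9.25, 48.75])
lower, upper = np.zeros(3), np.full(3, 10.0)

x = np.where(c < 0, upper, lower)   # per-variable choice over the box
print(x, c @ x)                     # [ 0. 10.  0.] -92.5
```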
In general, three things can happen at this point. We could discover that the subproblem is infeasible. In that case, the original non-relaxed linear program itself is infeasible: any solution to the original LP satisfies all of its constraints, and thus would also satisfy any weighted average of the same constraints. We could also be extremely lucky and find that our optimal solution to the relaxation is ($$\varepsilon$$-)feasible for the original linear program; we can stop with a solution. More commonly, we have a solution that’s feasible for the relaxation, but not for the original linear program.
Since that solution satisfies the weighted average constraint and payoffs track constraint violation, the black box’s payoff for this turn (and for every other turn) is non-positive. In the current case, the first constraint (on calories) is satisfied by $$1040$$, while the second (on vitamin A) is violated by $$1000$$. On weighted average, the constraints are satisfied by $$\frac{1}{4}(3 \cdot 1040 - 1000) = 530.$$ Equivalently, they’re violated by $$-530$$ on average.
We’ll add that solution to an accumulator vector that will come in handy later.
The next step is the key to the reduction: we’ll derive payoffs (negative costs) for the black box from the solution to the last relaxation. Each constraint (expert) has a payoff equal to its level of violation in the relaxation’s solution. If a constraint is strictly satisfied, the payoff is negative; for example, the constraint on calories is satisfied by $$1040$$, so its payoff this turn is $$-1040$$. The constraint on vitamin A is violated by $$1000$$, so its payoff this turn is $$1000$$. Next turn, we expect the black box to decrease the weight of the constraint on calories, and to increase the weight of the one on vitamin A.
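The payoffs are simply the constraint residuals at the subproblem's solution, as a quick check confirms:

```python
import numpy as np

A = np.array([[72.0, 121.0, 65.0], [-107.0, -400.0, 0.0]])
b = np.array([2250.0, -5000.0])
w = np.array([0.75, 0.25])
x = np.array([0.0, 10.0, 0.0])

payoffs = A @ x - b          # [-1040.  1000.]: calories satisfied by 1040, vitamin A violated by 1000
print(payoffs, w @ payoffs)  # weighted average -530.0, i.e., satisfied by 530
```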
After $$T$$ turns, the total payoff for each constraint is equal to the sum of violations by all solutions in the accumulator. Once we divide both sides by $$T$$, we find that the divided payoff for each constraint is equal to its violation by the average of the solutions in the accumulator. For example, if we have two solutions, one that violates the calories constraint by $$500$$ and another that satisfies it by $$1000$$ (violates it by $$-1000$$), the total payoff for the calories constraint is $$-500$$, and the average of the two solutions does strictly satisfy the linear constraint by $$\frac{500}{2} = 250$$!
We also know that we only generated feasible solutions to the relaxed subproblem (otherwise, we’d have stopped and marked the original LP as infeasible), so the black box’s total payoff is $$0$$ or negative.
Finally, we assumed that the black box algorithm guarantees an additive regret in $$\mathcal{O}(\sqrt{T}\, \lg n)$$, so the black box’s payoff of (at most) $$0$$ means that any constraint’s payoff is at most $$\mathcal{O}(\sqrt{T}\, \lg n)$$. After dividing by $$T$$, we obtain a bound on the violation by the arithmetic mean of all solutions in the accumulator: for every constraint, that violation is in $$\mathcal{O}\left(\frac{\lg n}{\sqrt{T}}\right)$$. In other words, the number of iterations $$T$$ must scale with $$\mathcal{O}\left(\frac{\lg n}{\varepsilon\sp{2}}\right)$$, which isn’t bad when $$n$$ is in the millions but $$\varepsilon \approx 0.01$$.
Theoreticians find this reduction interesting because there are concrete implementations of the black box, e.g., the multiplicative weight update (MWU) method with non-asymptotic bounds. For many problems, this makes it possible to derive the exact number of iterations necessary to find an $$\varepsilon-$$feasible fractional solution, given $$\varepsilon$$ and the instance’s size (but not the instance itself).
That’s why algorithms like MWU are theoretically useful tools for fractional approximations, even though we already have subgradient methods that only need $$\mathcal{O}\left(\frac{1}{\varepsilon}\right)$$ iterations: state-of-the-art algorithms for learning with experts offer explicit non-asymptotic regret bounds that yield, for many problems, iteration bounds that only depend on the instance’s size, but not its data. While the iteration count when solving LP feasibility with MWU scales with $$\frac{1}{\varepsilon\sp{2}}$$, it is merely proportional to $$\lg n$$, the log of the number of linear constraints. That’s attractive, compared to subgradient methods for which the iteration count scales with $$\frac{1}{\varepsilon}$$, but also scales linearly with respect to instance-dependent values like the distance between the initial dual solution and the optimum, or the Lipschitz constant of the Lagrangian dual function; these values are hard to bound, and are often proportional to the square root of the number of constraints. Given the choice between $$\mathcal{O}\left(\frac{\lg n}{\varepsilon\sp{2}}\right)$$ iterations with explicit constants, and a looser $$\mathcal{O}\left(\frac{\sqrt{n}}{\varepsilon}\right)$$, it’s obvious why MWU and online learning are powerful additions to the theory toolbox.
Theoreticians are otherwise not concerned with efficiency, so the usual answer to someone asking about optimisation is to tell them they can always reduce linear optimisation to feasibility with a binary search on the objective value. I once made the mistake of implementing that binary search strategy. Unsurprisingly, it wasn’t useful. I also tried another theoretical reduction, where I looked for a pair of primal and dual $$\varepsilon-$$feasible solutions that happened to have the same objective value. That also failed, in a more interesting manner: since the two solutions had to have almost the same value, the universe spited me by sending back solutions that were primal and dual infeasible in the worst possible way. In the end, the second reduction generated fractional solutions that were neither feasible nor superoptimal, which really isn’t helpful.
## Direct linear optimisation with experts
The reduction above works for any “simple” domain, as long as it’s convex and we can solve the subproblems, i.e., find a point in the intersection of the simple domain and a single linear constraint or determine that the intersection is empty.
The set of (super)optimal points in some initial simple domain is still convex, so we could restrict our search to the part of the domain that is superoptimal for the linear program we wish to optimise, and directly reduce optimisation to the feasibility problem solved in the last section, without binary search.
That sounds silly at first: how can we find solutions that are superoptimal when we don’t even know the optimal value?
Remember that the subproblems are always relaxations of the original linear program. We can port the objective function from the original LP over to the subproblems, and optimise the relaxations. Any solution that’s optimal for a relaxation must have an optimal or superoptimal value for the original LP.
Rather than treating the black box online solver as a generator of Lagrangian dual vectors, we’re using its weights as solutions to the surrogate relaxation dual. The latter interpretation isn’t just more powerful because it handles objective functions. It also makes more sense: the weights generated by algorithms for the experts problem are probabilities, i.e., they’re non-negative and sum to $$1$$. That’s also what’s expected for surrogate dual vectors, but definitely not the case for Lagrangian dual vectors, even when restricted to $$\leq$$ constraints.
We can do even better!
Unlike Lagrangian dual solvers, which only converge when fed (approximate) subgradients and thus require us to find (nearly) optimal solutions to the relaxed subproblems, our reduction to the experts problem only needs feasible solutions to the subproblems. That’s all we need to guarantee an $$\varepsilon-$$feasible solution to the initial problem in a bounded number of iterations. We also know exactly how that $$\varepsilon-$$feasible solution is generated: it’s the arithmetic mean of the solutions for relaxed subproblems.
This lets us decouple finding lower bounds from generating feasible solutions that will, on average, $$\varepsilon-$$satisfy the original LP. In practice, the search for an $$\varepsilon-$$feasible solution that is also superoptimal will tend to improve the lower bound. However, nothing forces us to evaluate lower bounds synchronously, or to only use the experts problem solver to improve our bounds.
We can find a new bound from any vector of non-negative constraint weights: they always yield a valid surrogate relaxation. We can solve that relaxation, and update our best bound when it’s improved. The Diet subproblem earlier had
$27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,$ $0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.$
Adding the original objective function back yields the linear program
$\min 0.18 x\sb{\mathrm{corn}} + 0.23 x\sb{\mathrm{milk}} + 0.05 x\sb{\mathrm{bread}}$ subject to $27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}} \leq 437.5,$ $0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10,$
which has a trivial optimal solution at $$[0, 0, 0]$$.
When we generate a feasible solution for the same subproblem, we can use any valid bound on the objective value to find the most feasible solution that is also assuredly (super)optimal. For example, if some oracle has given us a lower bound of $$2$$ for the original Diet problem, we can solve for
$\min 27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}}$ subject to $0.18 x\sb{\mathrm{corn}} + 0.23 x\sb{\mathrm{milk}} + 0.05 x\sb{\mathrm{bread}}\leq 2$ $0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.$
We can relax the objective value constraint further, since we know that the final $$\varepsilon-$$feasible solution is a simple arithmetic mean. Given the same best bound of $$2$$, and, e.g., a current average of $$3$$ solutions with a value of $$1.9$$, a new solution with an objective value of $$2.3$$ (more than our best bound, so not necessarily optimal!) would yield a new average solution with a value of $$2$$, which is still (super)optimal. This means we can solve the more relaxed subproblem
$\min 27.25 x\sb{\mathrm{corn}} - 9.25 x\sb{\mathrm{milk}} + 48.75 x\sb{\mathrm{bread}}$ subject to $0.18 x\sb{\mathrm{corn}} + 0.23 x\sb{\mathrm{milk}} + 0.05 x\sb{\mathrm{bread}}\leq 2.3$ $0 \leq x\sb{\mathrm{corn}},\, x\sb{\mathrm{milk}},\, x\sb{\mathrm{bread}} \leq 10.$
Given a bound on the objective value, we swapped the constraint and the objective; the goal is to maximise feasibility, while generating a solution that’s “good enough” to guarantee that the average solution is still (super)optimal.
For box-constrained linear programs where the box is the convex domain, subproblems are bounded linear knapsacks, so we can simply stop the greedy algorithm as soon as the objective value constraint is satisfied, or when the knapsack constraint becomes active (we found a better bound).
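For illustration, here is what that kind of greedy looks like on a plain bounded fractional knapsack (maximise value under one capacity constraint, each item available up to a bound); the early-exit threshold stands in for the "good enough" objective test, and the names and framing are a simplification rather than the post's exact subproblem:

```python
def greedy_bounded_knapsack(values, weights, bounds, capacity, good_enough=None):
    """Greedy fractional knapsack: take items by decreasing value/weight ratio
    (weights assumed positive), stopping early once the value target is reached
    or the capacity constraint becomes tight."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    x = [0.0] * len(values)
    total = 0.0
    for i in order:
        if good_enough is not None and total >= good_enough:
            break                                    # good enough: no need to fill further
        take = min(bounds[i], capacity / weights[i])
        x[i] = take
        total += take * values[i]
        capacity -= take * weights[i]
        if capacity <= 0.0:
            break                                    # the knapsack constraint is active
    return x, total
```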
This last tweak doesn’t just accelerate convergence to $$\varepsilon-$$feasible solutions. More importantly for me, it pretty much guarantees that our $$\varepsilon-$$feasible solution matches the best known lower bound, even if that bound was provided by an outside oracle. Bundle methods and the Volume algorithm can also mix solutions to relaxed subproblems in order to generate $$\varepsilon-$$feasible solutions, but the result lacks the last guarantee: their fractional solutions are even more superoptimal than the best bound, and that can make bounding and variable fixing difficult.
Before last Christmas’s CVRP set covering LP, I had always used the multiplicative weight update (MWU) algorithm as my black box online learning algorithm: it wasn’t great, but I couldn’t find anything better. The two main downsides for me were that I had to know a “width” parameter ahead of time, as well as the number of iterations I wanted to run.
The width is essentially the range of the payoffs; in our case, the potential level of violation or satisfaction of each constraints by any solution to the relaxed subproblems. The dependence isn’t surprising: folklore in Lagrangian relaxation also says that’s a big factor there. The problem is that the most extreme violations and satisfactions are initialisation parameters for the MWU algorithm, and the iteration count for a given $$\varepsilon$$ is quadratic in the width ($$\mathrm{max}\sb{violation} \cdot \mathrm{max}\sb{satisfaction}$$).
What’s even worse is that the MWU is explicitly tuned for a specific iteration count. If I estimate that, given my worst-case width estimate, one million iterations will be necessary to achieve $$\varepsilon-$$feasibility, MWU tuned for 1M iterations will need 1M iterations, even if the actual width is narrower.
de Rooij and others published AdaHedge in 2013, an algorithm that addresses both these issues by smoothly estimating its parameter over time, without using the doubling trick.4 AdaHedge’s loss (convergence rate to an $$\varepsilon-$$solution) still depends on the relaxation’s width. However, it depends on the maximum width actually observed during the solution process, and not on any explicit worst-case bound. It’s also not explicitly tuned for a specific iteration count, and simply keeps improving at a rate that roughly matches MWU. If the instance happens to be easy, we will find an $$\varepsilon-$$feasible solution more quickly. In the worst case, the iteration count is never much worse than that of an optimally tuned MWU.
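Roughly, and as a sketch of my reading of de Rooij et al. rather than the post's Common Lisp implementation: AdaHedge sets the next learning rate to ln(n) divided by the cumulative "mixability gap" observed so far, so the rate only shrinks as fast as the data forces it to.

```python
import numpy as np

def adahedge_round(cum_loss, gap, loss):
    """One AdaHedge round: play weights derived from (cum_loss, gap), observe a loss
    vector, and return the updated state. Start with cum_loss = zeros(n), gap = 0.0."""
    n = len(cum_loss)
    if gap <= 0.0:
        w = (cum_loss == cum_loss.min()).astype(float)   # degenerate start: follow the leader(s)
        w /= w.sum()
        mix_loss = loss[w > 0].min()
    else:
        eta = np.log(n) / gap                            # the adaptive learning rate
        w = np.exp(-eta * (cum_loss - cum_loss.min()))   # shifted for numerical stability
        w /= w.sum()
        mix_loss = loss.min() - np.log(w @ np.exp(-eta * (loss - loss.min()))) / eta
    hedge_loss = w @ loss
    gap += max(0.0, hedge_loss - mix_loss)               # the mixability gap is non-negative
    return w, cum_loss + loss, gap
```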
These 400 lines of Common Lisp implement AdaHedge and use it to optimise the set covering LP. AdaHedge acts as the online black box solver for the surrogate dual problem, the relaxed set covering LP is a linear knapsack, and each subproblem attempts to improve the lower bound before maximising feasibility.
When I ran the code, I had no idea how long it would take to find a feasible enough solution: covering constraints can never be violated by more than $$1$$, but some points could be covered by hundreds of tours, so the worst case satisfaction width is high. I had to rely on the way AdaHedge adapts to the actual hardness of the problem. In the end, $$34492$$ iterations sufficed to find a solution that was $$4.5\%$$ infeasible.5 This corresponds to a worst case with a width of less than $$2$$, which is probably not what happened. It seems more likely that the surrogate dual isn’t actually an omniscient adversary, and AdaHedge was able to exploit some of that “easiness.”
The iterations themselves are also reasonable: one sparse matrix / dense vector multiplication to convert surrogate dual weights to an average constraint, one solve of the relaxed LP, and another sparse matrix / dense vector multiplication to compute violations for each constraint. The relaxed LP is a fractional $$[0, 1]$$ knapsack, so the bottleneck is sorting double floats. Each iteration took 1.8 seconds on my old laptop; I’m guessing that could easily be 10-20 times faster with vectorisation and parallelisation.
In another post, I’ll show how using the same surrogate dual optimisation algorithm to mimic Lagrangian decomposition instead of Lagrangian relaxation guarantees an iteration count in $$\mathcal{O}\left(\frac{\lg \#\mathrm{nonzero}}{\varepsilon\sp{2}}\right)$$ independently of luck or the specific linear constraints.
1. Yes, I have been banging my head against that wall for a while.
2. This is equivalent to minimising expected loss with random bits, but cleans up the reduction.
3. When was the last time you had to worry whether that log was natural or base-2?
4. The doubling trick essentially says to start with an estimate for some parameters (e.g., width), then adjust it to at least double the expected iteration count when the parameter’s actual value exceeds the estimate. The sum telescopes and we only pay a constant multiplicative overhead for the dynamic update.
5. I think I computed the $$\log$$ of the number of decision variables instead of the number of constraints, so maybe this could have gone a bit better.
#### Lispers.de — Berlin Lispers Meetup, Monday, 15th April 2019
· 77 days ago
We meet again on Monday 8pm, 15th April. Our host this time is James Anderson (www.dydra.com).
Berlin Lispers is about all flavors of Lisp including Emacs Lisp, Common Lisp, Clojure, Scheme.
We will have two talks this time.
Hans Hübner will tell us about "Reanimating VAX LISP - A CLtL1 implementation for VAX/VMS".
And Ingo Mohr will continue his talk "About the Unknown East of the Ancient LISP World. History and Thoughts. Part II: Eastern Common LISP and a LISP Machine."
We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.
#### Didier Verna — Quickref 2.0 "Be Quick or Be Dead" is released
· 78 days ago
Surfing on the energizing wave of ELS 2019, the 12th European Lisp Symposium, I'm happy to announce the release of Quickref 2.0, codename "Be Quick or Be Dead".
The major improvement in this release, justifying an increment of the major version number (and the very appropriate codename), is the introduction of parallel algorithms for building the documentation. I presented this work last week in Genova so I won't go into the gory details here, but for the brave and impatient, let me just say that using the parallel implementation is just a matter of calling the BUILD function with :parallel t :declt-threads x :makeinfo-threads y (adjust x and y as you see fit, depending on your architecture).
The second featured improvement is the introduction of an author index, in addition to the original one. The author index is still a bit shaky, mostly due to technical problems (calling asdf:find-system almost two thousand times simply doesn't work) and also to the very creative use that some library authors have of the ASDF author and maintainer slots in the system descriptions. It does, however, do a quite decent job for the majority of the authors and their libraries' reference manuals.
Finally, the repository now has a fully functional continuous integration infrastructure, which means that there shouldn't be any more lag between new Quicklisp (or Quickref) releases and new versions of the documentation website.
Thanks to Antoine Hacquard, Antoine Martin, and Erik Huelsmann for their contribution to this release! A lot of new features are already in the pipe. Currently documenting 1720 libraries, and counting...
#### Lispers.de — Lisp-Meetup in Hamburg on Monday, 1st April 2019
· 88 days ago
We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 1st April 2019.
This is an informal gathering of Lispers. Svante will talk a bit about the implementation of lispers.de. You are invited to bring your own topics.
#### Lispers.de — Berlin Lispers Meetup, Monday, 25th March 2019
· 97 days ago
We meet again on Monday 8pm, 25th March. Our host this time is James Anderson (www.dydra.com).
Berlin Lispers is about all flavors of Lisp including Common Lisp, Scheme, Dylan, Clojure.
We will have a talk this time. Ingo Mohr will tell us "About the Unknown East of the Ancient LISP World. History and Thoughts. Part I: LISP on Punchcards".
We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.
#### Quicklisp news — March 2019 Quicklisp dist update now available
· 110 days ago
New projects:
• bobbin — Simple (word) wrapping utilities for strings. — MIT
• cl-mango — A minimalist CouchDB 2.x database client. — BSD3
• cl-netpbm — Common Lisp support for reading/writing the netpbm image formats (PPM, PGM, and PBM). — MIT/X11
• cl-skkserv — skkserv with Common Lisp — GPLv3
• cl-torrents — This is a little tool for the lisp REPL or the command line (also with a readline interactive prompt) to search for torrents and get magnet links — MIT
• common-lisp-jupyter — A Common Lisp kernel for Jupyter along with a library for building Jupyter kernels. — MIT
• conf — Simple configuration file manipulator for projects. — GNU General Public License v3.0
• eventbus — An event bus in Common Lisp. — GPLv3
• open-location-code — Open Location Code library. — Modified BSD License
• piggyback-parameters — This is a configuration system that supports local file and database based parameter storage. — MIT
• quilc — A CLI front-end for the Quil compiler — Apache License 2.0 (See LICENSE.txt)
• qvm — An implementation of the Quantum Abstract Machine. — Apache License 2.0 (See LICENSE.txt)
• restricted-functions — Reasoning about functions with restricted argument types. — MIT
• simplet — Simple test runner in Common Lisp. — GPLv3
• skeleton-creator — Create projects from a skeleton directory. — GPLv3
• solid-engine — The Common Lisp stack-based application controller — MIT
• spell — Spellchecking package for Common Lisp — BSD
• trivial-continuation — Provides an implementation of function call continuation and combination. — MIT
• trivial-hashtable-serialize — A simple method to serialize and deserialize hash-tables. — MIT
• trivial-json-codec — A JSON parser able to identify class hierarchies. — MIT
• trivial-monitored-thread — Trivial Monitored Thread offers a very simple (aka trivial) way of spawning threads and being informed when one any of them crash and die. — MIT
• trivial-object-lock — A simple method to lock object (and slot) access. — MIT
• trivial-pooled-database — A DB multi-threaded connection pool. — MIT
• trivial-timer — Easy scheduling of tasks (functions). — MIT
• trivial-variable-bindings — Offers a way to handle associations between a place-holder (aka. variable) and a value. — MIT
• ucons — Unique conses and functions for working on them. — MIT
• wordnet — Common Lisp interface to WordNet — CC-BY 4.0
Removed projects: mgl, mgl-mat.
To get this update, use: (ql:update-dist "quicklisp")
Enjoy!
#### Lispers.de — Lisp-Meetup in Hamburg on Monday, 4th March 2019
· 118 days ago
We meet at Ristorante Opera, Dammtorstraße 7, Hamburg, starting around 19:00 CET on 4th March 2019.
This is an informal gathering of Lispers. Come as you are, bring lispy topics.
#### Paul Khuong — The Unscalable, Deadlock-prone, Thread Pool
· 119 days ago
Epistemic Status: I’ve seen thread pools fail this way multiple times, am confident the pool-per-state approach is an improvement, and have confirmed with others they’ve also successfully used it in anger. While I’ve thought about this issue several times over ~4 years and pool-per-state seems like a good fix, I’m not convinced it’s undominated and hope to hear about better approaches.
Thread pools tend to only offer a sparse interface: pass a closure or a function and its arguments to the pool, and that function will be called, eventually.1 Functions can do anything, so this interface should offer all the expressive power one could need. Experience tells me otherwise.
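Concretely, that sparse interface looks something like this in Common Lisp, with lparallel standing in as one pool implementation (the kernel size and the task are arbitrary):

```lisp
(ql:quickload "lparallel")
(setf lparallel:*kernel* (lparallel:make-kernel 8)) ; 8 generic workers

;; The whole "sparse interface": hand the pool a function and its arguments,
;; and that function will be called... eventually.
(let ((channel (lparallel:make-channel)))
  (lparallel:submit-task channel #'expt 2 10)
  (lparallel:receive-result channel)) ; => 1024
```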
The standard pool interface is so impoverished that it is nearly impossible to use correctly in complex programs, and leads us down design dead-ends. I would actually argue it’s better to work with raw threads than to even have generic amorphous thread pools: the former force us to stop and think about resource requirements (and lets the OS’s real scheduler help us along), instead of making us pretend we only care about CPU usage. I claim thread pools aren’t scalable because, with the exception of CPU time, they actively hinder the development of programs that achieve high resource utilisation.
This post comes in two parts. First, the story of a simple program that’s parallelised with a thread pool, then hits a wall as a wider set of resources becomes scarce. Second, a solution I like for that kind of program: an explicit state machine, where each state gets a dedicated queue that is aware of the state’s resource requirements.
## Stages of parallelisation
We start with a simple program that processes independent work units: a serial loop that pulls in work (e.g., files in a directory) or waits for requests on a socket, one work unit at a time.
The 80s saw a lot of research on generalising this “flat” parallelism model to nested parallelism, where work units can spawn additional requests and wait for the results (e.g., to recursively explore sub-branches of a search tree). Nested parallelism seems like a good fit for contemporary network services: we often respond to a request by sending simpler requests downstream, before merging and munging the responses and sending the result back to the original requestor. That may be why futures and promises are so popular these days.
I believe that, for most programs, the futures model is an excellent answer to the wrong question. The moment we perform I/O (be it network, disk, or even with hardware accelerators) in order to generate a result, running at scale will have to mean controlling more resources than just CPU, and both the futures and the generic thread pool models fall short.
The issue is that futures only work well when a waiter can help along the value it needs, with task stealing, while thread pools implement a trivial scheduler (dedicate a thread to a function until that function returns) that must be oblivious to resource requirements, since it handles opaque functions.
Once we have futures that might be blocked on I/O, we can’t guarantee a waiter will achieve anything by lending CPU time to its children. We could help sibling tasks, but that way stack overflows lie.
The deficiency of flat generic thread pools is more subtle. Obviously, one doesn’t want to take a tight thread pool, with one thread per core, and waste it on synchronous I/O. We’ll simply kick off I/O asynchronously, and re-enqueue the continuation on the pool upon completion!
Instead of running

```
A, I/O, B
```

in one function, we'll split the work into two functions and a callback:

```
A, initiate asynchronous I/O
On I/O completion: enqueue B in thread pool
B
```
The problem here is that it's easy to create too many asynchronous requests, and run out of memory, DOS the target, or delay the rest of the computation for too long. As soon as the I/O request has been initiated in A, the function returns to the thread pool, which will just execute more instances of A and initiate even more I/O.
At first, when the program doesn’t heavily utilise any resource in particular, there’s an easy solution: limit the total number of in-flight work units with a semaphore. Note that I wrote work unit, not function calls. We want to track logical requests that we started processing, but for which there is still work to do (e.g., the response hasn’t been sent back yet).
I’ve seen two ways to cap in-flight work units. One’s buggy, the other doesn’t generalise.
The buggy implementation acquires a semaphore in the first stage of request handling (A) and releases it in the last stage (B). The bug is that, by the time we’re executing A, we’re already using up a slot in the thread pool, so we might be preventing Bs from executing. We have a lock ordering problem: A acquires a thread pool slot before acquiring the in-flight semaphore, but B needs to acquire a slot before releasing the same semaphore. If you’ve seen code that deadlocks when the thread pool is too small, this was probably part of the problem.
The correct implementation acquires the semaphore before enqueueing a new work unit, before shipping a call to A to the thread pool (and releases it at the end of processing, in B). This only works because we can assume that the first thing A does is to acquire the semaphore. As our code becomes more efficient, we’ll want to more finely track the utilisation of multiple resources, and pre-acquisition won’t suffice. For example, we might want to limit network requests going to individual hosts, independently from disk reads or writes, or from database transactions.
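A sketch of the shape this takes in Common Lisp, assuming lparallel for the pool and a bordeaux-threads version with semaphore support; `stage-a`, `stage-b`, `start-async-io`, and `finish-work-unit` are hypothetical placeholders:

```lisp
(ql:quickload '("lparallel" "bordeaux-threads"))

(setf lparallel:*kernel* (lparallel:make-kernel 8))
(defparameter *channel* (lparallel:make-channel))
(defparameter *in-flight* (bt:make-semaphore :count 128)) ; hard cap on live work units

(defun submit-work-unit (unit)
  ;; Acquire the in-flight slot *before* taking a pool slot, so no worker
  ;; ever sits in the pool blocked on the semaphore.
  (bt:wait-on-semaphore *in-flight*)
  (lparallel:submit-task *channel* #'stage-a unit))

(defun stage-a (unit)
  ;; Kick off asynchronous I/O; the completion callback re-enqueues stage B.
  (start-async-io unit
                  (lambda (result)
                    (lparallel:submit-task *channel* #'stage-b result))))

(defun stage-b (result)
  (unwind-protect
       (finish-work-unit result)
    ;; Only the last stage releases the slot: the logical work unit is done.
    (bt:signal-semaphore *in-flight*)))
```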
The core issue with thread pools is that the only thing they can do is run opaque functions in a dedicated thread, so the only way to reserve resources is to already be running in a dedicated thread. However, the one resource that every function needs is a thread on which to run, thus any correct lock order must acquire the thread last.
We care about reserving resources because, as our code becomes more efficient and scales up, it will start saturating resources that used to be virtually infinite. Unfortunately, classical thread pools can only control CPU usage, and actively hinder correct resource throttling. If we can’t guarantee we won’t overwhelm the supply of a given resource (e.g., read IOPS), we must accept wasteful overprovisioning.
Once the problem has been identified, the solution becomes obvious: make sure the work we push to thread pools describes the resources to acquire before running the code in a dedicated thread.
My favourite approach assigns one global thread pool (queue) to each function or processing step. The arguments to the functions will change, but the code is always the same, so the resource requirements are also well understood. This does mean that we incur complexity to decide how many threads or cores each pool is allowed to use. However, I find that the resulting programs are better understandable at a high level: it’s much easier to write code that traverses and describes the work waiting at different stages when each stage has a dedicated thread pool queue. They’re also easier to model as queueing systems, which helps answer “what if?” questions without actually implementing the hypothesis.
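One way to sketch that in Common Lisp, again with lparallel (the stage names and worker counts are invented; a real program would size them from measurements):

```lisp
;; One kernel (worker pool + queue) per processing step, each sized for the
;; resource that step actually consumes.
(defparameter *parse-kernel*  (lparallel:make-kernel 8  :name "parse"))   ; CPU-bound
(defparameter *db-kernel*     (lparallel:make-kernel 4  :name "db"))      ; matches the DB pool
(defparameter *upload-kernel* (lparallel:make-kernel 16 :name "upload"))  ; network-bound

(defun submit-to (kernel function &rest args)
  "Enqueue FUNCTION on the stage-specific KERNEL."
  (let ((lparallel:*kernel* kernel))
    ;; A throwaway channel per submission keeps the sketch short; a real
    ;; implementation would keep one channel per stage to collect results.
    (apply #'lparallel:submit-task (lparallel:make-channel) function args)))
```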
In increasing order of annoyingness, I’d divide resources to acquire in four classes.
1. Resources that may be seamlessly3 shared or timesliced, like CPU.
2. Resources that are acquired for the duration of a single function call or processing step, like DB connections.
3. Resources that are acquired in one function call, then released in another thread pool invocation, like DB transactions, or asynchronous I/O semaphores.
4. Resources that may only be released after temporarily using more of it, or by cancelling work: memory.
We don’t really have to think about the first class of resources, at least when it comes to correctness. However, repeatedly running the same code on a given core tends to improve performance, compared to running all sorts of code on all cores.
The second class of resources may be acquired once our code is running in a thread pool, so one could pretend it doesn't exist. However, it is more efficient to batch acquisition, and execute a bunch of calls that all need a given resource (e.g., a DB connection from a connection pool) before releasing it, instead of repetitively acquiring and releasing the same resource in back-to-back function calls, or blocking multiple workers on the same bottleneck.4 More importantly, the property of always being acquired and released in the same function invocation is a global one: as soon as even one piece of code acquires a given resource and releases it in another thread pool call (e.g., acquires a DB connection, initiates an asynchronous network call, writes the result of the call to the DB, and releases the connection), we must always treat that resource as being in the third, more annoying, class. Having explicit stages with fixed resource requirements helps us confirm resources are classified correctly.
The third class of resources must be acquired in a way that preserves forward progress in the rest of the system. In particular, we must never have all workers waiting for resources of this third class. In most cases, it suffices to make sure there are at least as many workers as there are queues or stages, and to only let each stage run the initial resource acquisition code in one worker at a time. However, it can pay off to be smart when different queued items require different resources, instead of always trying to satisfy resource requirements in FIFO order.
The fourth class of resources is essentially heap memory. Memory is special because the only way to release it is often to complete the computation. However, moving the computation forward will use even more heap. In general, my only solution is to impose a hard cap on the total number of in-flight work units, and to make sure it’s easy to tweak that limit at runtime, in disaster scenarios. If we still run close to the memory capacity with that limit, the code can either crash (and perhaps restart with a lower in-flight cap), or try to cancel work that’s already in progress. Neither option is very appealing.
There are some easier cases. For example, I find that temporary bumps in heap usage can be caused by parsing large responses from idempotent (GET) requests. It would be nice if networking subsystems tracked memory usage to dynamically throttle requests, or even cancel and retry idempotent ones.
Once we’ve done the work of explicitly writing out the processing steps in our program as well as their individual resource requirements, it makes sense to let that topology drive the structure of the code.
Over time, we’ll gain more confidence in that topology and bake it in our program to improve performance. For example, rather than limiting the number of in-flight requests with a semaphore, we can have a fixed-size allocation pool of request objects. We can also selectively use bounded ring buffers once we know we wish to impose a limit on queue size. Similarly, when a sequence (or subgraph) of processing steps is fully synchronous or retires in order, we can control both the queue size and the number of in-flight work units with a disruptor, which should also improve locality and throughput under load. These transformations are easy to apply once we know what the movement of data and resource looks like. However, they also ossify the structure of the program, so I only think about such improvements if they provide a system property I know I need (e.g., a limit on the number of in-flight requests), or once the code is functional and we have load-testing data.
Complex programs are often best understood as state machines. These state machines can be implicit, or explicit. I prefer the latter. I claim that it’s also preferable to have one thread pool5 per explicit state than to dump all sorts of state transition logic in a shared pool. If writing functions that process flat tables is data-oriented programming, I suppose I’m arguing for data-oriented state machines.
1. Convenience wrappers, like parallel map, or “run after this time,” still rely on the flexibility of opaque functions.
2. Maybe we decided to use threads because there’s a lot of shared, read-mostly, data on the heap. It doesn’t really matter, process pools have similar problems.
3. Up to a point, of course. No model is perfect, etc. etc.
4. Explicit resource requirements combined with one queue per stage lets us steal ideas from SEDA.
5. One thread pool per state in the sense that no state can fully starve out another of CPU time. The concrete implementation may definitely let a shared set of workers pull from all the queues.
#### Christophe Rhodes — sbcl 1 5 0
· 120 days ago
Today, I released sbcl-1.5.0 - with no particular reason for the minor version bump except that when the patch version (we don't in fact do semantic versioning) gets large it's hard to remember what I need to type in the release script. In the 17 versions (and 17 months) since sbcl-1.4.0, there have been over 2900 commits - almost all by other people - fixing user-reported, developer-identified, and random-tester-lesser-spotted bugs; providing enhancements; improving support for various platforms; and making things faster, smaller, or sometimes both.
It's sometimes hard for developers to know what their users think of all of this furious activity. It's definitely hard for me, in the case of SBCL: I throw releases over the wall, and sometimes people tell me I messed up (but usually there is a resounding silence). So I am running a user survey, where I invite you to tell me things about your use of SBCL. All questions are optional: if something is personally or commercially sensitive, please feel free not to tell me about it! But it's nine years since the last survey (that I know of), and much has changed since then - I'd be glad to hear any information SBCL users would be willing to provide. I hope to collate the information in late March, and report on any insight I can glean from the answers.
#### Chaitanya Gupta — LOAD-TIME-VALUE and prepared queries in Postmodern
· 134 days ago
The Common Lisp library Postmodern defines a macro called PREPARE that creates prepared statements for a PostgreSQL connection. It takes a SQL query with placeholders ($1, $2, etc.) as input and returns a function which takes one argument for every placeholder and executes the query.
The first time I used it, I did something like this:
```lisp
(defun run-query (id)
  (funcall (prepare "SELECT * FROM foo WHERE id = $1") id))
```

Soon after, I realized that running this function every time would generate a new prepared statement instead of re-using the old one. Let's look at the macro expansion:

```lisp
(macroexpand-1 '(prepare "SELECT * FROM foo WHERE id = $1"))
==>
(LET ((POSTMODERN::STATEMENT-ID (POSTMODERN::NEXT-STATEMENT-ID))
      (QUERY "SELECT * FROM foo WHERE id = $1"))
  (LAMBDA (&REST POSTMODERN::PARAMS)
    (POSTMODERN::ENSURE-PREPARED *DATABASE* POSTMODERN::STATEMENT-ID QUERY)
    (POSTMODERN::ALL-ROWS
     (CL-POSTGRES:EXEC-PREPARED *DATABASE* POSTMODERN::STATEMENT-ID
                                POSTMODERN::PARAMS
                                'CL-POSTGRES:LIST-ROW-READER))))
T
```

ENSURE-PREPARED checks if a statement with the given statement-id exists for the current connection. If yes, it will be re-used, else a new one is created with the given query.

The problem is that the macro generates a new statement id every time it is run. This was a bit surprising, but the fix was simple: capture the function returned by PREPARE once, and use that instead.

```lisp
(defparameter *prepared* (prepare "SELECT * FROM foo WHERE id = $1"))

(defun run-query (id)
  (funcall *prepared* id))
```
You can also use Postmodern's DEFPREPARED instead, which similarly defines a new function at the top-level.
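For instance, with the running example (a sketch; :rows is one of Postmodern's standard result formats):

```lisp
;; DEFPREPARED defines SELECT-FOO-BY-ID as a top-level function that runs
;; the prepared statement.
(postmodern:defprepared select-foo-by-id
    "SELECT * FROM foo WHERE id = $1"
  :rows)

(defun run-query (id)
  (select-foo-by-id id))
```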
This works well, but now we are using top-level forms instead of the nicely encapsulated single form we used earlier.
To fix this, we can use LOAD-TIME-VALUE.
```lisp
(defun run-query (id)
  (funcall (load-time-value (prepare "SELECT * FROM foo WHERE id = $1")) id))
```

LOAD-TIME-VALUE is a special operator that

1. Evaluates the form in the null lexical environment
2. Delays evaluation of the form until load time
3. If compiled, it ensures that the form is evaluated only once

By wrapping PREPARE inside LOAD-TIME-VALUE, we get back our encapsulation while ensuring that a new prepared statement is generated only once (per connection), until the next time RUN-QUERY is recompiled.

## Convenience

To avoid the need to wrap PREPARE every time, we can create a convenience macro and use that instead:

```lisp
(defmacro prepared-query (query &optional (format :rows))
  `(load-time-value (prepare ,query ,format)))

(defun run-query (id)
  (funcall (prepared-query "SELECT * FROM foo WHERE id = $1") id))
```
## Caveats
This only works for compiled code. As mentioned earlier, the form wrapped inside LOAD-TIME-VALUE is evaluated once only if you compile it. If uncompiled, it is evaluated every time so this solution will not work there.
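If you are experimenting at a REPL on an implementation that does not compile definitions by default, you can force the issue explicitly (a minimal illustration):

```lisp
(compile 'run-query) ; now the LOAD-TIME-VALUE form is evaluated just once
```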
Another thing to remember about LOAD-TIME-VALUE is that the form is evaluated in the null lexical environment. So the form cannot use any lexically scoped variables like in the example below:
```lisp
(defun run-query (table id)
  (funcall (load-time-value
            (prepare (format nil "SELECT * FROM ~A WHERE id = $1" table)))
           id))
```

Evaluating this will signal that the variable TABLE is unbound.

#### Wimpie Nortje — Be careful with Ironclad in multi-threaded applications.

· 138 days ago

## Update

Thanks to eadmund and glv2 the issue described in this article is now fixed and documented clearly. The fixed version of Ironclad should find its way into the Quicklisp release soon. Note that there are other objects in Ironclad which are still not thread-safe. Refer to the documentation on how to handle them.

Whenever you write a program that uses cryptographic tools you will use cryptographically secure random numbers. Since most people never write security-related software they may be surprised to learn how often they are in this situation. Cryptographically secure pseudo-random number generators (PRNGs) are a core building block in cryptographic algorithms, which include things like hashing algorithms and generation algorithms for random identifiers with a low probability of repetition. The two main uses are to securely store hashed passwords (e.g. PBKDF2, bcrypt, scrypt) and to generate random UUIDs. Most web applications with user accounts fall into this category, and so does much other non-web software.

If your program falls into this group you are almost certainly using Ironclad. The library tries hard to be easy to use even for those without cryptography knowledge. To that end it uses a global PRNG instance with a sensible setting for each particular target OS and expects that most users should never bother to learn about PRNGs.

The Ironclad documentation is clear: don't change the default PRNG! First, "You should very rarely need to call make-prng; the default OS-provided PRNG should be appropriate in nearly all cases." And then, "You should only use the Fortuna PRNG if your OS does not provide a sufficiently-good PRNG. If you use a Unix or Unix-like OS (e.g. Linux), macOS or Windows, it does." These two quotes are sufficient to discourage any idle experimentation with PRNG settings, especially if you only want to get the password hashed and move on.

The ease of use comes to a sudden stop if you try to use PRNGs in a threaded application on CCL. The first thread works fine but all others raise error conditions about streams being private to threads. On SBCL the problem is much worse. No error is signaled and everything appears to work, but the PRNG frequently returns repeated "random" numbers. These repeated numbers may never be detected if they are only used for password hashing. If, however, you use random UUIDs you may from time to time get duplicates, which will cause havoc in any system expecting objects to have unique identifiers. It will also be extremely difficult to find the cause of the duplicate IDs.

How often do people write multi-threaded CL programs? Very often. By default Hunchentoot handles each HTTP request in its own thread.

The cause of this problem is that Ironclad's default PRNG, :OS, is not implemented to be thread safe. This is the case on Unix where it is a stream to /dev/urandom. I have not checked the thread-safety on Windows where it uses CryptGenRandom.

## Solutions

There exists a bug report for Ironclad about the issue but it won't be fixed. Two options to work around the issue are:

1. Change the global *PRNG* to Fortuna:

   ```lisp
   (setf ironclad:*PRNG* (ironclad:make-prng :fortuna))
   ```

   Advantage: It is quick to implement and it appears to be thread safe.

   Disadvantage: :FORTUNA is much slower than :OS.
2. Use a thread-local instance of :OS:

   ```lisp
   (make-thread (lambda ()
                  (let ((ironclad:*PRNG* (ironclad:make-prng :os)))
                    (use-prng))))
   ```

   Advantage: :OS is significantly faster than :FORTUNA. It is also Ironclad's recommended PRNG.

   Disadvantages: When the PRNG is only initialized where needed it is easy to miss places where it should be initialized. When the PRNG is initialized in every thread it causes unnecessary processing overhead in threads where it is not used.

## Summary

It is not safe to use Ironclad-dependent libraries in multi-threaded programs with the default PRNG instance. On SBCL it may appear to work but you will eventually run into hard-to-debug problems with duplicate "random" numbers. On CCL the situation is better because it will signal an error.

#### Quicklisp news — February 2019 Quicklisp dist update now available

· 142 days ago

New projects:

• async-process — asynchronous process execution for common lisp — MIT
• atomics — Portability layer for atomic operations like compare-and-swap (CAS). — Artistic
• game-math — Math for game development. — MIT
• generic-cl — Standard Common Lisp functions implemented using generic functions. — MIT
• simplified-types — Simplification of Common Lisp type specifiers. — MIT
• sn.man — stub man launcher.it should be a man parser. — mit

Updated projects: agutil, also-alsa, antik, april, cerberus, chipz, chronicity, cl+ssl, cl-all, cl-async, cl-collider, cl-dbi, cl-emb, cl-environments, cl-fluent-logger, cl-glfw3, cl-json-pointer, cl-las, cl-markless, cl-patterns, cl-readline, cl-rules, cl-sat, cl-sat.glucose, cl-sat.minisat, cl-sdl2-image, cl-syslog, cl-tiled, cl-who, clack, closer-mop, clss, commonqt, cover, croatoan, dexador, easy-audio, easy-bind, eazy-project, erudite, fast-websocket, gendl, glsl-toolkit, golden-utils, graph, jonathan, jp-numeral, kenzo, lichat-tcp-server, listopia, literate-lisp, local-time, ltk, mcclim, nodgui, overlord, petalisp, petri, pgloader, phoe-toolbox, pngload, postmodern, qmynd, qt-libs, qtools, qtools-ui, query-fs, remote-js, replic, rpcq, s-xml-rpc, safety-params, sc-extensions, serapeum, shadow, should-test, sly, static-dispatch, stumpwm, sucle, time-interval, trivia, trivial-clipboard, trivial-utilities, type-r, utility, vernacular, with-c-syntax, wuwei.

To get this update, use (ql:update-dist "quicklisp")

Enjoy!

#### Didier Verna — Final call for papers: ELS 2019, 12th European Lisp Symposium

· 144 days ago

ELS'19 - 12th European Lisp Symposium
Hotel Bristol Palace
Genova, Italy
April 1-2 2019

In cooperation with: ACM SIGPLAN
In co-location with <Programming> 2019
Sponsored by EPITA and Franz Inc.
http://www.european-lisp-symposium.org/

Recent news:
- Submission deadline extended to Friday February 8.
- Keynote abstracts now available.
- <Programming> registration now open: https://2019.programming-conference.org/attending/Registration
- Student refund program after the conference.

The purpose of the European Lisp Symposium is to provide a forum for the discussion and dissemination of all aspects of design, implementation and application of any of the Lisp and Lisp-inspired dialects, including Common Lisp, Scheme, Emacs Lisp, AutoLisp, ISLISP, Dylan, Clojure, ACL2, ECMAScript, Racket, SKILL, Hop and so on. We encourage everyone interested in Lisp to participate.

The 12th European Lisp Symposium invites high quality papers about novel research results, insights and lessons learned from practical applications and educational perspectives. We also encourage submissions about known ideas as long as they are presented in a new setting and/or in a highly elegant way.
Topics include but are not limited to:

- Context-, aspect-, domain-oriented and generative programming
- Macro-, reflective-, meta- and/or rule-based development approaches
- Language design and implementation
- Language integration, inter-operation and deployment
- Development methodologies, support and environments
- Educational approaches and perspectives
- Experience reports and case studies

We invite submissions in the following forms:

Papers: Technical papers of up to 8 pages that describe original results or explain known ideas in new and elegant ways.

Demonstrations: Abstracts of up to 2 pages for demonstrations of tools, libraries, and applications.

Tutorials: Abstracts of up to 4 pages for in-depth presentations about topics of special interest for at least 90 minutes and up to 180 minutes.

The symposium will also provide slots for lightning talks, to be registered on-site every day.

All submissions should be formatted following the ACM SIGS guidelines and include ACM Computing Classification System 2012 concepts and terms. Submissions should be uploaded to Easy Chair, at the following address: https://www.easychair.org/conferences/?conf=els2019

Note: to help us with the review process please indicate the type of submission by entering either "paper", "demo", or "tutorial" in the Keywords field.

Important dates:
- 08 Feb 2019 Submission deadline (*** extended! ***)
- 01 Mar 2019 Notification of acceptance
- 18 Mar 2019 Final papers due
- 01-02 Apr 2019 Symposium

Programme chair: Nicolas Neuss, FAU Erlangen-Nürnberg, Germany

Programme committee:
Marco Antoniotti, Universita Milano Bicocca, Italy
Marc Battyani, FractalConcept, France
Pascal Costanza, IMEC, ExaScience Life Lab, Leuven, Belgium
Leonie Dreschler-Fischer, University of Hamburg, Germany
R. Matthew Emerson, thoughtstuff LLC, USA
Marco Heisig, FAU, Erlangen-Nuremberg, Germany
Charlotte Herzeel, IMEC, ExaScience Life Lab, Leuven, Belgium
Pierre R. Mai, PMSF IT Consulting, Germany
Breanndán Ó Nualláin, University of Amsterdam, Netherlands
François-René Rideau, Google, USA
Alberto Riva, University of Florida, USA
Alessio Stalla, ManyDesigns Srl, Italy
Patrick Krusenotto, Deutsche Welle, Germany
Philipp Marek, Austria
Sacha Chua, Living an Awesome Life, Canada

Search Keywords: #els2019, ELS 2019, ELS '19, European Lisp Symposium 2019, European Lisp Symposium '19, 12th ELS, 12th European Lisp Symposium, European Lisp Conference 2019, European Lisp Conference '19

#### Zach Beane — Want to write Common Lisp for RavenPack? | R. Matthew Emerson

· 160 days ago

#### Paul Khuong — Preemption Is GC for Memory Reordering

· 166 days ago

I previously noted how preemption makes lock-free programming harder in userspace than in the kernel. I now believe that preemption ought to be treated as a sunk cost, like garbage collection: we're already paying for it, so we might as well use it.

Interrupt processing (returning from an interrupt handler, actually) is fully serialising on x86, and on other platforms, no doubt: any userspace instruction either fully executes before the interrupt, or is (re-)executed from scratch some time after the return back to userspace. That's something we can abuse to guarantee ordering between memory accesses, without explicit barriers. This abuse of interrupts is complementary to Bounded TSO.
Bounded TSO measures the hardware limit on the number of store instructions that may concurrently be in-flight (and combines that with the knowledge that instructions are retired in order) to guarantee liveness without explicit barriers, with no overhead, and usually marginal latency. However, without worst-case execution time information, it's hard to map instruction counts to real time. Tracking interrupts lets us determine when enough real time has elapsed that earlier writes have definitely retired, albeit after a more conservative delay than Bounded TSO's typical case.

I reached this position after working on two lock-free synchronisation primitives—event counts, and asymmetric flag flips as used in hazard pointers and epoch reclamation—that are similar in that a slow path waits for a sign of life from a fast path, but differ in the way they handle "stuck" fast paths. I'll cover the event count and flag flip implementations that I came to on Linux/x86[-64], which both rely on interrupts for ordering. Hopefully that will convince you too that preemption is a useful source of pre-paid barriers for lock-free code in userspace.

I'm writing this for readers who are already familiar with lock-free programming, safe memory reclamation techniques in particular, and have some experience reasoning with formal memory models. For more references, Samy's overview in the ACM Queue is a good resource. I already committed the code for event counts in Concurrency Kit, and for interrupt-based reverse barriers in my barrierd project.

# Event counts with x86-TSO and futexes

An event count is essentially a version counter that lets threads wait until the current version differs from an arbitrary prior version. A trivial "wait" implementation could spin on the version counter. However, the value of event counts is that they let lock-free code integrate with OS-level blocking: waiters can grab the event count's current version v0, do what they want with the versioned data, and wait for new data by sleeping rather than burning cycles until the event count's version differs from v0. The event count is a common synchronisation primitive that is often reinvented and goes by many names (e.g., blockpoints); what matters is that writers can update the version counter, and waiters can read the version, run arbitrary code, then efficiently wait while the version counter is still equal to that previous version.

The explicit version counter solves the lost wake-up issue associated with misused condition variables, as in the pseudocode below.

```
bad condition waiter:
    while True:
        atomically read data
        if need to wait:
            WaitOnConditionVariable(cv)
        else:
            break
```

In order to work correctly, condition variables require waiters to acquire a mutex that protects both data and the condition variable, before checking that the wait condition still holds and then waiting on the condition variable.

```
good condition waiter:
    while True:
        with(mutex):
            read data
            if need to wait:
                WaitOnConditionVariable(cv, mutex)
            else:
                break
```

Waiters must prevent writers from making changes to the data, otherwise the data change (and associated condition variable wake-up) could occur between checking the wait condition, and starting to wait on the condition variable. The waiter would then have missed a wake-up and could end up sleeping forever, waiting for something that has already happened.
```
good condition waker:
    with(mutex):
        update data
    SignalConditionVariable(cv)
```

The six diagrams below show the possible interleavings between the signaler (writer) making changes to the data and waking waiters, and a waiter observing the data and entering the queue to wait for changes. The two left-most diagrams don't interleave anything; these are the only scenarios allowed by correct locking. The remaining four actually interleave the waiter and signaler, and show that, while three are accidentally correct (lucky), there is one case, WSSW, where the waiter misses its wake-up.

If any waiter can prevent writers from making progress, we don't have a lock-free protocol. Event counts let waiters detect when they would have been woken up (the event count's version counter has changed), and thus patch up this window where waiters can miss wake-ups for data changes they have yet to observe. Crucially, waiters detect lost wake-ups, rather than preventing them by locking writers out. Event counts thus preserve lock-freedom (and even wait-freedom!).

We could, for example, use an event count in a lock-free ring buffer: rather than making consumers spin on the write pointer, the write pointer could be encoded in an event count, and consumers would then efficiently block on that, without burning CPU cycles to wait for new messages.

The challenging part about implementing event counts isn't making sure to wake up sleepers, but to only do so when there are sleepers to wake. For some use cases, we don't need to do any active wake-up, because exponential backoff is good enough: if version updates signal the arrival of a response in a request/response communication pattern, exponential backoff, e.g., with a 1.1x backoff factor, could bound the increase in response latency caused by the blind sleep during backoff, e.g., to 10%.

Unfortunately, that's not always applicable. In general, we can't assume that signals correspond to responses for prior requests, and we must support the case where progress is usually fast enough that waiters only spin for a short while before grabbing more work. The latter expectation means we can't "just" unconditionally execute a syscall to wake up sleepers whenever we increment the version counter: that would be too slow. This problem isn't new, and has a solution similar to the one deployed in adaptive spin locks.

The solution pattern for adaptive locks relies on tight integration with an OS primitive, e.g., futexes. The control word, the machine word on which waiters spin, encodes its usual data (in our case, a version counter), as well as a new flag to denote that there are sleepers waiting to be woken up with an OS syscall. Every write to the control word uses atomic read-modify-write instructions, and before sleeping, waiters ensure the "sleepers are present" flag is set, then make a syscall to sleep only if the control word is still what they expect, with the sleepers flag set.

OpenBSD's compatibility shim for Linux's futexes is about as simple an implementation of the futex calls as it gets. The OS code for futex wake and wait is identical to what userspace would do with mutexes and condition variables (waitqueues). Waiters lock out wakers for the futex word or a coarser superset, check that the futex word's value is as expected, and enter the futex's waitqueue. Wakers acquire the futex word for writes, and wake up the waitqueue. The difference is that all of this happens in the kernel, which, unlike userspace, can force the scheduler to be helpful.
Futex code can run in the kernel because, unlike arbitrary mutex/condition variable pairs, the protected data is always a single machine integer, and the wait condition an equality test. This setup is simple enough to fully implement in the kernel, yet general enough to be useful.

OS-assisted conditional blocking is straightforward enough to adapt to event counts. The control word is the event count's version counter, with one bit stolen for the "sleepers are present" flag (sleepers flag). Incrementing the version counter can use a regular atomic increment; we only need to make sure we can tell whether the sleepers flag might have been set before the increment. If the sleepers flag was set, we clear it (with an atomic bit reset), and wake up any OS thread blocked on the control word.

```
increment event count:
    old <- fetch_and_add(event_count.counter, 2)  # flag is in the low bit
    if (old & 1):
        atomic_and(event_count.counter, -2)
        signal waiters on event_count.counter
```

Waiters can spin for a while, waiting for the version counter to change. At some point, a waiter determines that it's time to stop wasting CPU time. The waiter then sets the sleepers flag with a compare-and-swap: the CAS (compare-and-swap) can only fail because the counter's value has changed or because the flag is already set. In the former failure case, it's finally time to stop waiting. In the latter failure case, or if the CAS succeeded, the flag is now set. The waiter can then make a syscall to block on the control word, but only if the control word still has the sleepers flag set and contains the same expected (old) version counter.

```
wait until event count differs from prev:
    repeat k times:
        if (event_count.counter / 2) != prev:  # flag is in low bit.
            return
    compare_and_swap(event_count.counter, prev * 2, prev * 2 + 1)
    if cas_failed and cas_old_value != (prev * 2 + 1):
        return
    repeat k times:
        if (event_count.counter / 2) != prev:
            return
    sleep_if(event_count.counter == prev * 2 + 1)
```

This scheme works, and offers decent performance. In fact, it's good enough for Facebook's Folly. I certainly don't see how we can improve on that if there are concurrent writers (incrementing threads). However, if we go back to the ring buffer example, there is often only one writer per ring. Enqueueing an item in a single-producer ring buffer incurs no atomic, only a release store: the write pointer increment only has to be visible after the data write, which is always the case under the TSO memory model (including x86). Replacing the write pointer in a single-producer ring buffer with an event count where each increment incurs an atomic operation is far from a no-brainer. Can we do better, when there is only one incrementer?

On x86 (or any of the zero other architectures with non-atomic read-modify-write instructions and TSO), we can... but we must accept some weirdness.

The operation that must really be fast is incrementing the event counter, especially when the sleepers flag is not set. Setting the sleepers flag, on the other hand, may be slower and use atomic instructions, since it only happens when the executing thread is waiting for fresh data. I suggest that we perform the former, the increment on the fast path, with a non-atomic read-modify-write instruction, either inc mem or xadd mem, reg. If the sleepers flag is in the sign bit, we can detect it (modulo a false positive on wrap-around) in the condition codes computed by inc; otherwise, we must use xadd (fetch-and-add) and look at the flag bit in the fetched value.
The usual ordering-based arguments are no help in this kind of asymmetric synchronisation pattern. Instead, we must go directly to the x86-TSO memory model. All atomic (LOCK prefixed) instructions conceptually flush the executing core's store buffer, grab an exclusive lock on memory, and perform the read-modify-write operation with that lock held. Thus, manipulating the sleepers flag can't lose updates that are already visible in memory, or on their way from the store buffer. The RMW increment will also always see the latest version update (either in global memory, or in the only incrementer's store buffer), so won't lose version updates either. Finally, scheduling and thread migration must always guarantee that the incrementer thread sees its own writes, so that won't lose version updates.

```
increment event count without atomics in the common case:
    old <- non_atomic_fetch_and_add(event_count.counter, 2)
    if (old & 1):
        atomic_and(event_count.counter, -2)
        signal waiters on event_count.counter
```

The only thing that might be silently overwritten is the sleepers flag: a waiter might set that flag in memory just after the increment's load from memory, or while the increment reads a value with the flag unset from the local store buffer. The question is then how long waiters must spin before either observing an increment, or knowing that the flag flip will be observed by the next increment. That question can't be answered with the memory model, and worst-case execution time bounds are a joke on contemporary x86.

I found an answer by remembering that IRET, the instruction used to return from interrupt handlers, is a full barrier.1 We also know that interrupts happen at frequent and regular intervals, if only for the preemption timer (every 4-10ms on stock Linux/x86oid).

Regardless of the bound on store visibility, a waiter can flip the sleepers-are-present flag, spin on the control word for a while, and then start sleeping for short amounts of time (e.g., a millisecond or two at first, then 10 ms, etc.): the spin time is long enough in the vast majority of cases, but could still, very rarely, be too short. At some point, we'd like to know for sure that, since we have yet to observe a silent overwrite of the sleepers flag or any activity on the counter, the flag will always be observed and it is now safe to sleep forever. Again, I don't think x86 offers any strict bound on this sort of thing. However, one second seems reasonable. Even if a core could stall for that long, interrupts fire on every core several times a second, and returning from interrupt handlers acts as a full barrier. No write can remain in the store buffer across interrupts, interrupts that occur at least once per second. It seems safe to assume that, once no activity has been observed on the event count for one second, the sleepers flag will be visible to the next increment.

That assumption is only safe if interrupts do fire at regular intervals. Some latency sensitive systems dedicate cores to specific userspace threads, and move all interrupt processing and preemption away from those cores. A correctly isolated core running Linux in tickless mode, with a single runnable process, might not process interrupts frequently enough. However, this kind of configuration does not happen by accident. I expect that even a half-second stall in such a system would be treated as a system error, and hopefully trigger a watchdog.
When we can't count on interrupts to get us barriers for free, we can instead rely on practical performance requirements to enforce a hard bound on execution time. Either way, waiters set the sleepers flag, but can't rely on it being observed until, very conservatively, one second later. Until that time has passed, waiters spin on the control word, then block for short, but growing, amounts of time. Finally, if the control word (event count version and sleepers flag) has not changed in one second, we assume the incrementer has no write in flight, and will observe the sleepers flag; it is safe to block on the control word forever.

```
wait until event count differs from prev:
    repeat k times:
        if (event_count.counter / 2) != prev:
            return
    compare_and_swap(event_count.counter, 2 * prev, 2 * prev + 1)
    if cas_failed and cas_old_value != 2 * prev + 1:
        return
    repeat k times:
        if event_count.counter != 2 * prev + 1:
            return
    repeat for 1 second:
        sleep_if_until(event_count.counter == 2 * prev + 1, $exponential_backoff)
    if event_count.counter != 2 * prev + 1:
        return
    sleep_if(event_count.counter == prev * 2 + 1)
```
That’s the solution I implemented in this pull request for SPMC and MPMC event counts in concurrency kit. The MP (multiple producer) implementation is the regular adaptive logic, and matches Folly’s strategy. It needs about 30 cycles for an uncontended increment with no waiter, and waking up sleepers adds another 700 cycles on my E5-46xx (Linux 4.16). The single producer implementation is identical for the slow path, but only takes ~8 cycles per increment with no waiter, and, eschewing atomic instruction, does not flush the pipeline (i.e., the out-of-order execution engine is free to maximise throughput). The additional overhead for an increment without waiter, compared to a regular ring buffer pointer update, is 3-4 cycles for a single predictable conditional branch or fused test and branch, and the RMW’s load instead of a regular add/store. That’s closer to zero overhead, which makes it much easier for coders to offer OS-assisted blocking in their lock-free algorithms, without agonising over the penalty when no one needs to block.
# Asymmetric flag flip with interrupts on Linux
Hazard pointers and epoch reclamation are two different memory reclamation techniques in which the fundamental complexity stems from nearly identical synchronisation requirements: rarely, a cold code path (which is allowed to be very slow) writes to memory, and must know when another, much hotter, code path is guaranteed to observe the slow path's last write.
For hazard pointers, the cold code path waits until, having overwritten an object’s last persistent reference in memory, it is safe to destroy the pointee. The hot path is the reader:
1. read pointer value *(T **)x.
2. write pointer value to hazard pointer table
3. check that pointer value *(T **)x has not changed
Similarly, for epoch reclamation, a read-side section will grab the current epoch value, mark itself as reading in that epoch, then confirm that the epoch hasn’t become stale.
1. $epoch <- current epoch
2. publish self as entering a read-side section under $epoch
3. check that $epoch is still current, otherwise retry

Under a sequentially consistent (SC) memory model, the two sequences are valid with regular (atomic) loads and stores. The slow path can always make its write, then scan every other thread's single-writer data to see if any thread has published something that proves it executed step 2 before the slow path's store (i.e., by publishing the old pointer or epoch value). The diagrams below show all possible interleavings. In all cases, once there is no evidence that a thread has failed to observe the slow path's new write, we can correctly assume that all threads will observe the write.

I simplified the diagrams by not interleaving the first read in step 1: its role is to provide a guess for the value that will be re-read in step 3, so, at least with respect to correctness, that initial read might as well be generating random values. I also kept the second "scan" step in the slow path abstract. In practice, it's a non-snapshot read of all the epoch or hazard pointer tables for threads that execute the fast path: the slow path can assume an epoch or pointer will not be resurrected once the epoch or pointer is absent from the scan.

No one implements SC in hardware. X86 and SPARC offer the strongest practical memory model, Total Store Ordering, and that's still not enough to correctly execute the read-side critical sections above without special annotations. Under TSO, reads (e.g., step 3) are allowed to execute before writes (e.g., step 2). X86-TSO models that as a buffer in which stores may be delayed, and that's what the scenarios below show, with steps 2 and 3 of the fast path reversed (the slow path can always be instrumented to recover sequential order, it's meant to be slow).

The TSO interleavings only differ from the SC ones when the fast path's steps 2 and 3 are separated by something on the slow path's side: when the two steps are adjacent, their order relative to the slow path's steps is unaffected by TSO's delayed stores. TSO is so strong that we only have to fix one case, FSSF, where the slow path executes in the middle of the fast path, with the reversal of store and load order allowed by TSO.

Simple implementations plug this hole with a store-load barrier between the second and third steps, or implement the store with an atomic read-modify-write instruction that doubles as a barrier. Both modifications are safe and recover SC semantics, but incur a non-negligible overhead (the barrier forces the out of order execution engine to flush before accepting more work) which is only necessary a minority of the time.

The pattern here is similar to the event count, where the slow path signals the fast path that the latter should do something different. However, where the slow path for event counts wants to wait forever if the fast path never makes progress, hazard pointer and epoch reclamation must detect that case and ignore sleeping threads (that are not in the middle of a read-side SMR critical section).

In this kind of asymmetric synchronisation pattern, we wish to move as much of the overhead to the slow (cold) path. Linux 4.3 gained the membarrier syscall for exactly this use case. The slow path can execute its write(s) before making a membarrier syscall. Once the syscall returns, any fast path write that has yet to be visible (hasn't retired yet), along with every subsequent instruction in program order, started in a state where the slow path's writes were visible.
As the next diagram shows, this global barrier lets us rule out the one anomalous execution possible under TSO, without adding any special barrier to the fast path.

The problem with membarrier is that it comes in two flavours: slow, or not scalable. The initial, unexpedited, version waits for kernel RCU to run its callback, which, on my machine, takes anywhere between 25 and 50 milliseconds. The reason it's so slow is that the conditions for an RCU grace period to elapse are more demanding than a global barrier, and may even require multiple such barriers. For example, if we used the same scheme to nest epoch reclamation ten deep, the outermost reclaimer would be 1024 times slower than the innermost one.

In reaction to this slowness, potential users of membarrier went back to triggering IPIs, e.g., by mprotecting a dummy page. mprotect isn't guaranteed to act as a barrier, and does not do so on AArch64, so Linux 4.16 added an "expedited" mode to membarrier. In that expedited mode, each membarrier syscall sends an IPI to every other core... when I look at machines with hundreds of cores, $$n - 1$$ IPIs per core, a couple of times per second on each of $$n$$ cores, starts to sound like a bad idea.

Let's go back to the observation we made for event count: any interrupt acts as a barrier for us, in that any instruction that retires after the interrupt must observe writes made before the interrupt. Once the hazard pointer slow path has overwritten a pointer, or the epoch slow path advanced the current epoch, we can simply look at the current time, and wait until an interrupt has been handled at a later time on all cores. The slow path can then scan all the fast path state for evidence that they are still using the overwritten pointer or the previous epoch: any fast path that has not published that fact before the interrupt will eventually execute the second and third steps after the interrupt, and that last step will notice the slow path's update.

There's a lot of information in /proc that lets us conservatively determine when a new interrupt has been handled on every core. However, it's either too granular (/proc/stat) or extremely slow to generate (/proc/schedstat). More importantly, even with ftrace, we can't easily ask to be woken up when something interesting happens, and are forced to poll files for updates (never mind the weirdly hard to productionalise kernel interface).

What we need is a way to read, for each core, the last time it was definitely processing an interrupt. Ideally, we could also block and let the OS wake up our waiter on changes to the oldest "last interrupt" timestamp, across all cores. On x86, that's enough to get us the asymmetric barriers we need for hazard pointers and epoch reclamation, even if only IRET is serialising, and not interrupt handler entry. Once a core's update to its "last interrupt" timestamp is visible, any write prior to the update, and thus any write prior to the interrupt is also globally visible: we can only observe the timestamp update from a different core than the updater, in which case TSO saves us, or after the handler has returned with a serialising IRET.

We can bundle all that logic in a short eBPF program.2 The program has a map of thread-local arrays (of 1 CLOCK_MONOTONIC timestamp each), a map of perf event queues (one per CPU), and an array of 1 "watermark" timestamp. Whenever the program runs, it gets the current time. That time will go in the thread-local array of interrupt timestamps.
Before storing a new value in that array, the program first reads the previous interrupt time: if that time is less than or equal to the watermark, we should wake up userspace by enqueueing an event in perf. The enqueueing is conditional because perf has more overhead than a thread-local array, and because we want to minimise spurious wake-ups. A high signal-to-noise ratio lets userspace set up the read end of the perf queue to wake up on every event and thus minimise update latency.

We now need a single global daemon to attach the eBPF program to an arbitrary set of software tracepoints triggered by interrupts (or PMU events that trigger interrupts), to hook the perf fds to epoll, and to re-read the map of interrupt timestamps whenever epoll detects a new perf event. That's what the rest of the code handles: setting up tracepoints, attaching the eBPF program, convincing perf to wake us up, and hooking it all up to epoll. On my fully loaded 24-core E5-46xx running Linux 4.18 with security patches, the daemon uses ~1-2% (much less on 4.16) of a core to read the map of timestamps every time it's woken up every ~4 milliseconds. perf shows the non-JITted eBPF program itself uses ~0.1-0.2% of every core.

Amusingly enough, while eBPF offers maps that are safe for concurrent access in eBPF programs, the same maps come with no guarantee when accessed from userspace, via the syscall interface. However, the implementation uses a hand-rolled long-by-long copy loop, and, on x86-64, our data all fit in longs. I'll hope that the kernel's compilation flags (e.g., -ffreestanding) suffice to prevent GCC from recognising memcpy or memmove, and that we thus get atomic store and loads on x86-64. Given the quality of eBPF documentation, I'll bet that this implementation accident is actually part of the API. Every BPF map is single writer (either per-CPU in the kernel, or single-threaded in userspace), so this should work.

Once the barrierd daemon is running, any program can mmap its data file to find out the last time we definitely know each core had interrupted userspace, without making any further syscall or incurring any IPI. We can also use regular synchronisation to let the daemon wake up threads waiting for interrupts as soon as the oldest interrupt timestamp is updated. Applications don't even need to call clock_gettime to get the current time: the daemon also works in terms of a virtual time that it updates in the mmaped data file.

The barrierd data file also includes an array of per-CPU structs with each core's timestamps (both from CLOCK_MONOTONIC and in virtual time). A client that knows it will only execute on a subset of CPUs, e.g., cores 2-6, can compute its own "last interrupt" timestamp by only looking at entries 2 to 6 in the array. The daemon even wakes up any futex waiter on the per-CPU values whenever they change. The convenience interface is pessimistic, and assumes that client code might run on every configured core. However, anyone can mmap the same file and implement tighter logic.

Again, there's a snag with tickless kernels. In the default configuration already, a fully idle core might not process timer interrupts. The barrierd daemon detects when a core is falling behind, and starts looking for changes to /proc/stat. This backup path is slower and coarser grained, but always works with idle cores. More generally, the daemon might be running on a system with dedicated cores. I thought about causing interrupts by re-affining RT threads, but that seems counterproductive.
Again, there's a snag with tickless kernels. Even in the default configuration, a fully idle core might not process timer interrupts. The barrierd daemon detects when a core is falling behind, and starts looking for changes to /proc/stat. This backup path is slower and coarser-grained, but always works with idle cores. More generally, the daemon might be running on a system with dedicated cores. I thought about causing interrupts by re-affining RT threads, but that seems counterproductive. Instead, I think the right approach is for users of barrierd to treat dedicated cores specially. Dedicated threads can't (shouldn't) be interrupted, so they can regularly increment a watchdog counter with a serialising instruction. Waiters will quickly observe a change in the counters for dedicated threads, and may use barrierd to wait for barriers on preemptively shared cores. Maybe dedicated threads should be able to hook into barrierd and check in from time to time. That would break the isolation between users of barrierd, but threads on dedicated cores are already in a privileged position. I quickly compared the barrier latency on an unloaded 4-way E5-46xx running Linux 4.16, with a sample size of 20000 observations per method (I had to remove one outlier at 300 ms). The synchronous methods, mprotect (which abuses mprotect to send IPIs by removing and restoring permissions on a dummy page) and explicit IPI via expedited membarrier, are much faster than the others (unexpedited membarrier with kernel RCU, or barrierd counting interrupts). We can zoom in on the IPI-based methods, and see that an expedited membarrier (IPI) is usually slightly faster than mprotect; IPI via expedited membarrier hits a worst case of 0.041 ms, versus 0.046 ms for mprotect. The performance of IPI-based barriers should be roughly independent of system load. However, we did observe a slowdown for expedited membarrier (between $$68.4-73.0\%$$ of the time, $$p < 10^{-12}$$ according to a binomial test[3]) on the same 4-way system, when all CPUs were running CPU-intensive code at low priority. In this second experiment, we have a sample size of one million observations for each method, and the worst case for IPI via expedited membarrier was 0.076 ms (0.041 ms on an unloaded system), compared to a more stable 0.047 ms for mprotect. Now for the non-IPI methods: they should be slower than methods that trigger synchronous IPIs, but hopefully have lower overhead and scale better, while offering usable latencies. On an unloaded system, the interrupts that drive barrierd are less frequent, sometimes outright absent, so unexpedited membarrier achieves faster response times. We can even observe barrierd's fallback logic, which scans /proc/stat for evidence of idle CPUs after 10 ms of inaction: that's the spike at 20 ms. The values for vtime show the additional slowdown we can expect if we wait on barrierd's virtual time, rather than directly reading CLOCK_MONOTONIC. Overall, the worst-case latencies for barrierd (53.7 ms) and membarrier (39.9 ms) aren't that different, but I should add another fallback mechanism based on membarrier to improve barrierd's performance on lightly loaded machines. When the same 4-way, 24-core system is under load, interrupts fire much more frequently and reliably, so barrierd shines, but everything has a longer tail, simply because of preemption of the benchmark process. Out of the one million observations we have for each of unexpedited membarrier, barrierd, and barrierd with virtual time on this loaded system, I eliminated 54 values over 100 ms (18 for membarrier, 29 for barrierd, and 7 for virtual time). The rest is shown below. barrierd is consistently much faster than membarrier, with a geometric mean speedup of 23.8x. In fact, not only can we expect barrierd to finish before an unexpedited membarrier $$99.99\%$$ of the time ($$p<10^{-12}$$ according to a binomial test), but we can even expect barrierd to be 10 times as fast $$98.3-98.5\%$$ of the time ($$p<10^{-12}$$).
The gap is so wide that even the opportunistic virtual-time approach is faster than membarrier (geometric mean of 5.6x), but this time with a mere three 9s (as fast as membarrier $$99.91-99.96\%$$ of the time, $$p<10\sp{-12}$$). With barrierd, we get implicit barriers with worse overhead than unexpedited membarrier (which is essentially free since it piggybacks on kernel RCU, another sunk cost), but 1/10th the latency (0-4 ms instead of 25-50 ms). In addition, interrupt tracking is per-CPU, not per-thread, so it only has to happen in a global single-threaded daemon; the rest of userspace can obtain the information it needs without causing additional system overhead. More importantly, threads don’t have to block if they use barrierd to wait for a system-wide barrier. That’s useful when, e.g., a thread pool worker is waiting for a reverse barrier before sleeping on a futex. When that worker blocks in membarrier for 25ms or 50ms, there’s a potential hiccup where a work unit could sit in the worker’s queue for that amount of time before it gets processed. With barrierd (or the event count described earlier), the worker can spin and wait for work units to show up until enough time has passed to sleep on the futex. While I believe that information about interrupt times should be made available without tracepoint hacks, I don’t know if a syscall like membarrier is really preferable to a shared daemon like barrierd. The one salient downside is that barrierd slows down when some CPUs are idle; that’s something we can fix by including a membarrier fallback, or by sacrificing power consumption and forcing kernel ticks, even for idle cores. # Preemption can be an asset When we write lock-free code in userspace, we always have preemption in mind. In fact, the primary reason for lock-free code in userspace is to ensure consistent latency despite potentially adversarial scheduling. We spend so much effort to make our algorithms work despite interrupts and scheduling that we can fail to see how interrupts can help us. Obviously, there’s a cost to making our code preemption-safe, but preemption isn’t an option. Much like garbage collection in managed language, preemption is a feature we can’t turn off. Unlike GC, it’s not obvious how to make use of preemption in lock-free code, but this post shows it’s not impossible. We can use preemption to get asymmetric barriers, nearly for free, with a daemon like barrierd. I see a duality between preemption-driven barriers and techniques like Bounded TSO: the former are relatively slow, but offer hard bounds, while the latter guarantee liveness, usually with negligible latency, but without any time bound. I used preemption to make single-writer event counts faster (comparable to a regular non-atomic counter), and to provide a lower-latency alternative to membarrier’s asymmetric barrier. In a similar vein, SPeCK uses time bounds to ensure scalability, at the expense of a bit of latency, by enforcing periodic TLB reloads instead of relying on synchronous shootdowns. What else can we do with interrupts, timer or otherwise? Thank you Samy, Gabe, and Hanes for discussions on an earlier draft. Thank you Ruchir for improving this final version. # P.S. event count without non-atomic RMW? The single-producer event count specialisation relies on non-atomic read-modify-write instructions, which are hard to find outside x86. I think the flag flip pattern in epoch and hazard pointer reclamation shows that’s not the only option. 
We need two control words, one for the version counter, and another for the sleepers flag. The version counter is only written by the incrementer, with regular non-atomic instructions, while the flag word is written to by multiple producers, always with atomic instructions. The challenge is that OS blocking primitives like futex only let us conditionalise the sleep on a single word. We could try to pack a pair of 16-bit shorts in a 32-bit int, but that doesn’t give us a lot of room to avoid wrap-around. Otherwise, we can guarantee that the sleepers flag is only cleared immediately before incrementing the version counter. That suffices to let sleepers only conditionalise on the version counter... but we still need to trigger a wake-up if the sleepers flag was flipped between the last clearing and the increment. On the increment side, the logic looks like must_wake = false if sleepers flag is set: must_wake = true clear sleepers flag increment version if must_wake or sleepers flag is set: wake up waiters and, on the waiter side, we find if version has changed return set sleepers flag sleep if version has not changed The separate “sleepers flag” word doubles the space usage, compared to the single flag bit in the x86 single-producer version. Composite OS uses that two-word solution in blockpoints, and the advantages seem to be simplicity and additional flexibility in data layout. I don’t know that we can implement this scheme more efficiently in the single producer case, under other memory models than TSO. If this two-word solution is only useful for non-x86 TSO, that’s essentially SPARC, and I’m not sure that platform still warrants the maintenance burden. But, we’ll see, maybe we can make the above work on AArch64 or POWER. 1. I actually prefer another, more intuitive, explanation that isn’t backed by official documentation.The store buffer in x86-TSO doesn’t actually exist in silicon: it represents the instructions waiting to be retired in the out-of-order execution engine. Precise interrupts seem to imply that even entering the interrupt handler flushes the OOE engine’s state, and thus acts as a full barrier that flushes the conceptual store buffer. 2. I used raw eBPF instead of the C frontend because that frontend relies on a ridiculous amount of runtime code that parses an ELF file when loading the eBPF snippet to know what eBPF maps to setup and where to backpatch their fd number. I also find there’s little advantage to the C frontend for the scale of eBPF programs (at most 4096 instructions, usually much fewer). I did use clang to generate a starting point, but it’s not that hard to tighten 30 instructions in ways that a compiler can’t without knowing what part of the program’s semantics is essential. The bpf syscall can also populate a string buffer with additional information when loading a program. That’s helpful to know that something was assembled wrong, or to understand why the verifier is rejecting your program. 3. I computed these extreme confidence intervals with my old code to test statistical SLOs. #### Zach Beane — Converter of maps from Reflex Arena to QuakeWorld · 168 days ago #### Quicklisp news — January 2019 Quicklisp dist update now available · 168 days ago New projects: • cl-markless — A parser implementation for Markless — Artistic • data-lens — Utilities for building data transormations from composable functions, modeled on lenses and transducers — MIT • iso-8601-date — Miscellaneous date routines based around ISO-8601 representation. 
— LLGPL • literate-lisp — a literate programming tool to write common lisp codes in org file. — MIT • magicl — Matrix Algebra proGrams In Common Lisp — BSD 3-Clause (See LICENSE.txt) • nodgui — LTK — LLGPL • petri — An implementation of Petri nets — MIT • phoe-toolbox — A personal utility library — BSD 2-clause • ql-checkout — ql-checkout is library intend to checkout quicklisp maintained library with vcs. — mit • qtools-commons — Qtools utilities and functions — Artistic License 2.0 • replic — A framework to build readline applications out of existing code. — MIT • slk-581 — Generate Australian Government SLK-581 codes. — LLGPL • sucle — Cube Demo Game — MIT • water — An ES6-compatible class definition for Parenscript — MIT • winhttp — FFI wrapper to WINHTTP — MIT Updated projects3d-matrices3d-vectorsaprilasd-generatorchirpchronicitycl-asynccl-batiscl-collidercl-dbicl-dbi-connection-poolcl-enumerationcl-formscl-hamcrestcl-hash-utilcl-lascl-libevent2cl-libuvcl-mixedcl-neovimcl-openglcl-patternscl-punchcl-satcl-sat.glucosecl-sat.minisatcl-syslogcl-unificationcladclazyclimacsclipcloser-mopcroatoandbusdeedsdefenumdefinitionsdufyeasy-bindeasy-routeseclectoresrapf2clflareflexi-streamsflowgendlglsl-toolkitharmonyhelambdaphu.dwim.debughumblerinquisitorlakelegitlichat-protocollisp-binarylisp-chatlog4cllqueryltkmcclimnew-opomer-countookoverlordpetalisppjlinkplumppostmodernprotestqtoolsquery-fsratifyread-numberrpcqsafety-paramssc-extensionsserapeumslimeslyspecialization-storespinneretstaplestatic-dispatchstumpwmsxqltootertrivial-clipboardtrivial-socketsutilities.print-itemsvernacularwebsocket-driverwild-package-inferred-systemxhtmlambda. Removed projects: cl-llvm, cl-skkserv The removed projects no longer build for me. To get this update, use (ql:update-dist "quicklisp"). Enjoy! #### Zach Beane — ASCII Art Animations in Lisp · 168 days ago #### TurtleWare — My everlasting Common Lisp TODO list · 171 days ago We have minds capable of dreaming up almost infinitely ambitious plans and only time to realize a pathetic fraction of them. If God exists this is a proof of his cruelty. This quote is a paraphrase of something I've read in the past. I couldn't find where it's from. If you do know where it comes from - please contact me! I've hinted a few times that I have "lengthy" list of things to do. New year is a good opportunity to publish a blog post about it. I'm going to skip some entries which seem to be too far fetched (some could have slipped in anyway) and some ideas I don't want to share yet. Please note, that none of these entries nor estimates are declarations nor project road maps - this is my personal to do list which may change at any time. Most notably I am aware that these estimates are too ambitious and it is unlikely that all will be met. ## ECL improvements In its early days ECL had both. They were removed in favor of native threads. I think that both are very valuable constructs which may function independently or even work together (i.e native thread have a pool of specialized green threads sharing data local to them). I want to add locatives too since I'm at adding new built-in classes. ETA: first quarter of 2019 (before 16.2.0 release). There might be better interfaces for the same goal, but there are already libraries which benefit from APIs defined in CLtL2 which didn't get through to the ANSI standard. They mostly revolve around environment access and better control over compiler workings (querying declarations, writing a code walker without gross hacks etc). 
ETA: first quarter of 2019 (before 16.2.0 release). ECL has two major performance bottlenecks. One is compilation time (that is actually GCC's fault), second is generic function dispatch. In a world where many libraries embrace the CLOS programming paradigm it is very important area of improvement. Professor Robert Strandh paved the way by inventing a method to greatly improve generic function dispatch speed. The team of Clasp developers implemented it, proving that the idea is a practical one for ECL (Clasp was forked from ECL and they still share big chunks of architecture - it is not rare that we share bug reports and fixes across our code bases). We want to embrace this dispatch strategy. ETA: second quarter of 2019 (after 16.2.0 release). I think about adding optional modules for SSL and file SQL database. Both libraries may be statically compiled what makes them good candidates which could work even for ECL builds without FASL loading support. ETA: third quarter of 2019. • Compiler modernization Admittedly I already had three tries at this task (and each ended with a failure - changes were too radical to propose in ECL). I believe that four makes a charm. Currently ECL has two passes (which are tightly coupled) - the first one for Common Lisp compilation to IR and the second one for transpiling to C/C++ code and compiling it with GCC. The idea is to decouple these passes and have: a frontend pass, numerous optimization passes (for sake of maintainability) and the code generation pass which could have numerous backends (C, C++, bytecodes, LLVM etc). ETA: third quarter of 2019. CDR has many valuable proposals I want to implement (some proposals are already implemented in ECL). Functions compiled-file-p and abi-version are a very useful addition from the implementation and build system point of view. Currently ECL will "eat" any FASL it can (if it meets certain criteria, most notably it will try to load FASL files compiled with incompatible ECL build). ABI validation should depend on a symbol table entries hash, cpu architecture and types used by the compiler. ETA: (this task is a sub-task of compiler modernization). • Replacing ecl_min with EuLisp Level 0 implementation ecl_min is an small lisp interpreter used to bootstrap the implementation (it is a binary written in C). Replacing this custom lisp with a lisp which has the standard draft would be a big step forward. I expect some blockers along the way - most notably EuLisp has one namespace for functions and variables. Overcoming that will be a step towards language agnostic runtime. ETA: fourth quarter of 2019. ## McCLIM improvements • Thread safety and refactored event processing loop standard-extended-input-stream has a quirk - event queue is mixed with the input buffer. That leads to inconsistent event processing between input streams and all other panes. According to my system and specification analysis this may be fixed. This task requires some refactor and careful documentation. Thread safety is about using CLIM streams from other threads to draw on a canvas being part of the CLIM frame from inside the external REPL. This ostensibly works, but it is not thread safe - output records may get corrupted during concurrent access. ETA: first quarter of 2019. • Better use of XRender extension We already use XRedner extensions for drawing fonts and rectangles with a solid background. We want to switch clx backend to use it to its fullest. Most notably to have semi-transparency and accelerated transformations. 
Some proof of concept code is stashed in my source tree. ETA: second quarter of 2019. • Extensive tests and documentation improvements I've mentioned it in in the last progress report. We want to spend the whole mid-release period on testing, bug fixes and documentation improvements. Most notably we want to write documentation for writing backends. This is a frequent request from users. ETA: third quarter of 2019 This task involves writing a console-based backend (which units are very big compared to a pixel and they are not squares). That will help me to identify and fix invalid assumptions in McCLIM codebase. The idea is to have untyped coordinates being density independent pixels which have approximately the same size on any type of a screen. A natural consequence of that will be writing examples of specialized sheets with different default units. ETA: fourth quarter of 2019 ## Other tasks • Finish writing a CLIM frontend to ansi-test (tested lisps are run as external processes). • Create a test suite for Common Lisp pathnames which goes beyond the standard. • Contribute UNIX domain sockets portability layer to usocket. • Explore the idea of making CLX understand (and compile) XCB xml protocol definitions. • Write a blog post about debugging and profiling ECL applications. • Resume a project with custom gadgets for McCLIM. • Do more research and write POC code for having animations in McCLIM. • Resume a project for writing Matroska build system with ASDF compatibility layer. • Use cl-test-grid to identify most common library dependencies in Quicklisp which doesn't support ECL and contribute such support. # Conclusions This list could be much longer, but even suggesting more entries as something scheduled for 2019 would be ridiculous - I'll stop here. A day has only 24h and I need to balance time between family, friends, duties, commercial work, free software, communities I'm part of etc. I find each and every task in it worthwhile so I will pursue them whenever I can. #### Zach Beane — Writing a natural language date and time parser · 175 days ago #### Chaitanya Gupta — Writing a natural language date and time parser · 175 days ago In the deftask blog I described how it lets users search for tasks easily by using natural language date queries. It accomplishes this by using a natural language date and time parser I wrote a long time ago called Chronicity. But how exactly does Chronicity work? In this post, we'll dig into its innards and get a sense of the steps involved in writing it. If you want to hack into Chronicity, or write your own NLP date parser, this might help. Note: credit for Chronicity's architecture goes to the Ruby library Chronic. It served both as an inspiration and as the implementation reference. Broadly, Chronicity follows these steps to parse date and time strings: ## Normalize text We normalize the text before tokenizing it by doing the following: 1. Lower case the string 2. Convert numeric words (like "one", "ten", "third", etc.) to the corresponding numbers 3. Replace all the common synonyms of a word or phrase so that tokenizing becomes simpler. All of this is accomplished by the PRE-NORMALIZE function. To convert numeric words to numbers the NUMERIZE function is used. One caveat: do not immediately normalize the term "second" - it can either mean the ordinal number or the unit of time. So we wait until after tokenization (see pre-process tokens) to resolve this ambiguity. 
CHRONICITY> (pre-normalize "tomorrow at seven") "next day at 7" CHRONICITY> (pre-normalize "20 days ago") "20 days past" ## Tokenize Next we assign a token to each word in the normalized text. (defclass token () ((word :initarg :word :reader token-word) (tags :initarg :tags :initform nil :accessor token-tags))) (defun create-token (word &rest tags) (make-instance 'token :word word :tags tags)) As you can see, besides the word, a token also contains a list of tags. Each tag indicates a possible way to interpret the given word or number. Take the phrase "20 days ago". The number 20 can be interpreted in many ways: • It might refer to the 20th day of the month • It might be the year 2020 • Or maybe just the number 20 (which is what is actually meant in the given phrase) • It could also refer to the time 8 PM in 24-hour format (20:00 hours) Remember, we are still in the tokenization phase so we don't know which interpretation is correct. So we will assign all four tags to the token for this number. Each tag is a subclass of the TAG class, which is defined as follows. (defclass tag () ((type :initarg :type :reader tag-type) (now :initarg :now :accessor tag-now :initform nil))) (defun create-tag (class type &key now) (make-instance class :type type :now now)) The slot TYPE is a misnomer - it actually indicates the designated value of the token for this tag. For example, the TYPE for the year 2020 above will be the integer 2020. For the time 8 PM it will be an object denoting the time. The slot NOW has the current timestamp. It is used by some tag classes like REPEATER for date-time computations (discussed later). The various subclasses of TAG are: • SEPARATOR - Things like slash "/", dash "-", "in", "at", "on", etc. • ORDINAL - Numbers like 1st, 2nd, 3rd, etc. • SCALAR - Simple numbers like 1, 5, 10, etc. It is further subclassed by SCALAR-DAY (1-31), SCALAR-MONTH (1-12) and SCALAR-YEAR. A token for any number will usually contain the SCALAR tag plus one or more of the subclassed tags as applicable. • POINTER - Indicates whether we are looking forwards ("hence", "after", "from") or backwards ("ago", "before"). These words are normalized to "future" and "past" before they are tagged. • GRABBER - The terms "this", "last" and "next" (as in this month or last month). • REPEATER - Most of the date and time terms are tagged using this class. This is described in more detail below. There are a number of subclasses of REPEATER to indicate the numerous date and time terms. For example: • Unit names like "year", "month", "week", "day", etc., use the subclasses REPEATER-YEAR, REPEATER-MONTH, REPEATER-WEEK, REPEATER-DAY. • REPEATER-MONTH-NAME is used to indicate month names like "jan" or "january". • REPEATER-DAY-NAME indicates day names like "monday". • REPEATER-TIME is used to indicate time strings like 20:00. • Parts of the day like AM, PM, morning, evening use the subclass REPEATER-DAY-PORTION. In addition, all the REPEATER subclasses need to implement a few methods that are needed for date-time computations. • R-NEXT - Given a repeater and a pointer i.e. :PAST or :FUTURE, returns a time span in the immediate past or future relative to the NOW slot. For example, assume the date in NOW is 31st December 2018. • (r-next repeater :past) for a REPEATER-MONTH will return a time span starting 1st November 2018 and ending at 30th November. • (r-next repeater :future) will return a span for all of January 2019. 
• Similarly, for a REPEATER-DAY this would have returned 30th December for :PAST and 1st January for the :FUTURE pointer. • R-THIS is similar to R-NEXT except it works in the current context. The width of the span also depends on whether direction of the pointer. • (r-this repeater :past) for a REPEATER-DAY will return a span from the start of day until now. • (r-this repeater :future) will return a span from now until the end of day. • (r-this repeater :none) will return the whole day today. • R-OFFSET - Given a span, a pointer and an amount, returns a new span offset from the given span. The offset is roughly the amount mulitplied by the width of the repeater. Now we can put the whole tokenization and tagging piece together: (defun tokenize (text) (mapcar #'create-token (cl-ppcre:split #?r"\s+" text))) (defun tokenize-and-tag (text) (let ((tokens (tokenize text))) (loop for type in (list 'repeater 'grabber 'pointer 'scalar 'ordinal 'separator) do (scan-tokens type tokens)) tokens)) As you can see, computing the tags for each token is accomplished by the SCAN-TOKENS. This is a generic function specialized on the class name of the tag. One of the methods implementing SCAN-TOKENS is shown below. (defmethod scan-tokens ((tag (eql 'grabber)) tokens) (let ((scan-map '(("last" :last) ("this" :this) ("next" :next)))) (dolist (token tokens tokens) (loop for (regex value) in scan-map when (cl-ppcre:scan regex (token-word token)) do (tag (create-tag 'grabber value) token))))) (defmethod tag (tag token) (push tag (token-tags token))) Going back to our original example, for the text "20 days ago", these are the tags set for each token (after normalization). Token Tags ----- ---- 20 [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME] days [REPEATER-DAY] past [POINTER] ## Pre-process tokens We are almost ready to run pattern matching to figure out the input date, but first, we need to resolve the ambiguity related to the term second that we faced during normalization. At that time, we did not convert it to the number 2 since it could refer to either the unit of time or the number. Now with tokenization done, we resolve this ambiguity with a simple hack: if the term second is followed by a repeater (i.e. month, day, year, january, etc.), we assume that it is the ordinal number 2nd and not the unit of time. See PRE-PROCESS-TOKENS for more details. ## Pattern matching The last piece of the puzzle is pattern matching. Armed with tokens and their corresponding tags, we define several date and time patterns that we know of and try to match them to their input tokens. First we name a few pattern classes - each pattern we define belongs to one of these classes. • DATE - patterns that match an absolute date and time e.g. "1st January", "January 1 at 2 PM", etc. • ANCHOR - patterns that typically involve a grabber e.g. "yesterday", "tuesday" "last week", etc. • ARROW - patterns like "2 days from now", "3 weeks ago", etc. • NARROW - patterns like "1st day this month", "3rd wednesday in 2007", etc. • TIME - simple time patterns like "2 PM", "14:30", etc. A pattern, at its simplest, is just a list of tag classes. A list of input tokens successfully matches a pattern if, for every token, at least one of its tags is an instance of the tag class mentioned at the corresponding position in the pattern. 
For example, the text "20 days ago" had these tags: Token Tags ----- ---- 20 [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME] days [REPEATER-DAY] past [POINTER] It will match any of these patterns: (scalar repeater pointer) (scalar repeater-day pointer) ((? scalar) repeater pointer) The last example shows a pattern with an optional tag - (? scalar). It will match tokens with or without the scalar e.g. both "20 days ago" and "week ago" will match. Our pattern matching engine also allows us to match an entire pattern class. For example, (repeater-month-name scalar-day (? separator-at) (? p time)) (? p time) here means that any pattern that belongs to the TIME pattern class can match. So all of "January 1 at 12:30", "January 1 at 2 PM" and "January 1 at 6 in the evening" will match without us needing to duplicate all the time patterns. Note: There's one limitation - a pattern class can only be specified at the end of a pattern in Chronicity. So a pattern like (repeater (p time) pointer) won't work. This will be fixed in the future. Each pattern has a handler function that decides how to convert the matching tokens to a date span. A pattern and its handler function are defined using the DEFINE-HANDLER macro. It assigns one or more patterns to a pattern class, and if either of these patterns match, the function body is run. Its general form is: (define-handler (pattern-class) (tokens-var) (pattern1 pattern2 ...) ... body ... ) An example handler is shown below. (define-handler (date) (tokens) ((repeater-month-name scalar-year)) (let* ((month-name (token-tag-type 'repeater-month-name (first tokens))) (month (month-index month-name)) (year (token-tag-type 'scalar-year (second tokens))) (start (make-date year month))) (make-span start (datetime-incr start :month)))) Most handler functions will use make use of the the repeater methods R-NEXT, R-THIS and R-OFFSET that we described above. Chronicity implements this pattern matching logic in the TOKENS-TO-SPAN function. All the patterns and their handler functions are defined inside handler-defs.lisp. Patterns defined earlier in the file get precedence over those defined later. If you add, remove or modify a handler, you should reload the whole file rather than just evaluating that handler's definition. ## Returning the result Finally, we put everything together. (defun parse (text &key (guess t)) (let ((tokens (tokenize-and-tag (pre-normalize text)))) (pre-process-tokens tokens) (values (guess-span (tokens-to-span tokens) guess) tokens))) By default PARSE will return a timestamp instead of a time span. This depends on the value passed to the :GUESS keyword - see the GUESS-SPAN function to see how it is interpreted. If you want to return a time span send NIL instead. The second value that this function returns is the list of tokens alongwith all its tags. This is useful for debugging Chronicity results in the REPL. 
CHRONICITY> (parse "20 days ago") @2018-12-12T12:01:53.758578+05:30 (#<TOKEN 20 [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME] {1007639243}> #<TOKEN days [REPEATER-DAY] {10076AF5D3}> #<TOKEN past [POINTER] {1007553443}>) CHRONICITY> (parse "20 days ago" :guess nil) #<SPAN 2018-12-12T00:00:00.000000+05:30..2018-12-13T00:00:00.000000+05:30> (#<TOKEN 20 [SCALAR-YEAR, SCALAR-DAY, SCALAR, REPEATER-TIME] {1001B78BC3}> #<TOKEN days [REPEATER-DAY] {1001B78C03}> #<TOKEN past [POINTER] {1001B78C43}>) The actual PARSE function has a few more bells and whistles than the one defined here: • :ENDIAN-PREFERENCE to parse ambiguous dates as dd/mm (:LITTLE) or mm/dd (:MIDDLE) • :AMBIGUOUS-TIME-RANGE to specify whether a time like 5:00 is in the morning (AM) or evening (PM). • :CONTEXT can be :PAST, :FUTURE or :NONE. This determines the time span returned for strings like "this day". See the definition of R-THIS` above. #### McCLIM — "Yule" progress report · 176 days ago Dear Community, Winter solstice is a special time of year when we gather together with people dear to us. In pagan tradition this event is called "Yule". I thought it is a good time to write a progress report and a summary of changes made since the last release. I apologise for infrequent updates. On the other hand we are busy with improving McCLIM and many important (and exciting!) improvements have been made in the meantime. I'd love to declare it a new release with a code name "Yule" but we still have some regressions to fix and pending improvements to apply. We hope though that the release 0.9.8 will happen soon. We are very excited that we have managed to resurrect interest in McCLIM among Common Lisp developers and it is thanks to the help of you all - every contributor to the project. Some of you provide important improvement suggestions, issue reports and regression tests on our tracker. Others develop applications with McCLIM and that helps us to identify parts which need improving. By creating pull requests you go out of your comfort zone to help improve the project and by doing a peer review you prevent serious regressions and make code better than it would be without that. Perpetual testing and frequent discussions on #clim help to keep the project in shape. Financial supporters allow us to create bounties and attract by that new contributors. ## Finances and bounties Speaking of finances: our fundraiser receives a steady stream of funds of approximately$300/month. We are grateful for that. Right now all money is destined for bounties. A few times bounty was not claimed by bounty hunters who solved the issue - in that case I've collected them and re-added to funds after talking with said people. Currently our funds available for future activities are $3,785 and active bounties on issues waiting to be solved are$2,850 (split between 7 issues). We've already paid $2,450 total for solved issues. Active bounties: • [$600] drawing-tests: improve and refactor (new!).
• [$600] streams: make SEOS access thread-safe (new!). • [$500] Windows Backend.
• [$450] clx: input: english layout. • [$300] listener: repl blocks when long process runs.
• [$150] UPDATING-OUTPUT not usable with custom gadgets. • [$150] When flowing text in a FORMATTING-TABLE, the pane size is used instead of the column size.
Claimed bounties (since last time):
• [$100] No applicable method for REGION-CONTAINS-POSITION-P -- fixed by Cyrus Harmon and re-added to the pool. • [$200] Text rotation is not supported -- fixed by Daniel Kochmański.
• [$400] Fix Beagle backend -- cancelled and re-added to the pool. • [$100] with-room-for-graphics does not obey height for graphics not starting at 0,0 -- fixed by Nisar Ahmad.
• [$100] Enter doesn't cause input acceptance in the Listener if you hit Alt first -- fixed by Charles Zhang. • [$100] Listener commands with "list" arguments, such as Run, cannot be executed from command history -- fixed by Nisar Ahmad.
• [\$200] add PDF file generation (PDF backend) -- fixed by Cyrus Harmon; This bounty will be re-added to the pool when the other backer Ingo Marks accepts the solution.
## Improvements
I'm sure you've been waiting for this part the most. The current set of mid-release improvements (and regressions) is vast. I'll list only the changes I find most worth highlighting, but there are more, and most of them are very useful! The whole list of commits and contributors may be found in the git log. There were also many changes, not listed here, related to the CLX library.
• Listener UX improvements by Nisar Ahmad.
• Mirrored sheet implementation refactor by Daniel Kochmański.
• New demo applications and improvements to existing ones,
• Font rendering refactor and new features:
This part is a joint effort of many people. In effect we now have two quite performant and good-looking font renderers. Elias Mårtenson resurrected the FFI Freetype alternative text renderer, which uses Harfbuzz and fontconfig from the foreign (non-Lisp) world. Daniel Kochmański, inspired by Freetype's features, implemented kerning, tracking, multi-line rendering and arbitrary text transformations for the native TTF renderer. That resulted in a major refactor of the font rendering abstraction. Features still missing from the TTF renderer are font shaping and bidirectional text.
• Experiments with xrender scrolling and transformations by Elias Mårtenson,
• Image and pattern rendering refactor and improvements by Daniel Kochmański.
Both the xrender experiments and the pattern rendering work were a direct inspiration for the work-in-progress migration to xrender as the default rendering mechanism.
Patterns now have much better support coverage than they used to. We may treat a pattern as any other design. Moreover, it is possible to transform patterns in arbitrary ways (and to use other patterns as inks inside parent ones). This has been done at the expense of a performance regression, which we plan to address before the release.
• CLX-fb refactor by Daniel Kochmański:
Most of the work was related to simplifying macrology and class hierarchy. This caused small performance regression in this backend (however it may be fixed with the current abstraction present there).
• Performance and clean code fixes by Jan Moringen:
Jan wrote a very useful tool called clim-flamegraph (it currently works only on SBCL). It helped us recognize many performance bottlenecks that would be hard to spot otherwise. His contributions to the code base were small (LOC-wise) and hard to pinpoint to a specific feature, but very important from the maintenance, correctness and performance point of view.
• Text-size example for responsiveness and UX by Jan Moringen,
• Various gadget improvements by Jan Moringen,
clim-extensions:box-adjuster-gadget deserves a separate mention due to its usefulness and relatively small mind share. It allows resizing adjacent panes by dragging a boundary between them.
• New example for output recording with custom record types by Robert Strandh,
• PostScript and PDF renderer improvements by Cyrus Harmon,
• Scrigraph and other examples improvements by Cyrus Harmon,
• Multiple regression tests added to drawing-tests by Cyrus Harmon,
• Ellipse drawing testing and fixes by Cyrus Harmon,
• Better contrasting inks support by Jan Moringen,
• Output recording and graphics-state cleanup by Daniel Kochmański,
• WITH-OUTPUT-TO-RASTER-IMAGE-FILE macro fixed by Jan Moringen,
• Regions may be printed readably (with #. hack) by Cyrus Harmon,
• event-queue processing rewrite by Nisar Ahmad and Daniel Kochmański:
This solves a long standing regression - McCLIM didn't run correctly on implementations without support for threading. This rewrite cleaned up a few input processing abstractions and provided thread-safe code. SCHEDULE-EVENT (which was bitrotten) works as expected now.
• Extensive testing and peer reviews by Nisar Ahmad:
This role is easy to overlook when one looks at commits, but it is hard to overemphasize - that's how important testing is. The code would be much worse if Nisar hadn't put as much effort into it as he did.
## Plans
Before the next release we want to refactor input processing in McCLIM and make all stream operations thread-safe. Refactoring the input processing loop will allow better support for native McCLIM gadgets and streams (right now they do not work well together) and make the model much more intuitive for new developers. We hope to get rid of various kludges thanks to that as well. Thread-safe stream operations, on the other hand, are important if we want to access a CLIM application from a REPL running in a different process than the application frame (right now, drawing from another process may, for instance, cause output recording corruption). This is important for interactive development from Emacs. When both tasks are finished we are going to release the 0.9.8 version.
After that our efforts will focus on improving X11 backend support. Most notably, we want to increase use of the xrender extension of clx and address a long-standing issue with non-English layouts. When both tasks are accomplished (some other features may land as well, but these two will be my main focus) we will release the 0.9.9 version.
That will mark a special time in McCLIM development. The next release will be 1.0.0, which is a significant number. The idea is to devote this time explicitly to testing, cleanup and documentation, with a feature freeze (i.e. no new functionality will be added). What comes after that, nobody knows. Animations? New backends? Interactive documentation? If you have a specific vision of the direction McCLIM should move in, all you have to do is take action and implement the necessary changes :-).
## Merry Yule and Happy New Year 2019
This year was very fruitful for McCLIM development. We'd like to thank all contributors once more and we wish you all (and ourselves) that the next year will be at least as good as this one, a lot of joy, creativeness and Happy Hacking!
Sincerely yours,
McCLIM Development Team
#### Zach Beane — Personal Notes on Corman Lisp 3.1 Release
· 176 days ago
For older items, see the Planet Lisp Archives.
Last updated: 2019-06-19 21:15
|
2019-06-25 18:45:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4199354946613312, "perplexity": 1978.7241602167464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999876.81/warc/CC-MAIN-20190625172832-20190625194832-00057.warc.gz"}
|
https://tex.stackexchange.com/questions/320880/qtree-and-booklet-packages
|
# qtree and booklet packages
I am using the booklet package, and I would like to draw syntactic trees with the qtree package. However, it seems the two packages are in conflict: merely loading the qtree package cuts the whole page in half. Here is the code:
\documentclass[12pt]{article}
\usepackage[print]{booklet}
\setpdftargetpages
\usepackage{linguex}
\renewcommand{\firstrefdash}{}
\def\exr{\setcounter{ExNo}{0}\ex}
\usepackage{tipa}
\usepackage{qtree}
\usepackage{natbib}
\begin{document}
bla bla bla
ex.[(3)] \Tree [.DP [.D the ]]
\end{document}
Any idea of how to fix this?
Thanks!
• Could you post your conflicting code? – Ignasi Jul 25 '16 at 11:06
• That's what booklet does! By default it makes four-page signatures, so two pages are always produced, which should then be printed on both sides of a sheet. I see no difference when qtree is loaded and when it's not (in this case I just added some mock text instead of the tree). – egreg Jul 25 '16 at 14:36
• Thank you for replying. I might not have made myself clear. This is what I get when using qtree and booklet together: click – RobertP. Jul 25 '16 at 16:24
• @RobertP. please don't use external image hosting sites. If you want to show an image, you can always edit your question and include one. – user36296 Jul 25 '16 at 17:04
• Did you try to move the qtree package before booklet? – user36296 Jul 25 '16 at 17:08
|
2019-08-17 18:03:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5814622044563293, "perplexity": 2306.1563938182253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313436.2/warc/CC-MAIN-20190817164742-20190817190742-00402.warc.gz"}
|
https://www.physicsforums.com/threads/quick-help-needed-on-diagonalization.56782/
|
Quick Help needed on Diagonalization
1. Dec 14, 2004
matrix_204
Could someone tell me how to get the P(inverse), P^-1. For example I read all the examples in my book and it has like given the matrix for P and then it finds the matrix for P(inverse)Vo , how do i find P^-1. Plz help me quickly.
Ex. $$P = \left(\begin{array}{cc}2&-1\\3&1\end{array}\right)$$ and $$V_0 = \left(\begin{array}{c}1\\1\end{array}\right)$$,
so $$P^{-1}V_0 = \frac{1}{5}\left(\begin{array}{c}2\\-1\end{array}\right)$$.
Last edited: Dec 14, 2004
2. Dec 14, 2004
cyby
For two by two matrices, it is easy.
For a 2x2 matrix
$$A = \left(\begin{array}{cc}a&b\\c&d\end{array}\right)$$
The inverse is just...
$$A^{-1} = \frac{1}{\|A\|} \left(\begin{array}{cc}d&-b\\-c&a\end{array}\right)$$
3. Dec 14, 2004
matrix_204
thank you very much, it saved me so much time, also, is there a formula for a 3x3 matrix too or no.
btw is ||A||= a^2 + b^2 - c^2 - d^2,
just wondering, i dunno if thats right but what would it be for a 2x2 matrix.
Last edited: Dec 14, 2004
4. Dec 14, 2004
cyby
5. Dec 14, 2004
matrix_204
ok got it, thanx
6. Dec 14, 2004
cyby
$$\|A\|$$ is the determinant of A. For a 2x2 matrix
$$A = \left(\begin{array}{cc}a&b\\c&d\end{array}\right)$$
$$\|A\| = ad - bc$$
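As a quick worked check of the formula with the matrix $$P$$ from the first post:
$$P = \left(\begin{array}{cc}2&-1\\3&1\end{array}\right), \qquad \|P\| = (2)(1) - (-1)(3) = 5, \qquad P^{-1} = \frac{1}{5}\left(\begin{array}{cc}1&1\\-3&2\end{array}\right),$$
so that
$$P^{-1}V_0 = \frac{1}{5}\left(\begin{array}{cc}1&1\\-3&2\end{array}\right)\left(\begin{array}{c}1\\1\end{array}\right) = \frac{1}{5}\left(\begin{array}{c}2\\-1\end{array}\right),$$
which matches the value quoted in the first post.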
7. Dec 14, 2004
matrix_204
I have one more question: because sometimes it's hard to figure out the eigenvector, I was wondering, is there an easier way to figure out what the eigenvectors are besides using the equation
http://mathworld.wolfram.com/eimg422.gif [Broken]
Last edited by a moderator: May 1, 2017
8. Dec 14, 2004
cyby
I don't recall so. But please, do yourself a favor and don't work them out by hand. It's just too much boring arithmetic...
9. Dec 14, 2004
shmoe
Just to clarify terminology, that equation you linked to gives the eigenvalues, which you then use to find the eigenvectors by looking at the nullspace of $$A-\lambda I$$, where $$\lambda$$ is an eigenvalue.
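To make that procedure concrete, here is a small worked example (the matrix is illustrative, not from this thread). For
$$A = \left(\begin{array}{cc}2&1\\1&2\end{array}\right), \qquad \|A - \lambda I\| = (2-\lambda)^2 - 1 = 0 \quad\Rightarrow\quad \lambda = 1 \text{ or } \lambda = 3,$$
and for $$\lambda = 3$$ the nullspace of $$A - 3I = \left(\begin{array}{cc}-1&1\\1&-1\end{array}\right)$$ is spanned by $$\left(\begin{array}{c}1\\1\end{array}\right)$$, which is therefore an eigenvector for the eigenvalue $$3$$.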
I do suggest you work these out by hand when first learning them. You're more likely to understand what an eigenvector is if you're swimming through the arithmetic trenches than if you're simply entering a matrix into a computer or calculator and having it spit out some answers for you. Of course if you feel you have fully mastered the concept, by all means use mechanical aid (and certainly don't shy from using it to check your work). Just my opinion.
Last edited by a moderator: May 1, 2017
|
2018-03-21 10:04:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6228832602500916, "perplexity": 1138.353587430048}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647600.49/warc/CC-MAIN-20180321082653-20180321102653-00016.warc.gz"}
|
http://www.seanmathmodelguy.com/?p=1024&owa_medium=feed&owa_sid=896400576
|
## Continuity – Limits and Continuity
Posted: 12th February 2013 by seanmathmodelguy in Lectures
#### Continuity
A function $$f(x)$$ is said to be continuous at a point $$a$$ in its domain if the following three properties hold.
• $$\displaystyle \lim_{x \to a} f(x)$$ exists. This takes three steps to show in itself.
• $$f(a)$$ has to exist,
• $$\displaystyle \lim_{x \to a} f(x) = f(a)$$.
Continuity connects the behaviour of a function in a neighbourhood of a point with the value at the point.
If the domain of the function is bounded say $$[a,b]$$ then each end point of the interval can only be approached in one way. The left end point is at $$x = a$$ so $$f(x)$$ is said to be continuous at the left end point ‘$$a$$’ if $$\displaystyle \lim_{x \to a^+} f(x) = f(a)$$. In a similar fashion, $$f(x)$$ is said to be continuous at the right end point ‘$$b$$’ if $$\displaystyle \lim_{x \to b^-} f(x) = f(b)$$.
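For instance (a standard illustration of endpoint continuity): $$f(x) = \sqrt{x}$$ on the domain $$[0,4]$$ is continuous at the left end point $$0$$ because $$\displaystyle \lim_{x \to 0^+} \sqrt{x} = 0 = f(0)$$, even though the two-sided limit at $$0$$ makes no sense on that domain.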
###### Types of Discontinuities
Since there are only a few ways that the limit of a function cannot exist at a point there are few ways that a function can fail to be continuous. These are classified into four types.
1. Jump Discontinuity (also known as a simple discontinuity)
2. Removable Discontinuity
3. Infinite Discontinuity
4. Oscillatory Discontinuity
An explicit example of each type of discontinuity follows next.
###### Examples
1. Jump Discontinuity:
Is $$f(x) = \begin{cases} \displaystyle x-1, & 1 \le x \le 2 \\ -1, & -2 \le x < 1 \end{cases}$$ continuous at $$x = 1$$? Where is $$f(x)$$ continuous?
To answer this we go back to the definition. By computing $$L^+ = 0$$ and $$L^- = -1$$ (for $$x = 1$$) we see that they are not equal and consequently, $$\displaystyle \lim_{x \to 1} f(x)$$ DNE. Therefore $$f(x)$$ is not continuous at $$x=1$$. As to where $$f(x)$$ is continuous, this is everywhere else in the domain $$(-2,1) \cup (1,2)$$. For the endpoints, in this case we say that $$f(x)$$ is continuous from the right at $$x=-2$$ and continuous from the left at $$x=2$$.
A graph of the function appears to the right.
2. Removable Discontinuity:
Consider $$g(x) = \begin{cases} \displaystyle x, & x \ne 2 \\ 5, & x = 2. \end{cases}$$ Is $$g(x)$$ continuous at $$x = 2$$? No because even though $$\displaystyle \lim_{x\to 2}g(x) = 2$$ exists, $$\displaystyle \lim_{x\to 2}g(x) \ne g(2) = 5$$. Since this function can be made continuous by redefining it at $$x=2$$, we call this type of discontinuity removable.
To illustrate how a function can be fixed, notice that $$f_1(x) = \displaystyle\frac{\sin x}{x}$$ is not continuous for all $$x\in {\Bbb R}$$ since $$f_1(0)$$ DNE. However, $$f_2(x) = \begin{cases} \displaystyle \frac{\sin x}{x}, & x \ne 0 \\[3mm] 1, & x = 0 \end{cases}$$ is continuous for all $$x\in {\Bbb R}$$.
3. Infinite Discontinuity:
Consider the function $$y = 1/x$$ on any domain that includes $$x=0$$. Since the function becomes unbounded, continuity fails at $$x=0$$: the limit does not exist there. The domain of the function is very important, since the same function on a different domain ($$\displaystyle y = 1/x, 1 \le x \le 2$$) does not have an infinite discontinuity, because it does not become unbounded on the given domain.
4. Oscillatory Discontinuity:
This type of discontinuity occurs when a function oscillates too much, as in the case of $$y = \sin(1/x)$$. As $$x\to 0$$, $$f(x)$$ does not approach a single value.
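One way to make this precise (a standard argument) is to pick two sequences that tend to $$0$$ but along which $$\sin(1/x)$$ settles on different values:
$$x_n = \frac{1}{2n\pi} \to 0, \quad \sin(1/x_n) = \sin(2n\pi) = 0, \qquad y_n = \frac{1}{2n\pi + \pi/2} \to 0, \quad \sin(1/y_n) = \sin\!\left(2n\pi + \tfrac{\pi}{2}\right) = 1.$$
Since the two sequences give different limiting values, $$\displaystyle \lim_{x\to 0}\sin(1/x)$$ does not exist.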
We leave it as an exercise to the student to show that
$$f(x) = \begin{cases} \displaystyle x\sin\left(\frac{1}{x}\right), & x \ne 0 \\[3mm] 0, & x = 0 \end{cases}$$ is a continuous function for all $$x\in {\Bbb R}$$.
1. Kiru Sengal says:
🙂 There are many other ways a function can be discontinuous.
The list of discontinuities should be sufficient for nice functions (i.e., “elementary” functions aka closed form expressions involving polynomials, trig, log, etc.).
|
2019-06-15 22:43:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9314476251602173, "perplexity": 208.77867003493492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997501.61/warc/CC-MAIN-20190615222657-20190616004657-00086.warc.gz"}
|
https://dpc.pw/
|
# Dawid Ciężarkiewicz aka dpc
contrarian notes on software engineering, Open Source hacking, cryptocurrencies etc.
## My case against mocking frameworks
Since “mocks” and “mocking” is somewhat vague, and nuances between mocks, fakes, mock objects, test doubles, spies, etc. are confusing, let's start with what I mean by “mocking”.
What I mean by it is intercepting and/or substituting internal and often arbitrary function calls to test your code.
A great example of a mocking approach is a mocking framework like Mockito, where the tests look like this:
## “Objects” (in OOP) are just confusing people
I have recently discovered a very good youtube channel about business software development: CodeOpinion. Very concisely yet informative. I highly recommend it. Don't worry – it's OOP-rant free, and all about mainstream accepted good practices, not some wacko OOP-hater like me.
Anyway, while going through some of the episodes I have found a good example of how “Objects” (in OOP) are a metaphor that is just confusing people.
Please watch the AVOID Entity Services by Focusing on Capabilities – it's just 7:30:
## You can trust science, but can you trust scientists?
BTW. I highly recommend the Statistical Rethinking 2022 YouTube lectures by the same author. I started watching them recently and I am enjoying them a lot (I have finished the first two videos so far).
## What I'd like you to know about making your software fast
I've been optimizing a lot of projects over the years, and here are some tips that I'd like to share with you
## Adding parallelism to your Rust iterators with dpc-pariter
TL;DR: I published a parallel processing library for Rust that works differently than rayon and is especially useful in cases where rayon falls a bit short. I'll talk a little bit about it, and show you when it can be helpful.
## “Data-Oriented Programming Unlearning objects” – short review
I've recently purchased Data-Oriented Programming Unlearning objects by Yehonathan Sharvit, and I would like to write out some notes I captured while reading it.
I don't like technical/programming writing using personal narration. It just seems fluffy to me. A lot of words, with very low information density. But that's just my opinion – maybe other people enjoy it.
The book starts with a straw-man of a naive (but not uncommon to see in reality) OOP code. Then the author goes on to explain what's wrong with it. I agree with most of the critique. The core of the proposed alternative approach is to separate code from the data, which I also advocate. I actually agree with a lot of things described in the book, so from now on, I'll focus on my disagreements.
The book advocates representing all the data separated from the code as a recursively nested map of free-form objects keyed by strings. Basically, all the data is one big JSON object. And I can't agree with that.
I guess at the root of the disagreement lies dynamic vs static typing again. If you are going to use a dynamically typed language... you might as well do what the book advocates. Shove everything in JSON maps. In modern programming languages with a nice type system (Rust, OCaml, etc.), with generics, type inference, and structural pattern matching, such an approach doesn't make sense. You'd be throwing away tons of benefits in exchange for very little.
The book unsurprisingly uses JavaScript for code examples, and the author mentions coming up with DOP from a Clojure background. I don't want to get too deep into the dynamic vs static typing debate, but IMO: the way the type system was implemented in legacy languages like Java is terrible, and I can't blame the industry for eventually switching to Ruby, Python, JS, etc., but nowadays statically typed languages like TypeScript, Rust, etc. are way better, and static typing wins over dynamic typing.
My preference and advice: if you are in the static typing camp, ignore these untyped maps, and use types instead. A lot of other things described in the book will work with types just as well. The author mentions the disadvantages of using static types, and there's a lot of truth in it. But it does not change my opinion.
Generally, while reading the book it was clear to me that the whole approach is optimized for backend request-response JSON data shoveling. My background is in implementing much more complex software of various types, with tight performance and complexity requirements, alongside the usual JSON shoveling between HTTP endpoints and a database. This DOP approach is going to break down quickly when challenged by more complex (technically or socially) scenarios, IMO.
While I am an advocate of functional programming and immutability, the author severely downplays and/or ignores the downsides and limitations of immutability, persistent data structures, functional programming, etc. Yes, mutability is a sharp tool and it's best to opportunistically avoid it, but it's still a necessary and/or worthwhile tradeoff in a lot of cases.
In chapter 13: Polymorphism without objects, a multimethod approach to dynamic polymorphism was presented and it seems very clunky. In my own (very data-oriented) programming approach, I just use interfaces a lot, in the code part of my software. And if I had to implement a dynamic call based on the value of some “type” field, I would just use a case statement, and I think the author is making a mistake similar to the one OOP developers make by dismissing it. As a concrete sketch, see the example below.
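Here is what I mean, as a sketch of my own (not code from the book): dynamic polymorphism through a plain interface (a Rust trait), and the “case statement” alternative as a match over a tagged enum. Both keep the data separate from any class hierarchy.

```rust
// Interface-style dynamic polymorphism: a small trait with several impls.
trait AreaShape {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }
struct Rect { width: f64, height: f64 }

impl AreaShape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
}
impl AreaShape for Rect {
    fn area(&self) -> f64 { self.width * self.height }
}

// "Case statement" style: dispatch on a type tag with match.
enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}

fn main() {
    let dynamic: Vec<Box<dyn AreaShape>> =
        vec![Box::new(Circle { radius: 1.0 }), Box::new(Rect { width: 2.0, height: 3.0 })];
    let total_dyn: f64 = dynamic.iter().map(|s| s.area()).sum();

    let tagged = vec![Shape::Circle { radius: 1.0 }, Shape::Rect { width: 2.0, height: 3.0 }];
    let total_match: f64 = tagged.iter().map(area).sum();

    assert!((total_dyn - total_match).abs() < 1e-9);
    println!("total area: {total_match}");
}
```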
Anyway, to sum up: “Data-Oriented Programming Unlearning objects” is an interesting book insofar as it breaks out of OOP dogma. It presents a complete set of ideas and tools to design and implement software avoiding OOP dogma. I don't agree with some ideas in it – in particular, abandoning the benefits of static typing by representing everything as a JSON object – but nevertheless, I think it's a worthwhile read for people looking for some food for thought and exploring alternatives to OOP.
## Making Open Source economy more viable with dual license collectives
I've been a FOSS enthusiast since around 1998 when I discovered Linux when I was about 13 years old. It was truly one of the definitive events in my life. There is something extraordinary and even spiritual in the idea of a global, voluntary, collaborative yet competitive network of people sharing their knowledge and work and building together something that can challenge even the largest commercial organizations.
However, it's clear that there's a huge problem with FOSS:
https://imgs.xkcd.com/comics/dependency.png
and the older I get, the more this practical reality bothers me.
While sharing work freely is noble and leads to great overall efficiencies, the pragmatic reality is that FOSS developers live in a material world, a market economy, and need to make money somehow. Many approaches to funding FOSS development have been tried, all with rather unsatisfying results.
Here is an idea that has been sitting in my mind for more than a year now, and I still think it might work. I finally decided to write it down, so people can tell me if it has already been tried or why it is bad. I almost never have any truly unique idea, so I bet someone will send me a link proving that I just suck at googling stuff. If you think it's good – feel free to give it a try. After all, ideas are cheap and execution is where the value is.
## Data-oriented, clean&hexagonal architecture software in Rust – through an example project
This post and the work behind it try to achieve multiple goals:
## What OOP gets wrong about interfaces and polymorphism
I often receive feedback on my general OOP critique from people somewhat sympathetic to my message, suggesting that since OOP is vague and not precisely defined, it would be more productive to talk about its core tenets/features separately and drop the “OOP” name altogether.
I've also received an email (hi Martin!) asking, among other things, about my opinion on the usage of interfaces in OOP. So instead of writing a private response, I'm just going to dwell a bit more on it in a blog post.
BTW. I'm continuing to read #oop #books to gather more insight and arguments. Currently, I am going through Object-Oriented Software Construction by Bertrand Meyer. The book is huge and presents the case for OOP in-depth, which is perfectly fulfilling my needs. And on top of it – it's old, so it gives me a lot of insight on “what were they thinking?!” ;). Hopefully, I'll get to a post about it in the not-too-distant future, but I will be referring to it already in this post.
Anyway... about the polymorphism and stuff...
https://brilliant.org/problems/can-we-use-fermats-little-theorem/
Don't try to cheat.
Find the remainder when $${ 5 }^{ 561 }$$ is divided by $$17$$
Note: Do not use a calculator. Don't cheat.
http://mathhelpforum.com/calculus/278322-transformations.html
1. Transformations
Hi,
When using the dash method for determining transformations, when it says a translation of +2 units parallel to the x axis, does this mean +2 units from the y axis?
So would it be : (x,y) --> (x, y+2)?? Similarly to this, would a dilation of 2 parallel to the x axis, mean dilation 2 from the y axis? (x,y) --> (x, 2y) ?
Could someone please clarify the use of the words parallel or from the x or y axis in regards to the use of this dash method.
Thank you
- ChanelSapphire
2. Re: Transformations
Originally Posted by ChanelSapphire
When using the dash method for determining transformations, when it says a translation of +2 units parallel to the x axis, does this mean +2 units from the y axis?
So would it be : (x,y) --> (x, y+2)?? Similarly to this, would a dilation of 2 parallel to the x axis, mean dilation 2 from the y axis? (x,y) --> (x, 2y) ?
It is a curse of mathematical terminology that there is no international standard.
The transformation $(x,y)\to(x,y+2)$ simply takes a graph two units up (a positive vertical transformation).
Because the transformed graph "looks" exactly the same with respect to the x-axis but "moved" two units up, we would say that it is parallel to the x-axis.
3. Re: Transformations
Originally Posted by Plato
It is a curse of mathematical terminology that there is no international standard.
The transformation $(x,y)\to(x,y+2)$ simply takes a graph two units up (a positive vertical transformation).
Because the transformed graph "looks" exactly the same with respect to the x-axis but "moved" two units up, we would say that it is parallel to the x-axis.
Thanks!!
So for this question, would this be correct:
(x,y) --> (x-1,4y+2) ?
so x'= x-1 and y'=4y+2
x= x'+1 y= (y'-2)/4
so the transformed graph would be y = 4(x-1)^3 +2
Is this the correct interpretation of translation parallel to the x and y axis?
4. Re: Transformations
Originally Posted by ChanelSapphire
.
Doing any sort of "bumping" does nothing but aggravate.
Originally Posted by ChanelSapphire
Thanks!!
So for this question, would this be correct:
(x,y) --> (x-1,4y+2) ?
so x'= x-1 and y'=4y+2
x= x'+1 y= (y'-2)/4
so the transformed graph would be y = 4(x-1)^3 +2
Is this the correct interpretation of translation parallel to the x and y axis?
This particular subject has no universally accepted vocabulary. We have no way to know the background material given to you by the instructor/textbook. That is to say that we cannot give you a tutorial.
I have a suggestion for a reference textbook that gives excellent coverage of transformations.
Any good mathematics library should have Modern Geometries by James R. Smart.
https://math.stackexchange.com/questions/399592/v-w-1-oplus-cdots-oplus-w-k-if-and-only-if-dimv-sum-dimw-i
# $V=W_1\oplus\cdots\oplus W_k$ if and only if $\dim(V)=\sum{\dim(W_i)}$
If $W_1,\dots, W_k$ are subspaces of a finite dimensional vector space $V$ such that $W_1+\cdots+W_k=V$, and I want to show that $V=W_1\oplus\cdots\oplus W_k$ if and only if $\dim(V)=\sum{\dim(W_i)}$, then will what's displayed below suffice?
$$V=W_1\oplus\cdots\oplus W_k$$ $$\iff$$ $$V=W_1+\cdots+W_k~\text{and}~W_i \cap (W_1 + \ldots + W_{i-1} + W_{i+1} + \ldots + W_k) = \{0\}$$ $$\iff$$ $$\text{The subspaces } W_i \text{ are independent; that is, no sum } w_1+\cdots+w_k \text{ with } w_i \in W_i \text{ is zero except the trivial sum.}$$ $$\iff$$ $$\mathscr{B}=\{\beta_1,\dots,\beta_k\}~\text{is a basis for } V\text{, where } \beta_i \text{ is a basis for } W_i$$ $$\iff$$ $$\dim{V}=\dim{(W_1+\cdots+W_k)}=\dim{W_1}+\cdots+\dim{W_k}=~\mid\beta_1\mid+\cdots+\mid\beta_k\mid=k$$ $$\iff$$ $$\overset{\text{Does this belong here?}}{\dim{\mathscr{B}}=k}$$
• you are supposing that $W_i$ $i\in {1,2,..,k}$ has dimension $1$ wich must not be true – sigmatau May 22 '13 at 20:50
• Which denotes cardinality then? – Trancot May 22 '13 at 20:52
• Yes, that is correct. $|A|$ denotes the cardinality of set $A$. If $\beta_i$ is a basis for $W_i$, then $|\beta_i|=\dim W_i$. – Jared May 22 '13 at 20:53
• The second line shows a big misunderstanding of the direct sum... it is not equivalent to being a direct sum. For instance, suppose $A\cap B=\{0\}$. Then $A\cap A\cap B=\{0\}$ also but the sum of $A+A+B$ is not direct. The correct version is: $(\sum_{i\neq j} W_i)\cap W_j=\{0\}$. – rschwieb May 22 '13 at 21:08
• Try to reduce your usage of notation. In particular avoid using $\iff$ between statements that are not strictly equivalent. None of the equivalences in your post actually is an equivalence of mathematical statements. – Martin May 22 '13 at 21:12
Your first equivalence is false: when $k \geq 3$, $\bigcap_{i=1}^k W_i = \{0\}$ is too weak to imply that the spaces $W_1,\ldots,W_k$ are independent, i.e., that $W_1 + \ldots + W_k = W_1 \oplus \ldots \oplus W_k$. For example, take $V = \mathbb{R}^2$, $W_1 = \langle (1,0) \rangle$, $W_2 = \langle (0,1) \rangle$, $W_3 = \langle (1,1) \rangle$. This is one of two or three subtle traps in linear algebra that even professional mathematicians can fall into if they're not careful.
Any of the following is an acceptable definition of independent subspaces (and they are equivalent):
(i) For all $1 \leq i \leq k$, $W_i \cap (W_1 + \ldots + W_{i-1} + W_{i+1} + \ldots + W_k) = \{0\}$.
(ii) If for all $i$ we choose a nonzero $v_i \in W_i$, then $\{v_1,\ldots,v_k\}$ is a linearly independent set.
(iii) If for all $i$ we choose a linearly independent set $S_i \subset W_i$, then $S = \bigcup_{i=1}^k S_i$ is a linearly independent set.
Note that if you believe that (iii) is equivalent to the sum being a direct sum then you've got the equivalence you're asking about, so I suggest you concentrate instead on showing these conditions are equivalent to your given definition of internal direct sums.
• @Barisa: your new version is improved. If you take your first $\iff$ as the definition of the sum being direct, it seems to me that you should explain why the third $\iff$ holds (it's hard for me to evaluate the second $\iff$ because I don't know which definition of "independent subspaces" you're taking). If that $\iff$ can be taken as known, then your argument is essentially complete. – Pete L. Clark May 22 '13 at 23:14
http://math.stackexchange.com/questions/170788/separated-schemes-and-unicity-of-extension
# Separated schemes and unicity of extension
In point set topology, we have the following result, which is easily proved.
Theorem. Let $Y$ be Hausdorff space and $f,g:X \to Y$ be continuous functions. If there exists a set $A\subset X$ such that $\bar{A} = X$ and $f|_A = g|_A$, then $f=g$.
I was trying to understand what would be the natural generalization of this fact in the category of schemes. We know that the correct analogous of a Hausdorff space is a separated scheme. So I was thinking in a statement like this:
"Let $Y$ be a separated scheme and $f,g:X \to Y$ be morphisms of schemes. If there exists a set $A\subset X$ such that $\bar{A} = X$ and $f|_A = g|_A$, then $f=g$."
The first problem is that we must be careful with this restriction "$f|_A$", since $f$ is a morphism and I want to consider the morphism of sheaves also. Then I saw that Liu's book on Algebraic Geometry has the following statement:
"Let $Y$ be a separated scheme, $X$ a reduced scheme, and $f,g:X \to Y$ morphisms of schemes. If there exists a dense open subset $U$ such that $f|_U=g|_U$, then $f=g$."
Now this makes sense, since we are dealing with open subsets now. But I still find this result too restrictive. So I came up with this:
"Let $Y$ be a separated scheme, $X$ a reduced scheme, and $f,g:X \to Y$ morphisms of schemes. If there exists a morphism $\varphi:S \to X$ such that $\varphi(S)$ is dense in $X$ and $f\circ \varphi = g\circ \varphi$, then $f=g$."
It's easy to see that Liu's proof of the result concerning only the open set also applies to this context. Finally, let's go to the questions:
1. Is this really the best generalization? Are there any other results in this direction that are at least slightly different?
2. I can see where the "reduced" hypothesis enters in the proof, but I found it a little strange. Is it just a technical point or can it be "understood" in some sense? Maybe counterexamples of this fact when this hypothesis isn't valid would help to clarify, but I couldn't think of any.
P.S. Sorry for the bad English.
To answer your question (1): Let $X=\mathrm{Spec}(B)$ be such that $B\to O_X(U)$ is not injective for some dense open subset $U$ (this can't happen if $B$ is reduced), let $Y=\mathrm{Spec}\mathbb Z[t]$. Fix an element $b\in B$ non-zero such that $b|_U=0$.
Let $\varphi: \mathbb Z[t] \to B$ be any ring homomorphism and let $\psi : \mathbb Z[t] \to B$ be defined by $\psi(t)=\varphi(t)+b$. Then the corresponding morphisms $f, g : X\to Y$ coincide on $U$ but are not equal.
Standard example of such $B$: $B=k[x,y]/(x^2, xy)$ and $U=X\setminus \{ (0,0)\}=D(y)$.
For your question (1), you can replace the hypothesis $X$ reduced by $S\to X$ schematically dominant. When $X$ is noetherian, this means that the image of $S\to X$ contains the associated points of $X$ (= maximal points of $X$ if the latter is reduced).
https://socratic.org/questions/how-do-you-solve-rational-equations-7-4x-3-x-2-1-2x-2#220352
# How do you solve rational equations 7/(4x) - 3/x^2 = 1/(2x^2)?
Feb 2, 2016
Multiply every term by the least common denominator.
#### Explanation:
The LCD (Least Common Denominator) is $4 {x}^{2}$. Multiplying both sides of the equation by $4 {x}^{2}$ and cancelling gives:
7(x) - 3(4) = 1(2)
7x - 12 = 2
7x = 14
x = 2
Your solution is x = 2
http://sioc-journal.cn/Jwk_hxxb/CN/abstract/abstract346425.shtml
Synthesis of Tetrahydropyrrolo/indolo[1,2-a]pyrazines by Asymmetric Hydrogenation of Heterocyclic Imines
1. a Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023;
b University of Chinese Academy of Sciences, Beijing 100049
• Received: 2017-11-02 Published: 2018-01-09
• Corresponding author: Zhou Yong-Gui, E-mail: ygzhou@dicp.ac.cn
• Funding:
Supported by the National Natural Science Foundation of China (Nos. 21532006, 21690074), the Frontier Science Program of the Chinese Academy of Sciences (No. QYZDJ-SSW-SLH035) and the Dalian Bureau of Science and Technology (No. 2016RD07).
Synthesis of Tetrahydropyrrolo/indolo[1,2-a]pyrazines by Enantioselective Hydrogenation of Heterocyclic Imines
Hu Shu-Bo a,b, Chen Mu-Wang a, Zhai Xiao-Yong a, Zhou Yong-Gui a
1. a Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023;
b University of Chinese Academy of Sciences, Beijing 100049
• Received:2017-11-02 Published:2018-01-09
• Contact: Zhou Yong-Gui, E-mail: ygzhou@dicp.ac.cn (DOI: 10.6023/A17110476)
• Supported by:
Project supported by the National Natural Science Foundation of China (Nos. 21532006, 21690074), the Chinese Academy of Sciences (No. QYZDJ-SSW-SLH035) and Dalian Bureau of Science and Technology (No. 2016RD07).
1,2,3,4-Tetrahydropyrrolo[1,2-a]pyrazines are an important motif due to their biological activities and widely existing in natural products. Notably, the substituent and the absolute configuration are important for the medicinal efficacy. Thus, the synthesis of chiral tetrahydropyrrolo[1,2-a]pyrazines has attracted much attention of scientists. Most synthetic methods utilized chiral starting materials or auxiliaries. Kinetic resolution was an alternative way to give chiral tetrahydropyrrolo[1,2-a]pyrazines. The first cata-lytic asymmetric synthetic method was developed in 2011 by Li and Antilla through a chiral phosphoric acid-catalyzed asymmetric intramolecular aza-Friedel-Crafts reaction of aldehydes with N-aminoethylpyrroles in high enantiocontrol level. Subsequently, the sequential aerobic oxidation-asymmetric intramolecular aza-Friedel-Crafts reaction between N-aminoethylpyrroles and benzyl alcohols for the synthesis of tetrahydropyrrolo[1,2-a]pyrazines was realized using chiral bifunctional heterogeneous materials composed of Au/Pd nanoparticles and chiral phosphoric acids. The asymmetric hydrogenation as an efficient way has been successfully applied to synthesize the kind of chiral amines. In 2012, Our group achieved the asymmetric hydrogenation of 1-substituted pyrrolo[1,2-a]pyrazines via a substrate activation strategy. Recently, we reported the direct asymmetric hydrogenation of 3-substituted pyrrolo[1,2-a]pyrazines in up to 96% ee values. Considering their impressive significance, herein, we successfully hydrogenated 3,4-dihydropyrrolo[1,2-a]pyrazines and 3,4-dihydroindolo[1,2-a]pyrazines with up to 99% yield and 95% ee. The reaction features mild condition, high enantioselectivity and high atom-economy. The typical procedure for asymmetric hydrogenation is as follows:A mixture of[Ir(COD)Cl]2 (3.0 mg, 0.0045 mmol) and the ligand Cy-WalPhos (6.6 mg, 0.0099 mmol) was stirred in toluene (1.0 mL) at room temperature for 5 min in the glove box. Then the solution was transferred to the vial containing the substrate 3,4-dihydropyrrolo[1,2-a]pyrazines (0.3 mmol) together with toluene (2.0 mL). The vial was taken to an autoclave and the hydrogenation was conducted at 40℃ as well as at a hydrogen pressure of 500 psi for 48 h. After carefully releasing the hydrogen, the autoclave was opened and the toluene was evaporated in vacuo. The residue was purified by column chromatography to afford the corresponding chiral tetrahydropyrrolo[1,2-a]pyrazines.
https://docs.blender.org/manual/ru/latest/addons/import_export/scene_dxf.html
Reference
Category
Import-Export
File ‣ Import/Export ‣ AutoCAD DXF
Import
DXF layers are reflected as Blender groups. This importer uses a general purpose DXF library called «dxfgrabber».
DXF Type Mapping
To be as non-destructive as possible the importer aims to map as many DXF types to Blender curves as possible.
DXF to Curves
• LINE as POLYLINE curve (with the option to merge connecting lines).
• (LW)POLYLINE, (LW)POLYGON as POLYLINE curve if they have no bulges else as BEZIER curve.
• ARCs, CIRCLEs and ELLIPSEs as BEZIER curves.
• HELIXes (3D) as BEZIER curves.
DXF to Meshes
• MESH is mapped to a mesh object with a Subdivision Surface modifier, including the edge crease.
• POLYFACEs and POLYMESHes are imported to a mesh object.
• 3DFACEs, SOLIDs, POINTs are imported into one combined mesh object per layer called layername_3Dfaces.
• Hatches
Properties
Merge Options
Blocks As
DXF Blocks can be imported as linked objects or group instances. Linked objects use parenting for DXF sub-blocks (blocks in blocks).
Parent Blocks to Bounding Boxes
Draw a bounding box around blocks.
Merged Objects
Since Blender (v2.71) is pretty slow at adding objects the user might want to merge similar DXF geometry to one object.
By Layer
Produces one object per layer; if there is mesh, curve, lamp, text data on one layer one object per layer and per Blender object.
By Layer and DXF Type
The second not only differentiates between Blender data types but also DXF types, such as LWPOLYLINE and POLYLINE.
By Layer and Closed No-bulge Polygons
Closed polylines with no bulge, that is no curved edges, can be merged to one single mesh. This makes sense when the DXF polylines have an extrusion and/or an elevation attribute which basically describes a location/rotation/scale transformation. If this merge option is chosen, line thickness settings will be ignored/disabled.
By Layer and DXF-Type and Blocks
For DXF files with a block being referenced many times, this option allows to insert the same block many times with one instanced-face object instead of with one object for each time the block needs to be inserted. Unfortunately this works only for block inserts that are uniformly scaled. Non-uniformly scaled block inserts are being imported as defined in Blocks As.
Combine LINE Entities to Polygons
Separated lines in DXF might be merged to one consecutive Blender poly curve. Similar to Remove Doubles but for curves.
Line Thickness and Width
Represent Line Thickness/Width
DXF line attributes thickness and width have an effect on line in Z and X/Y direction respectively. A straight line might be turned to a cube by its attributes for instance. Therefore in Blender these attributes are represented with curve extrusion, bevel and taper objects.
Merge by Attributes
If both Merged Objects and Represent Line Thickness/Width are activated the object merging needs to be extended to separate all lines width different thickness and width. With Merge by Attributes this separation option is also available without the actual representation of line thickness and width.
Optional Objects
Import TEXT
(TEXT, MTEXT)
Import LIGHT
Including support for AutoCAD colors.
Export ACIS Entities
Export NURBS 3D geometry (BODY, REGION, PLANESURFACE, SURFACE, 3DSOLID) to ACIS-Sat files, since this is the format AutoCAD stores NURBS to DXF. You are going to be notified about the amount of stored .sat/.sab files.
View Options
Display Groups in Outliner(s)
Switch the Outliner display mode to GROUPS (DXF layers are mapped to groups).
Import DXF File to a New Scene
Todo.
Center Geometry to Scene
Center the imported geometry to the center of the scene; the offset information is stored as a custom property to the scene.
Georeferencing
Important: DXF files do not store any information about the coordinate system / spherical projection of its coordinates. Best practice is to know the coordinate system for your specific DXF file and enter this information in the DXF importer interface as follows:
Pyproj
Installation: Download (Windows, macOS) Pyproj and copy it to your
AppData/ApplicationSupport Folder/Blender/2.91/scripts/modules/.
In case you need to compile your own binary refer to this post on Blender Artists.
Pyproj is a Python wrapper to the PROJ library, a well known C library used to convert coordinates between different coordinate systems. Open source GIS libraries such as PROJ are used directly or indirectly by many authorities and therefore can be considered to be well maintained.
If Pyproj is available the DXF importer shows a selection of national coordinate systems but lets the user also to enter a custom EPSG / SRID code. It also stores the SRID as a custom property to the Blender scene. If a scene has already such a SRID property the coordinates are being converted from your DXF file to target coordinate system and therefore you must specify a SRID for the DXF file. If no SRID custom property is available the scene SRID is by default the same as the DXF SRID.
No Pyproj
In case Pyproj is not available the DXF importer will only use its built-in lat/lon to X/Y converter. For conversion the «transverse Mercator» projection is applied that inputs a lat/lon coordinate to be used as the center of the projection. The lat/lon coordinate is being added to your scene as a custom property. Subsequent imports will convert any lat/lon coordinates to the same georeference.
Important: So far only lat/lon to X/Y conversion is supported. If you have a DXF file with Euclidean coordinates that refer to another lat/lon center the conversion is not (yet) supported.
Rules of thumb for choosing an SRID
if you have your data from OpenStreetMap or some similar GIS service website and exported it with QGIS or ArcGIS the coordinates are most likely in lat/lon then use WGS84 as your SRID with Pyproj or «spherical» if Pyproj is not available. For other DXF vector maps it’s very likely that they use local / national coordinate systems.
Open the DXF with a text editor (it has many thousands of lines) and make an educated guess looking at some coordinates. DXF works with «group codes», a name Autodesk invented for «key» as in key/value pairs. X has group code 10, Y has 20, Z has 30. If you find a pattern like:
10, newline, whitespace, whitespace, NUMBER, newline,
20, newline, whitespace, whitespace, NUMBER, newline,
30, newline, whitespace, whitespace, NUMBER
then NUMBER will be most likely your coordinates. Probably you can tell from the format and/or the range of the coordinates which coordinate system it should be.
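As a rough illustration of that 10/20/30 pattern (my own sketch, not part of the Blender add-on): a tiny scanner that walks a DXF text file as alternating group-code/value line pairs and collects X/Y/Z coordinate values so you can eyeball their range. The file name is a hypothetical placeholder.

```rust
// Minimal sketch: DXF is a sequence of (group code line, value line) pairs.
// Group codes 10/20/30 carry X/Y/Z coordinates; printing their ranges helps
// guess which coordinate system a file uses.
use std::fs;

fn main() -> std::io::Result<()> {
    let text = fs::read_to_string("drawing.dxf")?; // hypothetical input file
    let mut lines = text.lines().map(str::trim);
    let (mut xs, mut ys, mut zs) = (Vec::new(), Vec::new(), Vec::new());

    while let (Some(code), Some(value)) = (lines.next(), lines.next()) {
        let bucket = match code {
            "10" => &mut xs,
            "20" => &mut ys,
            "30" => &mut zs,
            _ => continue,
        };
        if let Ok(v) = value.parse::<f64>() {
            bucket.push(v);
        }
    }

    for (name, vals) in [("X", &xs), ("Y", &ys), ("Z", &zs)] {
        if let (Some(min), Some(max)) = (
            vals.iter().cloned().reduce(f64::min),
            vals.iter().cloned().reduce(f64::max),
        ) {
            println!("{name}: {} values, range {min} .. {max}", vals.len());
        }
    }
    Ok(())
}
```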
Export
Supported Data
• Mesh face: POLYFACE or 3DFACE
• Mesh edge: LINE
• Modifier (optionally)
Unsupported Data
• Mesh vertex: POINT
• Curve: LINEs or POLYLINE
• Curve NURBS: curved-POLYLINE
• Text: TEXT or (wip: MTEXT)
• Camera: POINT or VIEW or VPORT or (wip: INSERT(ATTRIB+XDATA))
• Light: POINT or (wip: INSERT(ATTRIB+XDATA))
• Empty: POINT or (wip: INSERT)
• Object matrix: extrusion (210-group), rotation, elevation
• 3D Viewport: (wip: VIEW, VPORT)
• Instancing vert: auto-instanced or (wip: INSERT)
• Instancing frame: auto-instanced or (wip: INSERT)
• Instancing group: auto-instanced or (wip: INSERT)
• Material: LAYER, COLOR and STYLE properties
• Group: BLOCK and INSERT
• Parenting: BLOCK and INSERT
• Visibility status: LAYER_on
• Frozen status: LAYER_frozen
• Locked status: LAYER_locked
• Surface
• Meta
• Armature
• Lattice
• IPO/Animation
https://cs.stackexchange.com/questions/114853/i-came-up-with-a-way-to-modify-dijkstras-algorithm-to-handle-graphs-with-negati
# I came up with a way to modify Dijkstra's Algorithm to handle graphs with negative edge weighs [duplicate]
1. Add a constant $$c\geq |w_{min}|$$ to each edge of $$G$$, so that each edge now has non-negative weight.
2. Run Dijkstra's algorithm
Can anyone tell me if this is viable or if it fails?
Let $$G(V, E)$$ denote a graph with a cost function $$c:e\in E\mapsto \mathbb{Z}$$, i.e., both positive and negative whole numbers, and no negative cycles. Let us assume there is an edge $$e\langle v_i, v_j\rangle$$ with the most negative cost, i.e., $$c(e_{ij}) <0$$, so that $$w_{min}=c(e_{ij})$$.
Let $$G'(V, E)$$ denote a graph where the cost of each edge is now $$c'(e_{ij})=c(e_{ij}) + |w_{min}|, \forall v_i, v_j\in V$$. Certainly, there are no negative edge costs in $$G'$$.
Running Dijkstra on $$G'$$ to find the shortest path between two arbitrary vertices $$s$$ and $$t$$ would return a path $$\pi:\langle s=v_0, v_1, v_2, ..., v_n=t\rangle$$ with a cost equal to $$\sum_{i=0}^{n-1}c'(e_{i, i+1})=\sum_{i=0}^{n-1}\left(c(e_{i,i+1})+|w_{min}|\right)=n\times |w_{min}|+\left(\sum_{i=0}^{n-1}c(e_{i, i+1})\right)$$ – please note the usage of $$c'$$ and $$c$$ in these expressions. In other words, the cost of a path in $$G'$$ penalizes paths with more edges by the constant $$|w_{min}|$$ per edge, and in the end Dijkstra might very easily return a different path.
It does not matter whether you afterwards correct the cost of the path returned by running Dijkstra on $$G'$$: the path itself may simply be different.
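A concrete counterexample (my own addition, not part of the original answer) makes this explicit: take vertices $$s, a, t$$ with edges $$s\to t$$ of cost $$1$$, $$s\to a$$ of cost $$2$$ and $$a\to t$$ of cost $$-2$$, so $$w_{min}=-2$$ and we add $$|w_{min}|=2$$ to every edge. In $$G$$ the shortest $$s$$–$$t$$ path is $$s\to a\to t$$ with cost $$0$$. In $$G'$$ the two paths cost $$3$$ and $$4$$ respectively, so Dijkstra returns $$s\to t$$ – a different path – and subtracting the added constants afterwards cannot fix that.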
https://newproxylists.com/tag/sphere/
## world of darkness – How does the Prime Sphere affect gaining and storing Quintessence in M20?
Previous answers did not satisfy me so I tried to look into the matter more thoroughly, asking a few friends who are in the know and trying to cite sources that clarify things as much as possible. Of particular help was Antonios “Rave-n” Galatis, co-developer in the making of M20, who helped me put all this together as clear and well-defined as possible.
## For starters let’s clarify a few things:
As a Mage you can Meditate near a Node to
(M20 -Meditation p.281)
(Quintessence Rating being equal to your Avatar Background)
The process of drawing Quint. from a source (Node/Tass/etc) is also called “Channeling” (M20 – How Do You DO That, ‘Channeling Quintessence’ p.43) and, evidently, does not decrease/drain the actual Node’s reserve.
This particular method via Meditation is simply your own Avatar refreshing its full potency, its natural energy reserves (M20 – Quintessence p.332). Furthermore, both in the latter and in (M20 – How Do You DO That, ‘The Basics’ p.42) it is made clear that you can Channel more Quintessence into your Pattern than that Rating through a Prime 1 Effect. Essentially, we come to the result that
– (Mages have a “default” amount of Quintessence in their pattern which depends on the potency of their Avatar (as per the relevant trait)
It should also be mentioned that “Channeling” actually has a somewhat broader meaning which, usually, amounts to “moving” Quintessence from a source to a recipient, as the previous sources mention.
So, to answer the actual items the question consists of:
• Prime1 states that without it “a mage can’t absorb Quint. beyond their Avatar Rating”. Does that mean the previous limit is no longer relevant or something else?
Yes. A Prime 1 Effect indeed allows you to gain (and store) more Quintessence than your Quintessence/Avatar Rating from a Node. However, that Rating “limit” still remains relevant because it’s the natural “energy baseline” you replenish your reserves to when Meditating at a Node, as described above. Furthermore the Avatar/Quintessence Rating defines *Note: In spite of the general “Rank 1 = Perception” paradigm of Sphere Effects, this seems rather certain due to having 2 sources, mentioned above.*
• Prime1 also states you can “perceive and channel Quintessence from Nodes/Tass/Wonders”. Can’t you already get Quint. from Nodes via Meditating (the Skill) on them? Does this represent another way to do the same, possibly in less time or does “channelling” imply manipulation of Quintessence without absorbing it into your Pattern?
See previous element. Moreover, the “Tass/Wonders” part of the text seems to imply the overall possibilities in studying the Prime Sphere so it’s a bit confusing. Which brings us to…
• Prime3, the one which causes the most confusion, states that with it you can “draw both free and raw quint from Nodes/Tass/Junctures etc”. This directly correlates with the previous point as they appear, at least, to have some common ground, which seems irregular given their different level. What does each one do?
We can conclude that Prime 3 is the required rank to Channel from Tass/Wonders/Junctures and, probably, every non-Node source, into Effects or Patterns (including your own). This deduction comes from both the very description of the Rank and a relevant table in Common Magickal Effects (M20 – ‘Quintessence Quintessence Energy’ p.510), where ‘Absorb’ probably means to Channel into yourself.
Also, something that should be mentioned is that if a Mage simply wishes to use the Quintessence reserves in a Tass, they don’t need to roll for an Effect if the Tass is on their person, and even if it is at a distance a Prime 2 roll suffices. That part is verified by (M20 – How Do You DO That, ‘Accessing Tass or Periapt Quintessence’ p.45)
## world of darkness – Usage of Prime Sphere M20
We started an M20 campaign and I created a character that focuses on Prime and Life Spheres. Life is a very useful sphere and can have many good effects on its own. Prime on the other hand seems a little tricky to me.
The character concept is of a hunter that hunts down and kills Technocracy affiliated people. He is an expert in Melee, Firearms and Do. He has a Wonder background of twin Revolvers that produce infinite bullets via Matter magic (they need to be fed 1 point of Quintessence every 4 clips).
The only usages I have of my Prime sphere at the moment are:
• Feeding the revolvers
• Enchanting the bullets with quintessence for aggravated dmg
• Creating a lightsaber out of raw quintessence for use with my Weapon Art Do special skill
• Enchanting my fists with Quintessence to deal better damage via Do/Martial Arts techniques
But this seems too lackluster and there are no combinations with Life magic. Are there any other usages that I’m not seeing? I really like the idea of a character that plays with magic in its purest form but I can’t seem to find ways to make it awesome…
• Prime 3/Life 3
• Melee/Firearms 3 and Do 2
• Node BG 5
• Avatar BG 4
• Buffed up physical stats using Life 3
• Cyclic Mage merit (we try to adventure while we are in our zenith)
I have access to a Sanctum, Resources 5 and a Mansion which is built over our Node (also contains the Sanctum). My team has Time3/Corre3, Mind2/Prime1/Life3, Mind3/Forces3.
PS: Yes Revolvers do indeed have clips in MTA
## dnd 5e – How does Freezing Sphere delayed explosion interact with simple coverings?
There are spells which can create a bead-like object that will explode after a time. Otiluke’s Freezing Sphere is the one I am particularly interested in (incidentally, for the same bard my earlier question about Destructive Wave was for), as it is rather safer to handle than Delayed Blast Fireball.
Relevant snippets from spell description:
A frigid globe of cold energy streaks from your fingertips to a point of your choice within range, where it explodes in a 60-foot-radius sphere.
(…)
You can refrain from firing the globe after completing the spell, if you wish. A small globe about the size of a sling stone, cool to the touch, appears in your hand. (…) You can also set the globe down without shattering it. After 1 minute, if the globe hasn’t already shattered, it explodes.
How much cover can block the spell effect?
Example cases to make answering easier:
• The spell globe is suddenly placed on the table in front of you by a Mage Hand. It may explode at any time. Clever as you are, you cover it with your brass beer mug and wait 1 minute. Did you just foil a level 6 spell? What if you covered it with your cloak? Or a napkin?
• You, the mighty culinary wizard, cast the spell, put the globe into a jar filled with cream and berries, close it with a lid, and give it to a servant to hurriedly take it to the king’s feast. Did you just initiate an assassination, or send out an exquisite ice-cream dessert?
## at.algebraic topology – Outer automorphism group of Brieskorn homology sphere?
In this post, it is discussed how a Brieskorn homology sphere $$\Sigma(a_1,a_2,a_3)$$ with $$\frac{1}{a_1}+ \frac{1}{a_2}+ \frac{1}{a_3} < 1$$ is an aspherical manifold with superperfect fundamental group and non-trivial center. Would anyone know what the outer automorphism groups of their fundamental groups are? I’m looking to do semidirect products with these groups as the kernel group with another group ($$\mathbb{Z} \times \mathbb{Z}$$) as the quotient group, so I need to know the outer automorphism groups of these groups.
## world of darkness – Detecting Magick use by using the Prime Sphere
I have a Mage: the Ascension 20th Anniversary Edition game running and one of the players played a game of Mage the Ascension Second Edition before that with a different Storyteller.
In that chronicle they used the prime sphere as a universal detection spell for the use of magick (however that player is unsure if that was just a houserule).
The rulebook of M20 states inside the description of what you can do with the first dot of the Prime Sphere on page 520 (highlights added by the asker):
She may spot energetic ebbs and flows, can sense and at least try to
, and could also absorb
Quintessence into her personal Pattern.
One wiki states for a 1 dot rote of the prime sphere:
Etheric Senses: The mage can perceive Quintessential energy, and is
alerted when someone uses magic in their vicinity. source
They state Mage: The Ascension Revised Edition Pg. 179-180 as the source for this rote together with page 520 in the M20 core rulebook.
Is that a correct application of rules as written in Mage the Ascension Second Edition and would the rules of Mage: the Ascension 20th Anniversary Edition allow for a similar reading?
## equation solving – Mathematica crashing on Solve (finding points on 4 circles on a sphere). How to reformulate?
I define a small circle on a unit sphere by the direction of the plane’s normal and its distance to the sphere center (i.e. origin) like this (parametrized by the angle t):
``````sphereCircleRadiusFromOfs[ofs_] := Sqrt[1 - ofs^2];
pointOnSphereCircle[dir_, ofs_] := dir*ofs + Normalize@Cross[Cross[{0, 1, 0}, dir], dir] * sphereCircleRadiusFromOfs[ofs];
sphereCircle[dir_, ofs_, t_] := dir + RotationMatrix[t, dir].(pointOnSphereCircle[dir, ofs] - dir);
``````
Now, given 4 such circles with plane normals towards the 4 vertices of a regular tetrahedron and a distance of 1/Sqrt(2), I want to find solutions for the 4 angles such that the sum of the 4 points on the 4 circles is {0,0,0}.
I attempt to this by:
``````ofs = 1/Sqrt[2];
sol = Solve[
sphereCircle[Normalize@{+1, +1, +1}, ofs, tPPP] +
sphereCircle[Normalize@{+1, -1, -1}, ofs, tPNN] +
sphereCircle[Normalize@{-1, +1, -1}, ofs, tNPN] +
sphereCircle[Normalize@{-1, -1, +1}, ofs, tNNP] == {0, 0, 0}, {tPPP, tPNN, tNPN, tNNP}]
``````
Unfortunately, Mathematica keeps computing forever, consuming more and more memory and will eventually crash. Is there a way to reformulate the problem such that Mathematica is more successful in solving it ?
I am trying to make a moon object. I have a (mostly) spherical object model that I want to place the moon texture on top of. The moon texture is an equirectangular projection as seen below.
However, the moon drawn is rendered like this:
As you can see, not all of the north pole is rendering. Roughly 3/4 of the moon seems to have rendered, but it doesn’t seem to have any of the red transition color in the north pole.
The moon model was made in Blender by simply creating a UV Sphere and then exporting it with triangulation enabled. No changes were made. The moon obj file can be found here in text version. Maybe the normals aren’t correct? I tried using an ico sphere as well, but the results with that were even worse. Any advice for why it isn’t rendering correctly?
Here is the sphere mesh if that helps:
## geometry – Create N points that are spaced as far as possible within a D dimensional sphere
I’m not a big geometry buff, but I was wondering if you’re give a dimension $$D$$ and a number of points $$N$$ and a radius $$R$$ of a sphere in that dimension. How would you generate $$N$$ points on the edge or in the sphere such that their magnitude and cosine similarity between each point is as far as possible? The dimensional space could be something as large as 512.
## simulations – Sphere – AABOX Edge detection
I am trying to implement particle–AABOX edge collision; the images below represent two timesteps (dt) where the spherical particle is accelerated by gravity.
I have the particle center C and its radius r, along with the AABOX coordinates as B_min and B_max.
I have to check if the particle collides with any edges, and then bounce back, if a collision happens.
If I use Arvo’s AABB vs Sphere collision algorithm, then I can find if there is an intersection within the radius, but I can’t get the hitpoint and hitdistance.
And after getting the hitpoint and hitdistance, how to reflect it back? (for every timestep, I will have the current position C, and also the previous position P).
I tried to calculate it by implementing,
``````BOX_N(0) = make_vector(-1, 0, 0);// x-min;
BOX_P(0) = make_point(bb_min.x, bb_min.y + (bb_max.y - bb_min.y) / 2.0f , bb_min.z + (bb_max.z - bb_min.z) / 2.0f);
BOX_N(1) = make_vector(1, 0, 0); // x-max
BOX_P(1) = make_point(bb_max.x, bb_min.y + (bb_max.y - bb_min.y) / 2.0f , bb_min.z + (bb_max.z - bb_min.z) / 2.0f);
BOX_N(2) = make_vector(0, -1, 0);// y-min;
BOX_P(2) = make_point(bb_min.x + (bb_max.x - bb_min.x) / 2.0f, bb_min.y, bb_min.z + (bb_max.z - bb_min.z) / 2.0f);
BOX_N(3) = make_vector(0, 1, 0); // y-max
BOX_P(3) = make_point(bb_min.x + (bb_max.x - bb_min.x) / 2.0f, bb_max.y, bb_min.z + (bb_max.z - bb_min.z) / 2.0f);
BOX_N(4) = make_vector(0, 0, -1);// z-min;
BOX_P(4) = make_point(bb_min.x + (bb_max.x - bb_min.x) / 2.0f, bb_min.y + (bb_max.y - bb_min.y) / 2.0f , bb_min.z);
BOX_N(5) = make_vector(0, 0, 1); // z-max
BOX_P(5) = make_point(bb_min.x + (bb_max.x - bb_min.x) / 2.0f, bb_min.y + (bb_max.y - bb_min.y) / 2.0f , bb_max.z);
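// Note (comment added for readability): BOX_N/BOX_P above hold, for each of the
// six AABB faces (x-min, x-max, y-min, y-max, z-min, z-max), the outward face
// normal and a point at the centre of that face.
// box_collision below checks whether the sphere centre has come within one
// radius of a face, then intersects the previous->current ray with that face's
// plane (shifted inward by the radius) to obtain hitPoint and hit_distance.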
__device__ int box_collision(const Point& previous_position, const Point& current_position, const Vector& direction, const float& radius, Point& hitPoint, Vector& normal, float& hit_distance) {
Point boxPoint;
int index;
if (current_position.x < (bb_min.x + radius)) { index = 1; normal = BOX_N(0); boxPoint = BOX_P(0); boxPoint.x += radius;}
else if (current_position.x > (bb_max.x - radius)) { index = 1; normal = BOX_N(1); boxPoint = BOX_P(1); boxPoint.x -= radius;}
else if (current_position.y < (bb_min.y + radius)) { index = 2; normal = BOX_N(2); boxPoint = BOX_P(2); boxPoint.y += radius;}
else if (current_position.y > (bb_max.y - radius)) { index = 2; normal = BOX_N(3); boxPoint = BOX_P(3); boxPoint.y -= radius;}
else if (current_position.z < (bb_min.z + radius)) { index = 3; normal = BOX_N(4); boxPoint = BOX_P(4); boxPoint.z += radius;}
else if (current_position.z > (bb_max.z - radius)) { index = 3; normal = BOX_N(5); boxPoint = BOX_P(5); boxPoint.z -= radius;}
else return 0;
auto denom = vdot(direction, normal);
if (denom < 1e-6) return 0;
hit_distance = vdot(normal, boxPoint - previous_position) / denom;
if (hit_distance < 0) return 0;
hitPoint = previous_position + hit_distance * direction;
return index;
}
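// Note (comment added for readability): compute_box_collision reflects the motion
// about the face normal: R is the reflected direction with per-axis damping
// (bounce_factor) applied, current_position is moved along R by the leftover
// travel distance beyond the hit, and previous_position is rewritten so that the
// implicit (Verlet-style) velocity points along the reflected direction next step.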
inline __device__ bool compute_box_collision(Point& previous_position, Point& current_position, const float& radius) {
Point hitPoint;
Vector normal;
float hit_distance;
Vector direction = vnormalize(current_position - previous_position);
int collision = box_collision(previous_position, current_position, direction, radius, hitPoint, normal, hit_distance);
if (!collision) return true;
Vector damping{1, 1, 1};
if (collision == 1) damping.x = bounce_factor;
else if (collision == 2) damping.y = bounce_factor;
else if (collision == 3) damping.z = bounce_factor;
float relection_length = vlength(hitPoint - current_position);
Vector R = vnormalize(direction - 2.0f * vdot(normal, direction) * normal) * damping;
current_position = hitPoint + R * relection_length;
previous_position = hitPoint - R * hit_distance;
return false;
}
``````
by calling the 2nd function to check for collision,
``````inline __device__ bool compute_box_collision(Point& previous_position, Point& current_position, const float& radius)
``````
In the above code, I used the ray–plane intersection, and then used reflection of the direction to get the bounce direction, but after 500–550 timesteps the solution deviates and becomes numerically unstable (possibly the particles settle at the bottom and collide continuously). Is there something I am missing here? Or am I doing something wrong?
Note: I am simulating 10k particles, so the deviation compounds by a large amount.
## 3d – Triangle vs Sphere collision detection
I am working on collision detection for my 3D game, and I don’t really know how to approach this problem. I have an ellipsoid that I use for my player’s collision shape.
My idea is that I transform the vertex data in such a way that my ellipsoid can be represented as a sphere, which would make intersection testing easier. But the thing is, how would I know if there is a collision happening if the plane intersects with the sphere, but none of the vertices do?
I want a simple solution to this problem. No velocity vectors or predictions, just a static test to see if there is an intersection.
https://study.com/academy/answer/a-14-kg-box-slides-down-a-long-frictionless-incline-of-angle-30-degree-it-starts-from-rest-at-time-t-0-at-the-top-of-the-incline-at-a-height-of-22-m-above-ground-a-what-is-the-original-poten.html
A 14 kg box slides down a long, frictionless incline of angle 30 degrees. It starts from rest at...
Question:
A $14 \ kg$ box slides down a long, frictionless incline of angle $30^\circ$. It starts from rest at time $t = 0$ at the top of the incline at a height of $22 \ m$ above ground.
(a) What is the original potential energy of the box relative to the ground?
(b) From Newton's laws, find the distance the box travels in $1 \ s$ and its speed at $t = 1 \ s$.
(c) Find the potential energy and the kinetic energy of the box at $t = 1 \ s$.
(d) Find the kinetic energy and the speed of the box just as it reaches the bottom of the incline.
Newton's Laws of Motion:
From the information given, it is clear that the block slides down due to gravity but because there is an incline, you use trigonometry to find the component of the acceleration down the slope. If you are confused about which equation from Newton's Laws to apply, simply list down the details you have and compare with the equations.
Part(a)
The potential energy $E_{p}$ is computed as follows
\begin{align*} E_{p} &= mgh\\ &= 14(9.81)(22) = 3021.5J \end{align*}
Part(b)
Considering that down the incline, $a = g\sin \theta$ and t = 1 s,
\begin{align*} v &= u + at\\ &= 0 + gt\sin \theta\\ &= 9.81(1)\sin 30 = 4.9m/s \end{align*}
the distance at t = 1 s is calculated as follows
\begin{align*} v^2&= u^2 +2as \\ &= 0 + 2s_{1}g\sin \theta\\ s_{1}&= \frac{v^2}{2g \sin \theta}\\ &= \frac{4.9^2}{2(9.81)\sin 30^\circ}\\ &= 2.45m \end{align*}
Part(c)
The new perpendicular height is $h_{1} = h - s_{1} \sin\theta$. The potential energy $E_{p1}$ is computed as follows
\begin{align*} E_{p1} &= mgh_{1}\\ &= 14(9.81)(22 - 2.45\sin 30^\circ) = 2853.2J \end{align*}
The kinetic energy is calculated as follows
\begin{align*} E_{k1} &= \frac{1}{2}mv_{1}^2\\ &= \frac{1}{2}(14)4.9^2 = 168.1J \end{align*}
Part (d)
By the law of conservation of energy, the final kinetic energy just before the box reaches the bottom is equal to the potential energy we started with.
$$\therefore E_{kf} = 3021.5J$$
Now,
\begin{align*} E_{kf} &= \frac{1}{2}mv_{f}^2\\ v_{f}&= \sqrt\frac{2E_{kf}}{m}\\ &= \sqrt\frac{2(3021.5)}{14}\\ &=20.8m/s \end{align*}
http://physics.stackexchange.com/questions/94067/do-i-understand-measurement-of-dispersion-relation-in-a-solid-correctly
|
# Do I understand measurement of dispersion relation in a solid correctly?
I'm currently doing an introduction to solid state physics course and have a quick question about measurement of the dispersion relation of phonons in a solid:
The way I understood it is the following. One can look at the vibrations of the atoms in a solid in a quantum mechanical way and introduce phonons as quasiparticles, which are bosons. One can then derive the following laws of energy and momentum conservation for the interaction of photons with phonons:
$\hbar \omega(q) = \hbar \omega_0 \pm \hbar \omega_{Ph}$ (Energy conservation)
$\hbar q = \hbar q_0 \pm \hbar q_{Ph} + G$ (Momentum conservation)
Where the subscript Ph denotes the frequency / q-value of a phonon, and the subscript 0 denotes the frequency / q-value of the photon after the photon and phonon interacted with each other. G is a reciprocal lattice vector.
To sample the dispersion relation of the phonons in the solid one can now simply shoot photons at the solid and look at the outgoing photons.
Solving the second equation for $q_{Ph}$ and the first one for $\omega_{Ph}$ (assuming we can properly measure the frequencies and momenta of the outgoing and incoming photons) then gives us a relation between the q-values of the phonons and their frequencies, which is exactly the dispersion relation.
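As a plain illustration of that bookkeeping (a sketch with made-up numbers, not a description of the actual experiment), the phonon values follow directly from the measured photon values:
```
# Energy and momentum conservation rearranged for the phonon (1D toy example).
# hbar cancels from both equations, so it never appears.

def phonon_from_photons(omega_in, q_in, omega_out, q_out, G=0.0):
    """Return (omega_Ph, q_Ph) for a single scattering event."""
    omega_ph = omega_in - omega_out   # energy conservation (phonon created)
    q_ph = q_in - q_out - G           # momentum conservation
    return omega_ph, q_ph

# Repeating this for many events traces out points (q_Ph, omega_Ph)
# on the phonon dispersion relation.
print(phonon_from_photons(omega_in=1.00e15, q_in=2.0e7, omega_out=0.99e15, q_out=1.2e7))
```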
Did I understand this correctly or are there any flaws in my reasoning?
Cheers!
|
2015-10-10 05:28:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7637320160865784, "perplexity": 361.62585168776354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737940794.79/warc/CC-MAIN-20151001221900-00227-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-to-compare-the-ac-method-factoring-by-grouping-and-the-new-transforming-meth#431048
|
# How to compare the AC Method (factoring by grouping) and the new Transforming Method in solving quadratic equations?
May 28, 2017
Solving quadratic equations by the new Transforming Method
#### Explanation:
A good way to compare these two methods is to solve a sample quadratic equation.
The Transforming Method. Solve
$y = 16 {x}^{2} - 62 x + 21 = 0$
Transformed equation:
$y ' = {x}^{2} - 62 x + 336 = 0$ (with ac = 336)
Proceeding: find the 2 real roots of the transformed equation y', then divide them by a = 16.
Find 2 numbers whose sum is (-b = 62) and whose product is (ac = 336).
Compose factor pairs of (336) --> ...(4, 84)(6, 56). The pair (6, 56) has sum 6 + 56 = 62 = -b. Then, the 2 real roots of y' are: 6 and 56.
Back to y, the 2 real roots are:
$x 1 = \frac{6}{a} = \frac{6}{16} = \frac{3}{8}$, and $x 2 = \frac{56}{16} = \frac{7}{2}$.
May 28, 2017
Solving the quadratic equation by the AC Method (splitting the middle term)
#### Explanation:
$y = 16 {x}^{2} - 62 x + 21 = 0$
Split the middle term by proceeding as follows:
Find 2 numbers whose sum is (b = -62) and whose product is (ac = 16*21 = 336).
Compose factor pairs of (336): ...(-4, -84)(-6, -56). The pair (-6, -56) has sum (-62) and product (336).
Re-write the equation, splitting the middle term (-62x) into (-6x) and (-56x):
$y = 16 {x}^{2} - 6 x - 56 x + 21$
$y = 2 x \left(8 x - 3\right) - 7 \left(8 x - 3\right)$
$y = \left(8 x - 3\right) \left(2 x - 7\right)$
Solve the 2 binomials:
$\left(8 x - 3\right) = 0$ --> $x = \frac{3}{8}$
$\left(2 x - 7\right) = 0$ --> $x = \frac{7}{2}$
NOTE. The new Transforming Method (searchable online) avoids the lengthy factoring by grouping and the solving of the 2 binomials.
After you find the 2 numbers (-6) and (-56), take their opposites, (6) and (56), then divide them by a = 16; you immediately get the 2 real roots, with no further factoring by grouping needed. A short script illustrating this is sketched below.
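For concreteness, here is a small sketch of that procedure in code (an illustration only, restricted to the positive-factor-pair case of this example; the function name is made up):
```
from fractions import Fraction

def transforming_method(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the transformed y' = x^2 + b*x + a*c."""
    ac = a * c
    for p in range(1, ac + 1):
        if ac % p == 0:
            q = ac // p
            if p + q == -b:              # factor pair of ac with sum -b
                # divide the roots of y' by a to get the roots of y
                return Fraction(p, a), Fraction(q, a)
    return None                          # no such pair of positive factors

print(transforming_method(16, -62, 21))  # -> (Fraction(3, 8), Fraction(7, 2))
```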
|
2021-10-25 11:17:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 12, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.394897997379303, "perplexity": 3111.4371097358444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587659.72/warc/CC-MAIN-20211025092203-20211025122203-00038.warc.gz"}
|
https://www.statistics-lab.com/%E6%95%B0%E5%AD%A6%E4%BB%A3%E5%86%99%E5%BE%AE%E5%88%86%E6%96%B9%E7%A8%8B%E4%BB%A3%E5%86%99differential-equation%E4%BB%A3%E8%80%83math-2003/
|
### Math assignment writing | Differential equation assignment writing and exam help | MATH 2003
statistics-lab™ supports you throughout your studies abroad. We have built a solid reputation for differential equation assignment writing and guarantee reliable, high-quality, and original Statistics writing services. Our experts have extensive experience with differential equation assignments, so any kind of coursework related to differential equations is, needless to say, no problem.
• Statistical Inference
• Statistical Computing
• (Generalized) Linear Models
• Statistical Machine Learning
• Longitudinal Data Analysis
• Foundations of Data Science
## Math assignment writing | Differential equation exam help | Modeling Chemical Reactions
1.2.4. Modeling Chemical Reactions. One of the important uses of differential equations, at least in this book, is to model the dynamics of chemical reactions. The two elementary reactions that are of most importance here are conversion between species, denoted
$$A \stackrel{\alpha}{\rightleftarrows} B,$$
called a first order reaction, and formation and degradation of a product from two component species, denoted
$$A+B \underset{\delta}{\overset{\gamma}{\rightleftarrows}} C,$$
called a second order reaction.
The differential equations describing the first of these are
$$\frac{d a}{d t}=\beta b-\alpha a, \quad \frac{d b}{d t}=-\beta b+\alpha a$$
where $a=[A]$ and $b=[B]$, is the statement in math symbols that $B$ is created from $A$ at rate $\alpha[A]$ and $A$ is created from $B$ at rate $\beta[B]$. Of course, the total of $A$ and $B$ is a conserved quantity, since $\frac{d}{d t}(a+b)=0$.
The second of these reactions is described by the three differential equations
$$\frac{d a}{d t}=-\gamma a b+\delta c, \quad \frac{d b}{d t}=-\gamma a b+\delta c, \quad \frac{d c}{d t}=\gamma a b-\delta c$$
where $c=[C]$, which puts into math symbols the fact that $C$ is created from the combination of $A$ and $B$ at a rate that is proportional to the product $[A][B]$, called the law of mass action. Notice that the units of $\gamma$ ((time)$^{-1}$(concentration)$^{-1}$) are different from those for first order reactions ((time)$^{-1}$). The degradation of $C$ into $A$ and $B$ is a first order reaction. For this reaction there are two conserved quantities, namely $[A]+[C]$ and $[B]+[C]$.
An important example of reaction kinetics occurs in the study of epidemics, with the so-called SIR epidemic. Here $S$ represents susceptible individuals, $I$ represents infected individuals, and $R$ represents recovered or removed individuals. We represent the disease process by the reaction scheme
$$S+I \stackrel{\alpha}{\longrightarrow} 2 I, \quad I \stackrel{\beta}{\longrightarrow} R$$
This implies that a susceptible individual can become infected following contact with an infected individual, and that infected individuals recover at an exponential rate.
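As a concrete illustration (not from the text), the SIR scheme above translates, via the law of mass action, into three ODEs that can be integrated numerically; the parameter values here are made up:
```
from scipy.integrate import solve_ivp

alpha, beta = 0.002, 0.5     # made-up infection and recovery rates

def sir(t, y):
    S, I, R = y
    # dS/dt = -alpha*S*I, dI/dt = alpha*S*I - beta*I, dR/dt = beta*I
    return [-alpha * S * I, alpha * S * I - beta * I, beta * I]

sol = solve_ivp(sir, (0.0, 30.0), [990.0, 10.0, 0.0])

# S + I + R is conserved, mirroring the conserved quantities noted above.
print(sol.y[:, -1], sol.y[:, -1].sum())
```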
## Math assignment writing | Differential equation exam help | Stochastic Processes
1.3.1. Decay Processes. Now that we have the review of differential equations behind us, we must face the fact that differential equation descriptions of biological processes are, at best, highly idealized. This is because biological processes, and in fact many physical processes, are not deterministic, but noisy, or stochastic. This noise, or randomness, could be because, while the process actually is deterministic, we do not have the ability or the patience to accurately calculate the outcome of the process. For example, the flipping of a coin or the spin of a roulette wheel has a deterministic result, in that, if initial conditions were known with sufficient accuracy, an accurate calculation of the end result could be made. However, this is so impractical that it is not worth pursuing. Similarly, the motion of water vapor molecules in the air is a completely deterministic process (following Newton's Second Law, no quantum physics required), but determining the behavior of a gas by solving the governing differential equations for the position of each particle is completely out of the question.
There are other processes for which deterministic laws are not even known. This is because they are governed by quantum dynamics, having possible changes of state that cannot be described by a deterministic equation. For example, the decay of a radioactive particle and the change of conformation of a protein molecule, such as an ion channel, cannot, as far as we know, be described by a deterministic process. Similarly, the mistakes made by the reproductive machinery of a cell when duplicating its DNA (i.e., the mutations) cannot, as far as we currently know, be described by a deterministic process.
Given this reality, we are forced to come up with another way to describe interesting processes. And this is by keeping track of various statistics as time proceeds. For example, it may not be possible to exactly track the numbers of people who get the flu every year, but an understanding of how the average number changes over several years may be sufficient for health care policy makers. Similarly, with carbon dating techniques, it is not necessary to know exactly how many carbon-14 molecules there are in a particular painting at a particular time, but an estimate of an average or expected number of molecules can be sufficient to decide if the painting is genuine or a forgery.
1.3.1.1. Probability Theory. To make some progress in this way of describing things, we must define some terms. First, there must be some object that we wish to measure or quantify, also called a random variable, and the collection of all possible outcomes of this measurement is called its state space, or sample space. For example, the flip of a coin can result in it landing with head or tail up, and these two outcomes constitute the state space. Similarly, an ion channel may at any given time be either open or closed, and this also constitutes its state space. The random variable can be either discrete, taking on only integer values, or continuous, taking on real-valued numbers or vectors.
## Math assignment writing | Differential equation exam help | Several Reactions
1.3.2. Several Reactions. In the example of particle decay there was only one reaction possible. However, this is not typical as most chemical reactions involve a range of possible reactions. For example, suppose a particle (like a bacterium) may reproduce at some rate or it may die at a different rate. The question addressed here is how to do a stochastic simulation of this process.
Suppose the state $S_{j}$ can transition to the state $S_{k}$ at rate $\lambda_{k j}$. To do a stochastic simulation of this process, we must decide when the next reaction takes place and which reaction it is that takes place.
To decide when the next reaction takes place, we use the fact that the probability that the next reaction has taken place by time $t$ is 1 minus the probability that the next reaction has not taken place by time $t$. Furthermore, the probability that the reaction from state $j$ to state $k$ has not taken place by time $t$ is $\exp \left(-\lambda_{k j} t\right)$. So, the probability that no reaction has taken place by time $t$ (since these reactions are assumed to be independent) is
$$\prod_{k} \exp \left(-\lambda_{k j} t\right)=\exp \left(-\sum_{k} \lambda_{k j} t\right) .$$
It follows that the cdf for the next reaction is
$$1-\exp \left(-\sum_{k} \lambda_{k j} t\right)=1-\exp (-r t),$$
where $r=\sum_{k} \lambda_{k j}$. In other words, the next reaction is an exponential process with rate $r$.
Next, the probability that the next reaction is the $i$ th reaction $S_{j} \rightarrow S_{i}$ is
$$p_{i j}=\frac{\lambda_{i j}}{\sum_{k} \lambda_{k j}}=\frac{\lambda_{i j}}{r} .$$
To be convinced of this, apply the results of Exercise $1.26$ to the case where either the $S_{j} \rightarrow S_{i}$ reaction occurs first or another reaction occurs first.
With these facts in hand, as we did above, we pick the next reaction time increment to be
$$\delta t=-\frac{1}{r} \ln R_{1},$$
where $0<R_{1}<1$ is a uniformly distributed random number. Next, to decide which of the reactions to implement, construct the vector $x_{k}=\frac{1}{r} \sum_{i=1}^{k} \lambda_{i j}$, the scaled vector of cumulative sums of $\lambda_{i j}$. Notice that the vector $x_{k}$ is ordered with $0 \leq x_{1} \leq x_{2} \leq \cdots \leq$ $x_{N}=1$, where $N$ is the total number of states. Now, pick a second random number $R_{2}$, uniformly distributed between zero and one, and pick the next reaction to be $S_{j} \rightarrow S_{k}$ where
$$k=\min \left\{\, j : R_{2} \leq x_{j} \,\right\}.$$
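A minimal sketch of this selection step in code (an illustration with made-up rates; lam[k] stands for the rates $\lambda_{kj}$ out of the current state):
```
import numpy as np

rng = np.random.default_rng(0)

def next_reaction(lam):
    """Waiting time and index of the next reaction, given rates lam[k] = lambda_{kj}."""
    lam = np.asarray(lam, dtype=float)
    r = lam.sum()                          # total rate out of the current state
    dt = -np.log(rng.random()) / r         # delta t = -(1/r) ln R1
    x = np.cumsum(lam) / r                 # scaled cumulative sums x_k
    k = np.searchsorted(x, rng.random())   # smallest k with R2 <= x_k
    return dt, k

print(next_reaction([0.3, 1.2, 0.5]))
```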
|
2023-01-30 11:11:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8129004836082458, "perplexity": 428.7638738901473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00530.warc.gz"}
|
https://cpbrust.wordpress.com/2017/02/17/cats-vs-dogs-part-2-a-simple-cnn/
|
## Cats vs Dogs, Part 2 – A simple CNN
Now that we know what we’re dealing with, we’ll undertake some simple steps to pre-process the images and pass them through a CNN. We’ll use Tensorboard to monitor the progress of the algorithm.
A good place to start would seem to be borrowing some code. We want to build a binary classifier, an algorithm that can classify an image into one of two categories. We want it to be exclusive – the images have either a cat or a dog, but not both. To that end, straightforward googling yields the following useful piece of code, which we’ve re-uploaded to GitHub rather than filling this entire blog post.
The code takes care of pre-processing for us – it loads all of the files, splits them into a train and test set for diagnostic purposes, seemingly performs feature normalization and augments the data sets by acting with a subset of $O(2)$ transformations – left-right reflections and rotations up to 25 degrees. This seems entirely reasonable to me, so I’m leaving it alone.
The one thing that it is doing that we may want to modify later is downsizing all the images to 64 x 64. This seems reasonable to get going, but I worry that it might not be enough information for the network.
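Roughly, the pre-processing just described can be expressed with TFLearn's helper classes; the snippet below is a sketch of that idea (it assumes TFLearn's ImagePreprocessing/ImageAugmentation API and is not the borrowed script itself):
```
import tflearn
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation

# Feature-wise normalization of the 64 x 64 RGB inputs.
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()

# Augment with left-right flips and rotations of up to 25 degrees.
img_aug = ImageAugmentation()
img_aug.add_random_flip_leftright()
img_aug.add_random_rotation(max_angle=25.)

# Input layer fed by the pre-processing/augmentation pipeline.
network = tflearn.input_data(shape=[None, 64, 64, 3],
                             data_preprocessing=img_prep,
                             data_augmentation=img_aug)
```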
Next up, it's building an architecture for us. At first, I didn't know much about architectures, so I left it as is, but this architecture is the bulk of what we plan to change in the future. We ran the code in $O(5)$ hours, producing a model file. Tensorboard kept track of the training and produced some diagnostics (training and validation accuracy and loss curves) for us to look at.
We’re seemingly doing quite well, by my expectations. The accuracy appears to be in excess of 90%, which I found quite surprising. However, training set accuracy or loss don’t tell the whole story – we must examine the validation set accuracy and loss. We seem to cap out at 90% accuracy, but more importantly, the validation loss seems to decrease before increasing again. This appears to indicate overfitting. In principle we combat this to some extent with dropout, but we’ll be focusing on combating overfitting over the next few posts.
Now, we’d like to see how we do on our real test set – the one produced by Kaggle, which we don’t have the answers for. We have to load the model and produce a collection of predictions. We coded this by trial and error. We don’t need to load and pre-process the images, but in order to load the model, TFLearn appears to want me to prepare the network first. Therefore, for our test set run, we’ll use this code, which will henceforth serve as a representative piece of code for all future runs, but with the different architecture substituted. In particular, the only new work is done by the following:
```
model.load('./model_cat_dog_6_final.tflearn')   # load the trained model weights
myCount = 0
for f in test_files:                  # loop over the Kaggle test images
    myCount += 1
    if myCount % 500 == 0:            # progress indicator every 500 files
        print("On file " + str(myCount) + "...")
    try:
        # (the snippet is truncated here in this excerpt; presumably each image
        # is read, resized, and passed to model.predict)
```
|
2020-02-28 22:23:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48832154273986816, "perplexity": 622.5908250251097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147647.2/warc/CC-MAIN-20200228200903-20200228230903-00317.warc.gz"}
|
https://socratic.org/questions/how-do-you-prove-csc-cot-csc-cot-1
|
# How do you prove (csc+cot)(csc-cot)=1?
Oct 17, 2015
${\csc}^{2} \theta = 1 + {\cot}^{2} \theta$ is a trigonometric identity.
#### Explanation:
$\left[1\right] \quad \left(\csc \theta + \cot \theta\right) \left(\csc \theta - \cot \theta\right) = 1$
Property: $\left(a + b\right) \left(a - b\right) = {a}^{2} - {b}^{2}$
$\left[2\right] \quad {\csc}^{2} \theta - {\cot}^{2} \theta = 1$
$\left[3\right] \quad {\csc}^{2} \theta = 1 + {\cot}^{2} \theta$
This is a trigonometric identity.
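For completeness, the same chain can be read forwards from the Pythagorean identity (a short sketch, not part of the original answer):
\begin{align*} \left(\csc \theta + \cot \theta\right)\left(\csc \theta - \cot \theta\right) &= {\csc}^{2} \theta - {\cot}^{2} \theta\\ &= \frac{1}{{\sin}^{2} \theta} - \frac{{\cos}^{2} \theta}{{\sin}^{2} \theta}\\ &= \frac{1 - {\cos}^{2} \theta}{{\sin}^{2} \theta} = \frac{{\sin}^{2} \theta}{{\sin}^{2} \theta} = 1 \end{align*}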
|
2020-02-22 01:23:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.846415102481842, "perplexity": 5175.278584583247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145621.28/warc/CC-MAIN-20200221233354-20200222023354-00178.warc.gz"}
|